2026-02-08 02:31:17.828846 | Job console starting
2026-02-08 02:31:17.838055 | Updating git repos
2026-02-08 02:31:17.905281 | Cloning repos into workspace
2026-02-08 02:31:18.133721 | Restoring repo states
2026-02-08 02:31:18.154279 | Merging changes
2026-02-08 02:31:18.154303 | Checking out repos
2026-02-08 02:31:18.437588 | Preparing playbooks
2026-02-08 02:31:19.101104 | Running Ansible setup
2026-02-08 02:31:23.530638 | PRE-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/pre.yaml@main]
2026-02-08 02:31:24.280370 |
2026-02-08 02:31:24.280528 | PLAY [Base pre]
2026-02-08 02:31:24.297365 |
2026-02-08 02:31:24.297498 | TASK [Setup log path fact]
2026-02-08 02:31:24.327600 | orchestrator | ok
2026-02-08 02:31:24.344854 |
2026-02-08 02:31:24.344985 | TASK [set-zuul-log-path-fact : Set log path for a build]
2026-02-08 02:31:24.374362 | orchestrator | ok
2026-02-08 02:31:24.386222 |
2026-02-08 02:31:24.386327 | TASK [emit-job-header : Print job information]
2026-02-08 02:31:24.432785 | # Job Information
2026-02-08 02:31:24.433039 | Ansible Version: 2.16.14
2026-02-08 02:31:24.433098 | Job: testbed-upgrade-stable-rc-ubuntu-24.04
2026-02-08 02:31:24.433153 | Pipeline: periodic-midnight
2026-02-08 02:31:24.433189 | Executor: 521e9411259a
2026-02-08 02:31:24.433221 | Triggered by: https://github.com/osism/testbed
2026-02-08 02:31:24.433255 | Event ID: 49d0ce891e154dbba1cbfc958544a323
2026-02-08 02:31:24.443028 |
2026-02-08 02:31:24.443168 | LOOP [emit-job-header : Print node information]
2026-02-08 02:31:24.575869 | orchestrator | ok:
2026-02-08 02:31:24.576200 | orchestrator | # Node Information
2026-02-08 02:31:24.576258 | orchestrator | Inventory Hostname: orchestrator
2026-02-08 02:31:24.576297 | orchestrator | Hostname: zuul-static-regiocloud-infra-1
2026-02-08 02:31:24.576332 | orchestrator | Username: zuul-testbed03
2026-02-08 02:31:24.576363 | orchestrator | Distro: Debian 12.13
2026-02-08 02:31:24.576397 | orchestrator | Provider: static-testbed
2026-02-08 02:31:24.576428 | orchestrator | Region:
2026-02-08 02:31:24.576460 | orchestrator | Label: testbed-orchestrator
2026-02-08 02:31:24.576489 | orchestrator | Product Name: OpenStack Nova
2026-02-08 02:31:24.576518 | orchestrator | Interface IP: 81.163.193.140
2026-02-08 02:31:24.597664 |
2026-02-08 02:31:24.597795 | TASK [log-inventory : Ensure Zuul Ansible directory exists]
2026-02-08 02:31:25.066613 | orchestrator -> localhost | changed
2026-02-08 02:31:25.075469 |
2026-02-08 02:31:25.075610 | TASK [log-inventory : Copy ansible inventory to logs dir]
2026-02-08 02:31:26.142661 | orchestrator -> localhost | changed
2026-02-08 02:31:26.157914 |
2026-02-08 02:31:26.158056 | TASK [add-build-sshkey : Check to see if ssh key was already created for this build]
2026-02-08 02:31:26.434700 | orchestrator -> localhost | ok
2026-02-08 02:31:26.442121 |
2026-02-08 02:31:26.442243 | TASK [add-build-sshkey : Create a new key in workspace based on build UUID]
2026-02-08 02:31:26.462238 | orchestrator | ok
2026-02-08 02:31:26.478892 | orchestrator | included: /var/lib/zuul/builds/c9ca016b06f14a8483ea3b09e15b25d8/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/create-key-and-replace.yaml
2026-02-08 02:31:26.487960 |
2026-02-08 02:31:26.488071 | TASK [add-build-sshkey : Create Temp SSH key]
2026-02-08 02:31:28.282614 | orchestrator -> localhost | Generating public/private rsa key pair.
2026-02-08 02:31:28.282904 | orchestrator -> localhost | Your identification has been saved in /var/lib/zuul/builds/c9ca016b06f14a8483ea3b09e15b25d8/work/c9ca016b06f14a8483ea3b09e15b25d8_id_rsa
2026-02-08 02:31:28.282957 | orchestrator -> localhost | Your public key has been saved in /var/lib/zuul/builds/c9ca016b06f14a8483ea3b09e15b25d8/work/c9ca016b06f14a8483ea3b09e15b25d8_id_rsa.pub
2026-02-08 02:31:28.282987 | orchestrator -> localhost | The key fingerprint is:
2026-02-08 02:31:28.283013 | orchestrator -> localhost | SHA256:QK1eORfTV1O4/ExrqXCDDCuOrYbtrKuv4wX7qxOkhfY zuul-build-sshkey
2026-02-08 02:31:28.283036 | orchestrator -> localhost | The key's randomart image is:
2026-02-08 02:31:28.283073 | orchestrator -> localhost | +---[RSA 3072]----+
2026-02-08 02:31:28.283096 | orchestrator -> localhost | | .. . ++|
2026-02-08 02:31:28.283120 | orchestrator -> localhost | | . . o . o .|
2026-02-08 02:31:28.283142 | orchestrator -> localhost | | . .. . o o . |
2026-02-08 02:31:28.283163 | orchestrator -> localhost | |.o. ..+.. o .|
2026-02-08 02:31:28.283184 | orchestrator -> localhost | |+o. . .So+ . +o|
2026-02-08 02:31:28.283210 | orchestrator -> localhost | |..oE .. . + o +o|
2026-02-08 02:31:28.283231 | orchestrator -> localhost | | ...o + . o + |
2026-02-08 02:31:28.283251 | orchestrator -> localhost | | oo..+ o . |
2026-02-08 02:31:28.283273 | orchestrator -> localhost | |.=B**+. |
2026-02-08 02:31:28.283294 | orchestrator -> localhost | +----[SHA256]-----+
2026-02-08 02:31:28.283356 | orchestrator -> localhost | ok: Runtime: 0:00:01.296932
2026-02-08 02:31:28.291831 |
2026-02-08 02:31:28.291964 | TASK [add-build-sshkey : Remote setup ssh keys (linux)]
2026-02-08 02:31:28.329581 | orchestrator | ok
2026-02-08 02:31:28.344208 | orchestrator | included: /var/lib/zuul/builds/c9ca016b06f14a8483ea3b09e15b25d8/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/remote-linux.yaml
2026-02-08 02:31:28.353747 |
2026-02-08 02:31:28.353846 | TASK [add-build-sshkey : Remove previously added zuul-build-sshkey]
2026-02-08 02:31:28.377467 | orchestrator | skipping: Conditional result was False
2026-02-08 02:31:28.386074 |
2026-02-08 02:31:28.386177 | TASK [add-build-sshkey : Enable access via build key on all nodes]
2026-02-08 02:31:28.943454 | orchestrator | changed
2026-02-08 02:31:28.950056 |
2026-02-08 02:31:28.950166 | TASK [add-build-sshkey : Make sure user has a .ssh]
2026-02-08 02:31:29.219971 | orchestrator | ok
2026-02-08 02:31:29.229964 |
2026-02-08 02:31:29.230118 | TASK [add-build-sshkey : Install build private key as SSH key on all nodes]
2026-02-08 02:31:29.684394 | orchestrator | ok
2026-02-08 02:31:29.692659 |
2026-02-08 02:31:29.692781 | TASK [add-build-sshkey : Install build public key as SSH key on all nodes]
2026-02-08 02:31:30.098915 | orchestrator | ok
2026-02-08 02:31:30.107979 |
2026-02-08 02:31:30.108106 | TASK [add-build-sshkey : Remote setup ssh keys (windows)]
2026-02-08 02:31:30.132778 | orchestrator | skipping: Conditional result was False
2026-02-08 02:31:30.144737 |
2026-02-08 02:31:30.144879 | TASK [remove-zuul-sshkey : Remove master key from local agent]
2026-02-08 02:31:30.601511 | orchestrator -> localhost | changed
2026-02-08 02:31:30.626745 |
2026-02-08 02:31:30.626924 | TASK [add-build-sshkey : Add back temp key]
2026-02-08 02:31:30.965686 | orchestrator -> localhost | Identity added: /var/lib/zuul/builds/c9ca016b06f14a8483ea3b09e15b25d8/work/c9ca016b06f14a8483ea3b09e15b25d8_id_rsa (zuul-build-sshkey)
2026-02-08 02:31:30.965946 | orchestrator -> localhost | ok: Runtime: 0:00:00.015486
2026-02-08 02:31:30.973268 |
2026-02-08 02:31:30.973377 | TASK [add-build-sshkey : Verify we can still SSH to all nodes]
2026-02-08 02:31:31.406625 | orchestrator | ok
2026-02-08 02:31:31.414788 |
2026-02-08 02:31:31.414958 | TASK [add-build-sshkey : Verify we can still SSH to all nodes (windows)]
2026-02-08 02:31:31.449403 | orchestrator | skipping: Conditional result was False
2026-02-08 02:31:31.507929 |
2026-02-08 02:31:31.508070 | TASK [start-zuul-console : Start zuul_console daemon.]
2026-02-08 02:31:31.896628 | orchestrator | ok
2026-02-08 02:31:31.910249 |
2026-02-08 02:31:31.910378 | TASK [validate-host : Define zuul_info_dir fact]
2026-02-08 02:31:31.955970 | orchestrator | ok
2026-02-08 02:31:31.967095 |
2026-02-08 02:31:31.967237 | TASK [validate-host : Ensure Zuul Ansible directory exists]
2026-02-08 02:31:32.256125 | orchestrator -> localhost | ok
2026-02-08 02:31:32.271752 |
2026-02-08 02:31:32.271909 | TASK [validate-host : Collect information about the host]
2026-02-08 02:31:33.488650 | orchestrator | ok
2026-02-08 02:31:33.506675 |
2026-02-08 02:31:33.506811 | TASK [validate-host : Sanitize hostname]
2026-02-08 02:31:33.579952 | orchestrator | ok
2026-02-08 02:31:33.588752 |
2026-02-08 02:31:33.588906 | TASK [validate-host : Write out all ansible variables/facts known for each host]
2026-02-08 02:31:34.162384 | orchestrator -> localhost | changed
2026-02-08 02:31:34.170285 |
2026-02-08 02:31:34.170401 | TASK [validate-host : Collect information about zuul worker]
2026-02-08 02:31:34.645534 | orchestrator | ok
2026-02-08 02:31:34.656739 |
2026-02-08 02:31:34.656875 | TASK [validate-host : Write out all zuul information for each host]
2026-02-08 02:31:35.248378 | orchestrator -> localhost | changed
2026-02-08 02:31:35.259512 |
2026-02-08 02:31:35.259647 | TASK [prepare-workspace-log : Start zuul_console daemon.]
2026-02-08 02:31:35.533887 | orchestrator | ok
2026-02-08 02:31:35.540800 |
2026-02-08 02:31:35.540928 | TASK [prepare-workspace-log : Synchronize src repos to workspace directory.]
2026-02-08 02:32:00.042714 | orchestrator | changed:
2026-02-08 02:32:00.043116 | orchestrator | .d..t...... src/
2026-02-08 02:32:00.043168 | orchestrator | .d..t...... src/github.com/
2026-02-08 02:32:00.043201 | orchestrator | .d..t...... src/github.com/osism/
2026-02-08 02:32:00.043230 | orchestrator | .d..t...... src/github.com/osism/ansible-collection-commons/
2026-02-08 02:32:00.043258 | orchestrator | RedHat.yml
2026-02-08 02:32:00.058959 | orchestrator | .L..t...... src/github.com/osism/ansible-collection-commons/roles/repository/tasks/CentOS.yml -> RedHat.yml
2026-02-08 02:32:00.058976 | orchestrator | RedHat.yml
2026-02-08 02:32:00.059029 | orchestrator | = 1.53.0"...
2026-02-08 02:32:11.545413 | orchestrator | - Finding hashicorp/local versions matching ">= 2.2.0"...
2026-02-08 02:32:11.565649 | orchestrator | - Finding latest version of hashicorp/null...
2026-02-08 02:32:12.076399 | orchestrator | - Installing terraform-provider-openstack/openstack v3.4.0...
2026-02-08 02:32:13.690256 | orchestrator | - Installed terraform-provider-openstack/openstack v3.4.0 (signed, key ID 4F80527A391BEFD2)
2026-02-08 02:32:14.063664 | orchestrator | - Installing hashicorp/local v2.6.2...
2026-02-08 02:32:14.699021 | orchestrator | - Installed hashicorp/local v2.6.2 (signed, key ID 0C0AF313E5FD9F80)
2026-02-08 02:32:14.758089 | orchestrator | - Installing hashicorp/null v3.2.4...
2026-02-08 02:32:15.213843 | orchestrator | - Installed hashicorp/null v3.2.4 (signed, key ID 0C0AF313E5FD9F80)
2026-02-08 02:32:15.213923 | orchestrator |
2026-02-08 02:32:15.213931 | orchestrator | Providers are signed by their developers.
2026-02-08 02:32:15.213936 | orchestrator | If you'd like to know more about provider signing, you can read about it here:
2026-02-08 02:32:15.213948 | orchestrator | https://opentofu.org/docs/cli/plugins/signing/
2026-02-08 02:32:15.213989 | orchestrator |
2026-02-08 02:32:15.213995 | orchestrator | OpenTofu has created a lock file .terraform.lock.hcl to record the provider
2026-02-08 02:32:15.214002 | orchestrator | selections it made above. Include this file in your version control repository
2026-02-08 02:32:15.214057 | orchestrator | so that OpenTofu can guarantee to make the same selections by default when
2026-02-08 02:32:15.214080 | orchestrator | you run "tofu init" in the future.
2026-02-08 02:32:15.214698 | orchestrator |
2026-02-08 02:32:15.214769 | orchestrator | OpenTofu has been successfully initialized!
2026-02-08 02:32:15.214796 | orchestrator |
2026-02-08 02:32:15.214803 | orchestrator | You may now begin working with OpenTofu. Try running "tofu plan" to see
2026-02-08 02:32:15.214810 | orchestrator | any changes that are required for your infrastructure. All OpenTofu commands
2026-02-08 02:32:15.214817 | orchestrator | should now work.
2026-02-08 02:32:15.214824 | orchestrator |
2026-02-08 02:32:15.214831 | orchestrator | If you ever set or change modules or backend configuration for OpenTofu,
2026-02-08 02:32:15.214838 | orchestrator | rerun this command to reinitialize your working directory. If you forget, other
2026-02-08 02:32:15.214856 | orchestrator | commands will detect it and remind you to do so if necessary.
2026-02-08 02:32:15.393753 | orchestrator | Created and switched to workspace "ci"!
2026-02-08 02:32:15.393831 | orchestrator |
2026-02-08 02:32:15.393846 | orchestrator | You're now on a new, empty workspace. Workspaces isolate their state,
2026-02-08 02:32:15.393853 | orchestrator | so if you run "tofu plan" OpenTofu will not see any existing state
2026-02-08 02:32:15.393863 | orchestrator | for this configuration.
2026-02-08 02:32:15.566738 | orchestrator | ci.auto.tfvars
2026-02-08 02:32:15.571829 | orchestrator | default_custom.tf
2026-02-08 02:32:16.629758 | orchestrator | data.openstack_networking_network_v2.public: Reading...
2026-02-08 02:32:17.187564 | orchestrator | data.openstack_networking_network_v2.public: Read complete after 0s [id=e6be7364-bfd8-4de7-8120-8f41c69a139a]
2026-02-08 02:32:17.437549 | orchestrator |
2026-02-08 02:32:17.437619 | orchestrator | OpenTofu used the selected providers to generate the following execution
2026-02-08 02:32:17.437627 | orchestrator | plan. Resource actions are indicated with the following symbols:
2026-02-08 02:32:17.437652 | orchestrator | + create
2026-02-08 02:32:17.437667 | orchestrator | <= read (data resources)
2026-02-08 02:32:17.437680 | orchestrator |
2026-02-08 02:32:17.437684 | orchestrator | OpenTofu will perform the following actions:
2026-02-08 02:32:17.437791 | orchestrator |
2026-02-08 02:32:17.437805 | orchestrator | # data.openstack_images_image_v2.image will be read during apply
2026-02-08 02:32:17.437809 | orchestrator | # (config refers to values not yet known)
2026-02-08 02:32:17.437814 | orchestrator | <= data "openstack_images_image_v2" "image" {
2026-02-08 02:32:17.437818 | orchestrator | + checksum = (known after apply)
2026-02-08 02:32:17.437822 | orchestrator | + created_at = (known after apply)
2026-02-08 02:32:17.437826 | orchestrator | + file = (known after apply)
2026-02-08 02:32:17.437830 | orchestrator | + id = (known after apply)
2026-02-08 02:32:17.437852 | orchestrator | + metadata = (known after apply)
2026-02-08 02:32:17.437857 | orchestrator | + min_disk_gb = (known after apply)
2026-02-08 02:32:17.437861 | orchestrator | + min_ram_mb = (known after apply)
2026-02-08 02:32:17.437865 | orchestrator | + most_recent = true
2026-02-08 02:32:17.437869 | orchestrator | + name = (known after apply)
2026-02-08 02:32:17.437872 | orchestrator | + protected = (known after apply)
2026-02-08 02:32:17.437876 | orchestrator | + region = (known after apply)
2026-02-08 02:32:17.437883 | orchestrator | + schema = (known after apply)
2026-02-08 02:32:17.437887 | orchestrator | + size_bytes = (known after apply)
2026-02-08 02:32:17.437891 | orchestrator | + tags = (known after apply)
2026-02-08 02:32:17.437895 | orchestrator | + updated_at = (known after apply)
2026-02-08 02:32:17.437899 | orchestrator | }
2026-02-08 02:32:17.437978 | orchestrator |
2026-02-08 02:32:17.437989 | orchestrator | # data.openstack_images_image_v2.image_node will be read during apply
2026-02-08 02:32:17.437994 | orchestrator | # (config refers to values not yet known)
2026-02-08 02:32:17.437998 | orchestrator | <= data "openstack_images_image_v2" "image_node" {
2026-02-08 02:32:17.438002 | orchestrator | + checksum = (known after apply)
2026-02-08 02:32:17.438006 | orchestrator | + created_at = (known after apply)
2026-02-08 02:32:17.438009 | orchestrator | + file = (known after apply)
2026-02-08 02:32:17.438034 | orchestrator | + id = (known after apply)
2026-02-08 02:32:17.438037 | orchestrator | + metadata = (known after apply)
2026-02-08 02:32:17.438041 | orchestrator | + min_disk_gb = (known after apply)
2026-02-08 02:32:17.438045 | orchestrator | + min_ram_mb = (known after apply)
2026-02-08 02:32:17.438049 | orchestrator | + most_recent = true
2026-02-08 02:32:17.438054 | orchestrator | + name = (known after apply)
2026-02-08 02:32:17.438058 | orchestrator | + protected = (known after apply)
2026-02-08 02:32:17.438061 | orchestrator | + region = (known after apply)
2026-02-08 02:32:17.438065 | orchestrator | + schema = (known after apply)
2026-02-08 02:32:17.438124 | orchestrator | + size_bytes = (known after apply)
2026-02-08 02:32:17.438129 | orchestrator | + tags = (known after apply)
2026-02-08 02:32:17.438133 | orchestrator | + updated_at = (known after apply)
2026-02-08 02:32:17.438137 | orchestrator | }
2026-02-08 02:32:17.438221 | orchestrator |
2026-02-08 02:32:17.438233 | orchestrator | # local_file.MANAGER_ADDRESS will be created
2026-02-08 02:32:17.438238 | orchestrator | + resource "local_file" "MANAGER_ADDRESS" {
2026-02-08 02:32:17.438242 | orchestrator | + content = (known after apply)
2026-02-08 02:32:17.438247 | orchestrator | + content_base64sha256 = (known after apply)
2026-02-08 02:32:17.438251 | orchestrator | + content_base64sha512 = (known after apply)
2026-02-08 02:32:17.438255 | orchestrator | + content_md5 = (known after apply)
2026-02-08 02:32:17.438259 | orchestrator | + content_sha1 = (known after apply)
2026-02-08 02:32:17.438263 | orchestrator | + content_sha256 = (known after apply)
2026-02-08 02:32:17.438267 | orchestrator | + content_sha512 = (known after apply)
2026-02-08 02:32:17.438270 | orchestrator | + directory_permission = "0777"
2026-02-08 02:32:17.438274 | orchestrator | + file_permission = "0644"
2026-02-08 02:32:17.438292 | orchestrator | + filename = ".MANAGER_ADDRESS.ci"
2026-02-08 02:32:17.438296 | orchestrator | + id = (known after apply)
2026-02-08 02:32:17.438300 | orchestrator | }
2026-02-08 02:32:17.438369 | orchestrator |
2026-02-08 02:32:17.438381 | orchestrator | # local_file.id_rsa_pub will be created
2026-02-08 02:32:17.438385 | orchestrator | + resource "local_file" "id_rsa_pub" {
2026-02-08 02:32:17.438389 | orchestrator | + content = (known after apply)
2026-02-08 02:32:17.438393 | orchestrator | + content_base64sha256 = (known after apply)
2026-02-08 02:32:17.438397 | orchestrator | + content_base64sha512 = (known after apply)
2026-02-08 02:32:17.438400 | orchestrator | + content_md5 = (known after apply)
2026-02-08 02:32:17.438404 | orchestrator | + content_sha1 = (known after apply)
2026-02-08 02:32:17.438408 | orchestrator | + content_sha256 = (known after apply)
2026-02-08 02:32:17.438412 | orchestrator | + content_sha512 = (known after apply)
2026-02-08 02:32:17.438416 | orchestrator | + directory_permission = "0777"
2026-02-08 02:32:17.438420 | orchestrator | + file_permission = "0644"
2026-02-08 02:32:17.438430 | orchestrator | + filename = ".id_rsa.ci.pub"
2026-02-08 02:32:17.438434 | orchestrator | + id = (known after apply)
2026-02-08 02:32:17.438438 | orchestrator | }
2026-02-08 02:32:17.438507 | orchestrator |
2026-02-08 02:32:17.438524 | orchestrator | # local_file.inventory will be created
2026-02-08 02:32:17.438528 | orchestrator | + resource "local_file" "inventory" {
2026-02-08 02:32:17.438532 | orchestrator | + content = (known after apply)
2026-02-08 02:32:17.438536 | orchestrator | + content_base64sha256 = (known after apply)
2026-02-08 02:32:17.438540 | orchestrator | + content_base64sha512 = (known after apply)
2026-02-08 02:32:17.438543 | orchestrator | + content_md5 = (known after apply)
2026-02-08 02:32:17.438547 | orchestrator | + content_sha1 = (known after apply)
2026-02-08 02:32:17.438552 | orchestrator | + content_sha256 = (known after apply)
2026-02-08 02:32:17.438555 | orchestrator | + content_sha512 = (known after apply)
2026-02-08 02:32:17.438559 | orchestrator | + directory_permission = "0777"
2026-02-08 02:32:17.438563 | orchestrator | + file_permission = "0644"
2026-02-08 02:32:17.438567 | orchestrator | + filename = "inventory.ci"
2026-02-08 02:32:17.438571 | orchestrator | + id = (known after apply)
2026-02-08 02:32:17.438575 | orchestrator | }
2026-02-08 02:32:17.438641 | orchestrator |
2026-02-08 02:32:17.438652 | orchestrator | # local_sensitive_file.id_rsa will be created
2026-02-08 02:32:17.438657 | orchestrator | + resource "local_sensitive_file" "id_rsa" {
2026-02-08 02:32:17.438661 | orchestrator | + content = (sensitive value)
2026-02-08 02:32:17.438665 | orchestrator | + content_base64sha256 = (known after apply)
2026-02-08 02:32:17.438669 | orchestrator | + content_base64sha512 = (known after apply)
2026-02-08 02:32:17.438672 | orchestrator | + content_md5 = (known after apply)
2026-02-08 02:32:17.438676 | orchestrator | + content_sha1 = (known after apply)
2026-02-08 02:32:17.438680 | orchestrator | + content_sha256 = (known after apply)
2026-02-08 02:32:17.438684 | orchestrator | + content_sha512 = (known after apply)
2026-02-08 02:32:17.438688 | orchestrator | + directory_permission = "0700"
2026-02-08 02:32:17.438691 | orchestrator | + file_permission = "0600"
2026-02-08 02:32:17.438695 | orchestrator | + filename = ".id_rsa.ci"
2026-02-08 02:32:17.438699 | orchestrator | + id = (known after apply)
2026-02-08 02:32:17.438703 | orchestrator | }
2026-02-08 02:32:17.438725 | orchestrator |
2026-02-08 02:32:17.438736 | orchestrator | # null_resource.node_semaphore will be created
2026-02-08 02:32:17.438740 | orchestrator | + resource "null_resource" "node_semaphore" {
2026-02-08 02:32:17.438744 | orchestrator | + id = (known after apply)
2026-02-08 02:32:17.438748 | orchestrator | }
2026-02-08 02:32:17.438816 | orchestrator |
2026-02-08 02:32:17.438827 | orchestrator | # openstack_blockstorage_volume_v3.manager_base_volume[0] will be created
2026-02-08 02:32:17.438832 | orchestrator | + resource "openstack_blockstorage_volume_v3" "manager_base_volume" {
2026-02-08 02:32:17.438836 | orchestrator | + attachment = (known after apply)
2026-02-08 02:32:17.438840 | orchestrator | + availability_zone = "nova"
2026-02-08 02:32:17.438843 | orchestrator | + id = (known after apply)
2026-02-08 02:32:17.438847 | orchestrator | + image_id = (known after apply)
2026-02-08 02:32:17.438851 | orchestrator | + metadata = (known after apply)
2026-02-08 02:32:17.438855 | orchestrator | + name = "testbed-volume-manager-base"
2026-02-08 02:32:17.438859 | orchestrator | + region = (known after apply)
2026-02-08 02:32:17.438862 | orchestrator | + size = 80
2026-02-08 02:32:17.438866 | orchestrator | + volume_retype_policy = "never"
2026-02-08 02:32:17.438870 | orchestrator | + volume_type = "ssd"
2026-02-08 02:32:17.438874 | orchestrator | }
2026-02-08 02:32:17.438937 | orchestrator |
2026-02-08 02:32:17.438949 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[0] will be created
2026-02-08 02:32:17.438953 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-02-08 02:32:17.438957 | orchestrator | + attachment = (known after apply)
2026-02-08 02:32:17.438960 | orchestrator | + availability_zone = "nova"
2026-02-08 02:32:17.438964 | orchestrator | + id = (known after apply)
2026-02-08 02:32:17.438972 | orchestrator | + image_id = (known after apply)
2026-02-08 02:32:17.438976 | orchestrator | + metadata = (known after apply)
2026-02-08 02:32:17.438980 | orchestrator | + name = "testbed-volume-0-node-base"
2026-02-08 02:32:17.438983 | orchestrator | + region = (known after apply)
2026-02-08 02:32:17.438987 | orchestrator | + size = 80
2026-02-08 02:32:17.438991 | orchestrator | + volume_retype_policy = "never"
2026-02-08 02:32:17.438995 | orchestrator | + volume_type = "ssd"
2026-02-08 02:32:17.438999 | orchestrator | }
2026-02-08 02:32:17.439059 | orchestrator |
2026-02-08 02:32:17.439070 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[1] will be created
2026-02-08 02:32:17.439074 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-02-08 02:32:17.439078 | orchestrator | + attachment = (known after apply)
2026-02-08 02:32:17.439082 | orchestrator | + availability_zone = "nova"
2026-02-08 02:32:17.439086 | orchestrator | + id = (known after apply)
2026-02-08 02:32:17.439089 | orchestrator | + image_id = (known after apply)
2026-02-08 02:32:17.439093 | orchestrator | + metadata = (known after apply)
2026-02-08 02:32:17.439097 | orchestrator | + name = "testbed-volume-1-node-base"
2026-02-08 02:32:17.439101 | orchestrator | + region = (known after apply)
2026-02-08 02:32:17.439105 | orchestrator | + size = 80
2026-02-08 02:32:17.439108 | orchestrator | + volume_retype_policy = "never"
2026-02-08 02:32:17.439112 | orchestrator | + volume_type = "ssd"
2026-02-08 02:32:17.439116 | orchestrator | }
2026-02-08 02:32:17.439176 | orchestrator |
2026-02-08 02:32:17.439187 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[2] will be created
2026-02-08 02:32:17.439191 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-02-08 02:32:17.439195 | orchestrator | + attachment = (known after apply)
2026-02-08 02:32:17.439199 | orchestrator | + availability_zone = "nova"
2026-02-08 02:32:17.439202 | orchestrator | + id = (known after apply)
2026-02-08 02:32:17.439206 | orchestrator | + image_id = (known after apply)
2026-02-08 02:32:17.439210 | orchestrator | + metadata = (known after apply)
2026-02-08 02:32:17.439214 | orchestrator | + name = "testbed-volume-2-node-base"
2026-02-08 02:32:17.439218 | orchestrator | + region = (known after apply)
2026-02-08 02:32:17.439221 | orchestrator | + size = 80
2026-02-08 02:32:17.439225 | orchestrator | + volume_retype_policy = "never"
2026-02-08 02:32:17.439229 | orchestrator | + volume_type = "ssd"
2026-02-08 02:32:17.439233 | orchestrator | }
2026-02-08 02:32:17.439307 | orchestrator |
2026-02-08 02:32:17.439319 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[3] will be created
2026-02-08 02:32:17.439323 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-02-08 02:32:17.439327 | orchestrator | + attachment = (known after apply)
2026-02-08 02:32:17.439331 | orchestrator | + availability_zone = "nova"
2026-02-08 02:32:17.439335 | orchestrator | + id = (known after apply)
2026-02-08 02:32:17.439338 | orchestrator | + image_id = (known after apply)
2026-02-08 02:32:17.439342 | orchestrator | + metadata = (known after apply)
2026-02-08 02:32:17.439349 | orchestrator | + name = "testbed-volume-3-node-base"
2026-02-08 02:32:17.439353 | orchestrator | + region = (known after apply)
2026-02-08 02:32:17.439357 | orchestrator | + size = 80
2026-02-08 02:32:17.439361 | orchestrator | + volume_retype_policy = "never"
2026-02-08 02:32:17.439365 | orchestrator | + volume_type = "ssd"
2026-02-08 02:32:17.439368 | orchestrator | }
2026-02-08 02:32:17.439427 | orchestrator |
2026-02-08 02:32:17.439439 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[4] will be created
2026-02-08 02:32:17.439443 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-02-08 02:32:17.439447 | orchestrator | + attachment = (known after apply)
2026-02-08 02:32:17.439451 | orchestrator | + availability_zone = "nova"
2026-02-08 02:32:17.439455 | orchestrator | + id = (known after apply)
2026-02-08 02:32:17.439463 | orchestrator | + image_id = (known after apply)
2026-02-08 02:32:17.439467 | orchestrator | + metadata = (known after apply)
2026-02-08 02:32:17.439471 | orchestrator | + name = "testbed-volume-4-node-base"
2026-02-08 02:32:17.439475 | orchestrator | + region = (known after apply)
2026-02-08 02:32:17.439479 | orchestrator | + size = 80
2026-02-08 02:32:17.439482 | orchestrator | + volume_retype_policy = "never"
2026-02-08 02:32:17.439486 | orchestrator | + volume_type = "ssd"
2026-02-08 02:32:17.439490 | orchestrator | }
2026-02-08 02:32:17.439550 | orchestrator |
2026-02-08 02:32:17.439561 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[5] will be created
2026-02-08 02:32:17.439565 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-02-08 02:32:17.439569 | orchestrator | + attachment = (known after apply)
2026-02-08 02:32:17.439573 | orchestrator | + availability_zone = "nova"
2026-02-08 02:32:17.439577 | orchestrator | + id = (known after apply)
2026-02-08 02:32:17.439581 | orchestrator | + image_id = (known after apply)
2026-02-08 02:32:17.439585 | orchestrator | + metadata = (known after apply)
2026-02-08 02:32:17.439588 | orchestrator | + name = "testbed-volume-5-node-base"
2026-02-08 02:32:17.439592 | orchestrator | + region = (known after apply)
2026-02-08 02:32:17.439596 | orchestrator | + size = 80
2026-02-08 02:32:17.439600 | orchestrator | + volume_retype_policy = "never"
2026-02-08 02:32:17.439604 | orchestrator | + volume_type = "ssd"
2026-02-08 02:32:17.439607 | orchestrator | }
2026-02-08 02:32:17.439664 | orchestrator |
2026-02-08 02:32:17.439675 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[0] will be created
2026-02-08 02:32:17.439680 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-02-08 02:32:17.439684 | orchestrator | + attachment = (known after apply)
2026-02-08 02:32:17.439688 | orchestrator | + availability_zone = "nova"
2026-02-08 02:32:17.439692 | orchestrator | + id = (known after apply)
2026-02-08 02:32:17.439695 | orchestrator | + metadata = (known after apply)
2026-02-08 02:32:17.439700 | orchestrator | + name = "testbed-volume-0-node-3"
2026-02-08 02:32:17.439704 | orchestrator | + region = (known after apply)
2026-02-08 02:32:17.439707 | orchestrator | + size = 20
2026-02-08 02:32:17.439711 | orchestrator | + volume_retype_policy = "never"
2026-02-08 02:32:17.439715 | orchestrator | + volume_type = "ssd"
2026-02-08 02:32:17.439719 | orchestrator | }
2026-02-08 02:32:17.439775 | orchestrator |
2026-02-08 02:32:17.439786 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[1] will be created
2026-02-08 02:32:17.439790 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-02-08 02:32:17.439794 | orchestrator | + attachment = (known after apply)
2026-02-08 02:32:17.439798 | orchestrator | + availability_zone = "nova"
2026-02-08 02:32:17.439801 | orchestrator | + id = (known after apply)
2026-02-08 02:32:17.439805 | orchestrator | + metadata = (known after apply)
2026-02-08 02:32:17.439809 | orchestrator | + name = "testbed-volume-1-node-4"
2026-02-08 02:32:17.439813 | orchestrator | + region = (known after apply)
2026-02-08 02:32:17.439817 | orchestrator | + size = 20
2026-02-08 02:32:17.439820 | orchestrator | + volume_retype_policy = "never"
2026-02-08 02:32:17.439824 | orchestrator | + volume_type = "ssd"
2026-02-08 02:32:17.439828 | orchestrator | }
2026-02-08 02:32:17.439887 | orchestrator |
2026-02-08 02:32:17.439897 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[2] will be created
2026-02-08 02:32:17.439902 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-02-08 02:32:17.439906 | orchestrator | + attachment = (known after apply)
2026-02-08 02:32:17.439909 | orchestrator | + availability_zone = "nova"
2026-02-08 02:32:17.439913 | orchestrator | + id = (known after apply)
2026-02-08 02:32:17.439917 | orchestrator | + metadata = (known after apply)
2026-02-08 02:32:17.439921 | orchestrator | + name = "testbed-volume-2-node-5"
2026-02-08 02:32:17.439925 | orchestrator | + region = (known after apply)
2026-02-08 02:32:17.439932 | orchestrator | + size = 20
2026-02-08 02:32:17.439936 | orchestrator | + volume_retype_policy = "never"
2026-02-08 02:32:17.439940 | orchestrator | + volume_type = "ssd"
2026-02-08 02:32:17.439944 | orchestrator | }
2026-02-08 02:32:17.439998 | orchestrator |
2026-02-08 02:32:17.440009 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[3] will be created
2026-02-08 02:32:17.440014 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-02-08 02:32:17.440018 | orchestrator | + attachment = (known after apply)
2026-02-08 02:32:17.440021 | orchestrator | + availability_zone = "nova"
2026-02-08 02:32:17.440025 | orchestrator | + id = (known after apply)
2026-02-08 02:32:17.440029 | orchestrator | + metadata = (known after apply)
2026-02-08 02:32:17.440033 | orchestrator | + name = "testbed-volume-3-node-3"
2026-02-08 02:32:17.440037 | orchestrator | + region = (known after apply)
2026-02-08 02:32:17.440040 | orchestrator | + size = 20
2026-02-08 02:32:17.440044 | orchestrator | + volume_retype_policy = "never"
2026-02-08 02:32:17.440048 | orchestrator | + volume_type = "ssd"
2026-02-08 02:32:17.440052 | orchestrator | }
2026-02-08 02:32:17.440106 | orchestrator |
2026-02-08 02:32:17.440117 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[4] will be created
2026-02-08 02:32:17.440121 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-02-08 02:32:17.440125 | orchestrator | + attachment = (known after apply)
2026-02-08 02:32:17.440129 | orchestrator | + availability_zone = "nova"
2026-02-08 02:32:17.440133 | orchestrator | + id = (known after apply)
2026-02-08 02:32:17.440137 | orchestrator | + metadata = (known after apply)
2026-02-08 02:32:17.440141 | orchestrator | + name = "testbed-volume-4-node-4"
2026-02-08 02:32:17.440145 | orchestrator | + region = (known after apply)
2026-02-08 02:32:17.440152 | orchestrator | + size = 20
2026-02-08 02:32:17.440156 | orchestrator | + volume_retype_policy = "never"
2026-02-08 02:32:17.440159 | orchestrator | + volume_type = "ssd"
2026-02-08 02:32:17.440163 | orchestrator | }
2026-02-08 02:32:17.440221 | orchestrator |
2026-02-08 02:32:17.440232 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[5] will be created
2026-02-08 02:32:17.440236 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-02-08 02:32:17.440240 | orchestrator | + attachment = (known after apply)
2026-02-08 02:32:17.440244 | orchestrator | + availability_zone = "nova"
2026-02-08 02:32:17.440248 | orchestrator | + id = (known after apply)
2026-02-08 02:32:17.440252 | orchestrator | + metadata = (known after apply)
2026-02-08 02:32:17.440256 | orchestrator | + name = "testbed-volume-5-node-5"
2026-02-08 02:32:17.440259 | orchestrator | + region = (known after apply)
2026-02-08 02:32:17.440263 | orchestrator | + size = 20
2026-02-08 02:32:17.440267 | orchestrator | + volume_retype_policy = "never"
2026-02-08 02:32:17.440271 | orchestrator | + volume_type = "ssd"
2026-02-08 02:32:17.440275 | orchestrator | }
2026-02-08 02:32:17.440449 | orchestrator |
2026-02-08 02:32:17.440471 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[6] will be created
2026-02-08 02:32:17.440477 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-02-08 02:32:17.440483 | orchestrator | + attachment = (known after apply)
2026-02-08 02:32:17.440489 | orchestrator | + availability_zone = "nova"
2026-02-08 02:32:17.440494 | orchestrator | + id = (known after apply)
2026-02-08 02:32:17.440500 | orchestrator | + metadata = (known after apply)
2026-02-08 02:32:17.440506 | orchestrator | + name = "testbed-volume-6-node-3"
2026-02-08 02:32:17.440511 | orchestrator | + region = (known after apply)
2026-02-08 02:32:17.440517 | orchestrator | + size = 20
2026-02-08 02:32:17.440522 | orchestrator | + volume_retype_policy = "never"
2026-02-08 02:32:17.440527 | orchestrator | + volume_type = "ssd"
2026-02-08 02:32:17.440533 | orchestrator | }
2026-02-08 02:32:17.440606 | orchestrator |
2026-02-08 02:32:17.440618 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[7] will be created
2026-02-08 02:32:17.440623 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-02-08 02:32:17.440636 | orchestrator | + attachment = (known after apply)
2026-02-08 02:32:17.440640 | orchestrator | + availability_zone = "nova"
2026-02-08 02:32:17.440644 | orchestrator | + id = (known after apply)
2026-02-08 02:32:17.440647 | orchestrator | + metadata = (known after apply)
2026-02-08 02:32:17.440651 | orchestrator | + name = "testbed-volume-7-node-4"
2026-02-08 02:32:17.440655 | orchestrator | + region = (known after apply)
2026-02-08 02:32:17.440659 | orchestrator | + size = 20
2026-02-08 02:32:17.440663 | orchestrator | + volume_retype_policy = "never"
2026-02-08 02:32:17.440667 | orchestrator | + volume_type = "ssd"
2026-02-08 02:32:17.440671 | orchestrator | }
2026-02-08 02:32:17.440731 | orchestrator |
2026-02-08 02:32:17.440742 | orchestrator | #
openstack_blockstorage_volume_v3.node_volume[8] will be created 2026-02-08 02:32:17.440746 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" { 2026-02-08 02:32:17.440750 | orchestrator | + attachment = (known after apply) 2026-02-08 02:32:17.440754 | orchestrator | + availability_zone = "nova" 2026-02-08 02:32:17.440758 | orchestrator | + id = (known after apply) 2026-02-08 02:32:17.440761 | orchestrator | + metadata = (known after apply) 2026-02-08 02:32:17.440765 | orchestrator | + name = "testbed-volume-8-node-5" 2026-02-08 02:32:17.440769 | orchestrator | + region = (known after apply) 2026-02-08 02:32:17.440773 | orchestrator | + size = 20 2026-02-08 02:32:17.440776 | orchestrator | + volume_retype_policy = "never" 2026-02-08 02:32:17.440780 | orchestrator | + volume_type = "ssd" 2026-02-08 02:32:17.440784 | orchestrator | } 2026-02-08 02:32:17.440974 | orchestrator | 2026-02-08 02:32:17.440986 | orchestrator | # openstack_compute_instance_v2.manager_server will be created 2026-02-08 02:32:17.440990 | orchestrator | + resource "openstack_compute_instance_v2" "manager_server" { 2026-02-08 02:32:17.440994 | orchestrator | + access_ip_v4 = (known after apply) 2026-02-08 02:32:17.440998 | orchestrator | + access_ip_v6 = (known after apply) 2026-02-08 02:32:17.441002 | orchestrator | + all_metadata = (known after apply) 2026-02-08 02:32:17.441006 | orchestrator | + all_tags = (known after apply) 2026-02-08 02:32:17.441010 | orchestrator | + availability_zone = "nova" 2026-02-08 02:32:17.441013 | orchestrator | + config_drive = true 2026-02-08 02:32:17.441017 | orchestrator | + created = (known after apply) 2026-02-08 02:32:17.441021 | orchestrator | + flavor_id = (known after apply) 2026-02-08 02:32:17.441025 | orchestrator | + flavor_name = "OSISM-4V-16" 2026-02-08 02:32:17.441028 | orchestrator | + force_delete = false 2026-02-08 02:32:17.441032 | orchestrator | + hypervisor_hostname = (known after apply) 2026-02-08 02:32:17.441036 | 
orchestrator | + id = (known after apply) 2026-02-08 02:32:17.441040 | orchestrator | + image_id = (known after apply) 2026-02-08 02:32:17.441043 | orchestrator | + image_name = (known after apply) 2026-02-08 02:32:17.441047 | orchestrator | + key_pair = "testbed" 2026-02-08 02:32:17.441051 | orchestrator | + name = "testbed-manager" 2026-02-08 02:32:17.441055 | orchestrator | + power_state = "active" 2026-02-08 02:32:17.441059 | orchestrator | + region = (known after apply) 2026-02-08 02:32:17.441062 | orchestrator | + security_groups = (known after apply) 2026-02-08 02:32:17.441066 | orchestrator | + stop_before_destroy = false 2026-02-08 02:32:17.441070 | orchestrator | + updated = (known after apply) 2026-02-08 02:32:17.441074 | orchestrator | + user_data = (sensitive value) 2026-02-08 02:32:17.441077 | orchestrator | 2026-02-08 02:32:17.441082 | orchestrator | + block_device { 2026-02-08 02:32:17.441085 | orchestrator | + boot_index = 0 2026-02-08 02:32:17.441091 | orchestrator | + delete_on_termination = false 2026-02-08 02:32:17.441103 | orchestrator | + destination_type = "volume" 2026-02-08 02:32:17.441109 | orchestrator | + multiattach = false 2026-02-08 02:32:17.441114 | orchestrator | + source_type = "volume" 2026-02-08 02:32:17.441121 | orchestrator | + uuid = (known after apply) 2026-02-08 02:32:17.441133 | orchestrator | } 2026-02-08 02:32:17.441139 | orchestrator | 2026-02-08 02:32:17.441145 | orchestrator | + network { 2026-02-08 02:32:17.441151 | orchestrator | + access_network = false 2026-02-08 02:32:17.441157 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-02-08 02:32:17.441164 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-02-08 02:32:17.441170 | orchestrator | + mac = (known after apply) 2026-02-08 02:32:17.441177 | orchestrator | + name = (known after apply) 2026-02-08 02:32:17.441183 | orchestrator | + port = (known after apply) 2026-02-08 02:32:17.441189 | orchestrator | + uuid = (known after apply) 2026-02-08 
02:32:17.441195 | orchestrator | } 2026-02-08 02:32:17.441202 | orchestrator | } 2026-02-08 02:32:17.441541 | orchestrator | 2026-02-08 02:32:17.441570 | orchestrator | # openstack_compute_instance_v2.node_server[0] will be created 2026-02-08 02:32:17.441575 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-02-08 02:32:17.441579 | orchestrator | + access_ip_v4 = (known after apply) 2026-02-08 02:32:17.441583 | orchestrator | + access_ip_v6 = (known after apply) 2026-02-08 02:32:17.441587 | orchestrator | + all_metadata = (known after apply) 2026-02-08 02:32:17.441591 | orchestrator | + all_tags = (known after apply) 2026-02-08 02:32:17.441595 | orchestrator | + availability_zone = "nova" 2026-02-08 02:32:17.441599 | orchestrator | + config_drive = true 2026-02-08 02:32:17.441603 | orchestrator | + created = (known after apply) 2026-02-08 02:32:17.441607 | orchestrator | + flavor_id = (known after apply) 2026-02-08 02:32:17.441611 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-02-08 02:32:17.441615 | orchestrator | + force_delete = false 2026-02-08 02:32:17.441619 | orchestrator | + hypervisor_hostname = (known after apply) 2026-02-08 02:32:17.441622 | orchestrator | + id = (known after apply) 2026-02-08 02:32:17.441626 | orchestrator | + image_id = (known after apply) 2026-02-08 02:32:17.441630 | orchestrator | + image_name = (known after apply) 2026-02-08 02:32:17.441634 | orchestrator | + key_pair = "testbed" 2026-02-08 02:32:17.441638 | orchestrator | + name = "testbed-node-0" 2026-02-08 02:32:17.441641 | orchestrator | + power_state = "active" 2026-02-08 02:32:17.441645 | orchestrator | + region = (known after apply) 2026-02-08 02:32:17.441649 | orchestrator | + security_groups = (known after apply) 2026-02-08 02:32:17.441652 | orchestrator | + stop_before_destroy = false 2026-02-08 02:32:17.441656 | orchestrator | + updated = (known after apply) 2026-02-08 02:32:17.441660 | orchestrator | + user_data = 
"ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-02-08 02:32:17.441664 | orchestrator | 2026-02-08 02:32:17.441668 | orchestrator | + block_device { 2026-02-08 02:32:17.441672 | orchestrator | + boot_index = 0 2026-02-08 02:32:17.441676 | orchestrator | + delete_on_termination = false 2026-02-08 02:32:17.441680 | orchestrator | + destination_type = "volume" 2026-02-08 02:32:17.441683 | orchestrator | + multiattach = false 2026-02-08 02:32:17.441687 | orchestrator | + source_type = "volume" 2026-02-08 02:32:17.441691 | orchestrator | + uuid = (known after apply) 2026-02-08 02:32:17.441694 | orchestrator | } 2026-02-08 02:32:17.441698 | orchestrator | 2026-02-08 02:32:17.441702 | orchestrator | + network { 2026-02-08 02:32:17.441706 | orchestrator | + access_network = false 2026-02-08 02:32:17.441709 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-02-08 02:32:17.441714 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-02-08 02:32:17.441718 | orchestrator | + mac = (known after apply) 2026-02-08 02:32:17.441721 | orchestrator | + name = (known after apply) 2026-02-08 02:32:17.441725 | orchestrator | + port = (known after apply) 2026-02-08 02:32:17.441729 | orchestrator | + uuid = (known after apply) 2026-02-08 02:32:17.441733 | orchestrator | } 2026-02-08 02:32:17.441736 | orchestrator | } 2026-02-08 02:32:17.441927 | orchestrator | 2026-02-08 02:32:17.441938 | orchestrator | # openstack_compute_instance_v2.node_server[1] will be created 2026-02-08 02:32:17.441943 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-02-08 02:32:17.441946 | orchestrator | + access_ip_v4 = (known after apply) 2026-02-08 02:32:17.441957 | orchestrator | + access_ip_v6 = (known after apply) 2026-02-08 02:32:17.441961 | orchestrator | + all_metadata = (known after apply) 2026-02-08 02:32:17.441965 | orchestrator | + all_tags = (known after apply) 2026-02-08 02:32:17.441968 | orchestrator | + availability_zone = "nova" 2026-02-08 02:32:17.441972 
| orchestrator | + config_drive = true 2026-02-08 02:32:17.441976 | orchestrator | + created = (known after apply) 2026-02-08 02:32:17.441980 | orchestrator | + flavor_id = (known after apply) 2026-02-08 02:32:17.441983 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-02-08 02:32:17.441987 | orchestrator | + force_delete = false 2026-02-08 02:32:17.441991 | orchestrator | + hypervisor_hostname = (known after apply) 2026-02-08 02:32:17.441995 | orchestrator | + id = (known after apply) 2026-02-08 02:32:17.441998 | orchestrator | + image_id = (known after apply) 2026-02-08 02:32:17.442002 | orchestrator | + image_name = (known after apply) 2026-02-08 02:32:17.442006 | orchestrator | + key_pair = "testbed" 2026-02-08 02:32:17.442010 | orchestrator | + name = "testbed-node-1" 2026-02-08 02:32:17.442035 | orchestrator | + power_state = "active" 2026-02-08 02:32:17.442039 | orchestrator | + region = (known after apply) 2026-02-08 02:32:17.442043 | orchestrator | + security_groups = (known after apply) 2026-02-08 02:32:17.442047 | orchestrator | + stop_before_destroy = false 2026-02-08 02:32:17.442051 | orchestrator | + updated = (known after apply) 2026-02-08 02:32:17.442054 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-02-08 02:32:17.442058 | orchestrator | 2026-02-08 02:32:17.442062 | orchestrator | + block_device { 2026-02-08 02:32:17.442066 | orchestrator | + boot_index = 0 2026-02-08 02:32:17.442069 | orchestrator | + delete_on_termination = false 2026-02-08 02:32:17.442073 | orchestrator | + destination_type = "volume" 2026-02-08 02:32:17.442077 | orchestrator | + multiattach = false 2026-02-08 02:32:17.442081 | orchestrator | + source_type = "volume" 2026-02-08 02:32:17.442084 | orchestrator | + uuid = (known after apply) 2026-02-08 02:32:17.442088 | orchestrator | } 2026-02-08 02:32:17.442092 | orchestrator | 2026-02-08 02:32:17.442096 | orchestrator | + network { 2026-02-08 02:32:17.442099 | orchestrator | + access_network = 
false 2026-02-08 02:32:17.442103 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-02-08 02:32:17.442107 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-02-08 02:32:17.442111 | orchestrator | + mac = (known after apply) 2026-02-08 02:32:17.442115 | orchestrator | + name = (known after apply) 2026-02-08 02:32:17.442118 | orchestrator | + port = (known after apply) 2026-02-08 02:32:17.442122 | orchestrator | + uuid = (known after apply) 2026-02-08 02:32:17.442126 | orchestrator | } 2026-02-08 02:32:17.442130 | orchestrator | } 2026-02-08 02:32:17.442325 | orchestrator | 2026-02-08 02:32:17.442338 | orchestrator | # openstack_compute_instance_v2.node_server[2] will be created 2026-02-08 02:32:17.442342 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-02-08 02:32:17.442346 | orchestrator | + access_ip_v4 = (known after apply) 2026-02-08 02:32:17.442350 | orchestrator | + access_ip_v6 = (known after apply) 2026-02-08 02:32:17.442355 | orchestrator | + all_metadata = (known after apply) 2026-02-08 02:32:17.442359 | orchestrator | + all_tags = (known after apply) 2026-02-08 02:32:17.442368 | orchestrator | + availability_zone = "nova" 2026-02-08 02:32:17.442372 | orchestrator | + config_drive = true 2026-02-08 02:32:17.442376 | orchestrator | + created = (known after apply) 2026-02-08 02:32:17.442379 | orchestrator | + flavor_id = (known after apply) 2026-02-08 02:32:17.442383 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-02-08 02:32:17.442387 | orchestrator | + force_delete = false 2026-02-08 02:32:17.442391 | orchestrator | + hypervisor_hostname = (known after apply) 2026-02-08 02:32:17.442394 | orchestrator | + id = (known after apply) 2026-02-08 02:32:17.442398 | orchestrator | + image_id = (known after apply) 2026-02-08 02:32:17.442409 | orchestrator | + image_name = (known after apply) 2026-02-08 02:32:17.442413 | orchestrator | + key_pair = "testbed" 2026-02-08 02:32:17.442417 | orchestrator | + name = 
"testbed-node-2" 2026-02-08 02:32:17.442420 | orchestrator | + power_state = "active" 2026-02-08 02:32:17.442424 | orchestrator | + region = (known after apply) 2026-02-08 02:32:17.442428 | orchestrator | + security_groups = (known after apply) 2026-02-08 02:32:17.442432 | orchestrator | + stop_before_destroy = false 2026-02-08 02:32:17.442435 | orchestrator | + updated = (known after apply) 2026-02-08 02:32:17.442439 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-02-08 02:32:17.442443 | orchestrator | 2026-02-08 02:32:17.442447 | orchestrator | + block_device { 2026-02-08 02:32:17.442450 | orchestrator | + boot_index = 0 2026-02-08 02:32:17.442454 | orchestrator | + delete_on_termination = false 2026-02-08 02:32:17.442458 | orchestrator | + destination_type = "volume" 2026-02-08 02:32:17.442462 | orchestrator | + multiattach = false 2026-02-08 02:32:17.442466 | orchestrator | + source_type = "volume" 2026-02-08 02:32:17.442469 | orchestrator | + uuid = (known after apply) 2026-02-08 02:32:17.442473 | orchestrator | } 2026-02-08 02:32:17.442477 | orchestrator | 2026-02-08 02:32:17.442481 | orchestrator | + network { 2026-02-08 02:32:17.442484 | orchestrator | + access_network = false 2026-02-08 02:32:17.442488 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-02-08 02:32:17.442492 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-02-08 02:32:17.442496 | orchestrator | + mac = (known after apply) 2026-02-08 02:32:17.442500 | orchestrator | + name = (known after apply) 2026-02-08 02:32:17.442503 | orchestrator | + port = (known after apply) 2026-02-08 02:32:17.442507 | orchestrator | + uuid = (known after apply) 2026-02-08 02:32:17.442511 | orchestrator | } 2026-02-08 02:32:17.442515 | orchestrator | } 2026-02-08 02:32:17.442699 | orchestrator | 2026-02-08 02:32:17.442710 | orchestrator | # openstack_compute_instance_v2.node_server[3] will be created 2026-02-08 02:32:17.442715 | orchestrator | + resource 
"openstack_compute_instance_v2" "node_server" { 2026-02-08 02:32:17.442718 | orchestrator | + access_ip_v4 = (known after apply) 2026-02-08 02:32:17.442722 | orchestrator | + access_ip_v6 = (known after apply) 2026-02-08 02:32:17.442726 | orchestrator | + all_metadata = (known after apply) 2026-02-08 02:32:17.442730 | orchestrator | + all_tags = (known after apply) 2026-02-08 02:32:17.442733 | orchestrator | + availability_zone = "nova" 2026-02-08 02:32:17.442737 | orchestrator | + config_drive = true 2026-02-08 02:32:17.442741 | orchestrator | + created = (known after apply) 2026-02-08 02:32:17.442745 | orchestrator | + flavor_id = (known after apply) 2026-02-08 02:32:17.442748 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-02-08 02:32:17.442752 | orchestrator | + force_delete = false 2026-02-08 02:32:17.442756 | orchestrator | + hypervisor_hostname = (known after apply) 2026-02-08 02:32:17.442760 | orchestrator | + id = (known after apply) 2026-02-08 02:32:17.442764 | orchestrator | + image_id = (known after apply) 2026-02-08 02:32:17.442767 | orchestrator | + image_name = (known after apply) 2026-02-08 02:32:17.442771 | orchestrator | + key_pair = "testbed" 2026-02-08 02:32:17.442775 | orchestrator | + name = "testbed-node-3" 2026-02-08 02:32:17.442779 | orchestrator | + power_state = "active" 2026-02-08 02:32:17.442782 | orchestrator | + region = (known after apply) 2026-02-08 02:32:17.442786 | orchestrator | + security_groups = (known after apply) 2026-02-08 02:32:17.442790 | orchestrator | + stop_before_destroy = false 2026-02-08 02:32:17.442794 | orchestrator | + updated = (known after apply) 2026-02-08 02:32:17.442798 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-02-08 02:32:17.442801 | orchestrator | 2026-02-08 02:32:17.442805 | orchestrator | + block_device { 2026-02-08 02:32:17.442811 | orchestrator | + boot_index = 0 2026-02-08 02:32:17.442815 | orchestrator | + delete_on_termination = false 2026-02-08 
02:32:17.442819 | orchestrator | + destination_type = "volume" 2026-02-08 02:32:17.442826 | orchestrator | + multiattach = false 2026-02-08 02:32:17.442830 | orchestrator | + source_type = "volume" 2026-02-08 02:32:17.442834 | orchestrator | + uuid = (known after apply) 2026-02-08 02:32:17.442838 | orchestrator | } 2026-02-08 02:32:17.442842 | orchestrator | 2026-02-08 02:32:17.442845 | orchestrator | + network { 2026-02-08 02:32:17.442849 | orchestrator | + access_network = false 2026-02-08 02:32:17.442853 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-02-08 02:32:17.442856 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-02-08 02:32:17.442860 | orchestrator | + mac = (known after apply) 2026-02-08 02:32:17.442864 | orchestrator | + name = (known after apply) 2026-02-08 02:32:17.442868 | orchestrator | + port = (known after apply) 2026-02-08 02:32:17.442871 | orchestrator | + uuid = (known after apply) 2026-02-08 02:32:17.442875 | orchestrator | } 2026-02-08 02:32:17.442879 | orchestrator | } 2026-02-08 02:32:17.443060 | orchestrator | 2026-02-08 02:32:17.443071 | orchestrator | # openstack_compute_instance_v2.node_server[4] will be created 2026-02-08 02:32:17.443075 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-02-08 02:32:17.443079 | orchestrator | + access_ip_v4 = (known after apply) 2026-02-08 02:32:17.443083 | orchestrator | + access_ip_v6 = (known after apply) 2026-02-08 02:32:17.443086 | orchestrator | + all_metadata = (known after apply) 2026-02-08 02:32:17.443090 | orchestrator | + all_tags = (known after apply) 2026-02-08 02:32:17.443094 | orchestrator | + availability_zone = "nova" 2026-02-08 02:32:17.443098 | orchestrator | + config_drive = true 2026-02-08 02:32:17.443102 | orchestrator | + created = (known after apply) 2026-02-08 02:32:17.443105 | orchestrator | + flavor_id = (known after apply) 2026-02-08 02:32:17.443109 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-02-08 02:32:17.443113 | 
orchestrator | + force_delete = false 2026-02-08 02:32:17.443117 | orchestrator | + hypervisor_hostname = (known after apply) 2026-02-08 02:32:17.443120 | orchestrator | + id = (known after apply) 2026-02-08 02:32:17.443124 | orchestrator | + image_id = (known after apply) 2026-02-08 02:32:17.443128 | orchestrator | + image_name = (known after apply) 2026-02-08 02:32:17.443132 | orchestrator | + key_pair = "testbed" 2026-02-08 02:32:17.443135 | orchestrator | + name = "testbed-node-4" 2026-02-08 02:32:17.443139 | orchestrator | + power_state = "active" 2026-02-08 02:32:17.443143 | orchestrator | + region = (known after apply) 2026-02-08 02:32:17.443146 | orchestrator | + security_groups = (known after apply) 2026-02-08 02:32:17.443150 | orchestrator | + stop_before_destroy = false 2026-02-08 02:32:17.443154 | orchestrator | + updated = (known after apply) 2026-02-08 02:32:17.443158 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-02-08 02:32:17.443162 | orchestrator | 2026-02-08 02:32:17.443165 | orchestrator | + block_device { 2026-02-08 02:32:17.443169 | orchestrator | + boot_index = 0 2026-02-08 02:32:17.443173 | orchestrator | + delete_on_termination = false 2026-02-08 02:32:17.443177 | orchestrator | + destination_type = "volume" 2026-02-08 02:32:17.443180 | orchestrator | + multiattach = false 2026-02-08 02:32:17.443184 | orchestrator | + source_type = "volume" 2026-02-08 02:32:17.443188 | orchestrator | + uuid = (known after apply) 2026-02-08 02:32:17.443192 | orchestrator | } 2026-02-08 02:32:17.443196 | orchestrator | 2026-02-08 02:32:17.443199 | orchestrator | + network { 2026-02-08 02:32:17.443203 | orchestrator | + access_network = false 2026-02-08 02:32:17.443207 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-02-08 02:32:17.443211 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-02-08 02:32:17.443214 | orchestrator | + mac = (known after apply) 2026-02-08 02:32:17.443218 | orchestrator | + name = (known 
after apply) 2026-02-08 02:32:17.443222 | orchestrator | + port = (known after apply) 2026-02-08 02:32:17.443226 | orchestrator | + uuid = (known after apply) 2026-02-08 02:32:17.443229 | orchestrator | } 2026-02-08 02:32:17.443233 | orchestrator | } 2026-02-08 02:32:17.443445 | orchestrator | 2026-02-08 02:32:17.443458 | orchestrator | # openstack_compute_instance_v2.node_server[5] will be created 2026-02-08 02:32:17.443462 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-02-08 02:32:17.443466 | orchestrator | + access_ip_v4 = (known after apply) 2026-02-08 02:32:17.443470 | orchestrator | + access_ip_v6 = (known after apply) 2026-02-08 02:32:17.443474 | orchestrator | + all_metadata = (known after apply) 2026-02-08 02:32:17.443478 | orchestrator | + all_tags = (known after apply) 2026-02-08 02:32:17.443481 | orchestrator | + availability_zone = "nova" 2026-02-08 02:32:17.443485 | orchestrator | + config_drive = true 2026-02-08 02:32:17.443489 | orchestrator | + created = (known after apply) 2026-02-08 02:32:17.443493 | orchestrator | + flavor_id = (known after apply) 2026-02-08 02:32:17.443497 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-02-08 02:32:17.443500 | orchestrator | + force_delete = false 2026-02-08 02:32:17.443507 | orchestrator | + hypervisor_hostname = (known after apply) 2026-02-08 02:32:17.443511 | orchestrator | + id = (known after apply) 2026-02-08 02:32:17.443515 | orchestrator | + image_id = (known after apply) 2026-02-08 02:32:17.443519 | orchestrator | + image_name = (known after apply) 2026-02-08 02:32:17.443522 | orchestrator | + key_pair = "testbed" 2026-02-08 02:32:17.443526 | orchestrator | + name = "testbed-node-5" 2026-02-08 02:32:17.443530 | orchestrator | + power_state = "active" 2026-02-08 02:32:17.443534 | orchestrator | + region = (known after apply) 2026-02-08 02:32:17.443537 | orchestrator | + security_groups = (known after apply) 2026-02-08 02:32:17.443541 | orchestrator | + 
stop_before_destroy = false 2026-02-08 02:32:17.443545 | orchestrator | + updated = (known after apply) 2026-02-08 02:32:17.443549 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-02-08 02:32:17.443553 | orchestrator | 2026-02-08 02:32:17.443556 | orchestrator | + block_device { 2026-02-08 02:32:17.443560 | orchestrator | + boot_index = 0 2026-02-08 02:32:17.443564 | orchestrator | + delete_on_termination = false 2026-02-08 02:32:17.443568 | orchestrator | + destination_type = "volume" 2026-02-08 02:32:17.443571 | orchestrator | + multiattach = false 2026-02-08 02:32:17.443575 | orchestrator | + source_type = "volume" 2026-02-08 02:32:17.443579 | orchestrator | + uuid = (known after apply) 2026-02-08 02:32:17.443583 | orchestrator | } 2026-02-08 02:32:17.443586 | orchestrator | 2026-02-08 02:32:17.443590 | orchestrator | + network { 2026-02-08 02:32:17.443594 | orchestrator | + access_network = false 2026-02-08 02:32:17.443598 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-02-08 02:32:17.443601 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-02-08 02:32:17.443605 | orchestrator | + mac = (known after apply) 2026-02-08 02:32:17.443609 | orchestrator | + name = (known after apply) 2026-02-08 02:32:17.443613 | orchestrator | + port = (known after apply) 2026-02-08 02:32:17.443617 | orchestrator | + uuid = (known after apply) 2026-02-08 02:32:17.443620 | orchestrator | } 2026-02-08 02:32:17.443624 | orchestrator | } 2026-02-08 02:32:17.443670 | orchestrator | 2026-02-08 02:32:17.443681 | orchestrator | # openstack_compute_keypair_v2.key will be created 2026-02-08 02:32:17.443686 | orchestrator | + resource "openstack_compute_keypair_v2" "key" { 2026-02-08 02:32:17.443689 | orchestrator | + fingerprint = (known after apply) 2026-02-08 02:32:17.443693 | orchestrator | + id = (known after apply) 2026-02-08 02:32:17.443697 | orchestrator | + name = "testbed" 2026-02-08 02:32:17.443701 | orchestrator | + private_key = 
(sensitive value) 2026-02-08 02:32:17.443705 | orchestrator | + public_key = (known after apply) 2026-02-08 02:32:17.443708 | orchestrator | + region = (known after apply) 2026-02-08 02:32:17.443712 | orchestrator | + user_id = (known after apply) 2026-02-08 02:32:17.443716 | orchestrator | } 2026-02-08 02:32:17.443753 | orchestrator | 2026-02-08 02:32:17.443764 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[0] will be created 2026-02-08 02:32:17.443768 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2026-02-08 02:32:17.443777 | orchestrator | + device = (known after apply) 2026-02-08 02:32:17.443781 | orchestrator | + id = (known after apply) 2026-02-08 02:32:17.443785 | orchestrator | + instance_id = (known after apply) 2026-02-08 02:32:17.443788 | orchestrator | + region = (known after apply) 2026-02-08 02:32:17.443792 | orchestrator | + volume_id = (known after apply) 2026-02-08 02:32:17.443796 | orchestrator | } 2026-02-08 02:32:17.443831 | orchestrator | 2026-02-08 02:32:17.443841 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[1] will be created 2026-02-08 02:32:17.443846 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2026-02-08 02:32:17.443850 | orchestrator | + device = (known after apply) 2026-02-08 02:32:17.443853 | orchestrator | + id = (known after apply) 2026-02-08 02:32:17.443857 | orchestrator | + instance_id = (known after apply) 2026-02-08 02:32:17.443861 | orchestrator | + region = (known after apply) 2026-02-08 02:32:17.443865 | orchestrator | + volume_id = (known after apply) 2026-02-08 02:32:17.443868 | orchestrator | } 2026-02-08 02:32:17.443906 | orchestrator | 2026-02-08 02:32:17.443916 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[2] will be created 2026-02-08 02:32:17.443921 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" 
{ 2026-02-08 02:32:17.443924 | orchestrator | + device = (known after apply) 2026-02-08 02:32:17.443928 | orchestrator | + id = (known after apply) 2026-02-08 02:32:17.443932 | orchestrator | + instance_id = (known after apply) 2026-02-08 02:32:17.443936 | orchestrator | + region = (known after apply) 2026-02-08 02:32:17.443940 | orchestrator | + volume_id = (known after apply) 2026-02-08 02:32:17.443943 | orchestrator | } 2026-02-08 02:32:17.443980 | orchestrator | 2026-02-08 02:32:17.443990 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[3] will be created 2026-02-08 02:32:17.443994 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2026-02-08 02:32:17.443998 | orchestrator | + device = (known after apply) 2026-02-08 02:32:17.444002 | orchestrator | + id = (known after apply) 2026-02-08 02:32:17.444006 | orchestrator | + instance_id = (known after apply) 2026-02-08 02:32:17.444010 | orchestrator | + region = (known after apply) 2026-02-08 02:32:17.444014 | orchestrator | + volume_id = (known after apply) 2026-02-08 02:32:17.444017 | orchestrator | } 2026-02-08 02:32:17.444049 | orchestrator | 2026-02-08 02:32:17.444059 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[4] will be created 2026-02-08 02:32:17.444063 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2026-02-08 02:32:17.444067 | orchestrator | + device = (known after apply) 2026-02-08 02:32:17.444071 | orchestrator | + id = (known after apply) 2026-02-08 02:32:17.444075 | orchestrator | + instance_id = (known after apply) 2026-02-08 02:32:17.444081 | orchestrator | + region = (known after apply) 2026-02-08 02:32:17.444085 | orchestrator | + volume_id = (known after apply) 2026-02-08 02:32:17.444089 | orchestrator | } 2026-02-08 02:32:17.444124 | orchestrator | 2026-02-08 02:32:17.444134 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[5] 
will be created 2026-02-08 02:32:17.444138 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2026-02-08 02:32:17.444142 | orchestrator | + device = (known after apply) 2026-02-08 02:32:17.444146 | orchestrator | + id = (known after apply) 2026-02-08 02:32:17.444150 | orchestrator | + instance_id = (known after apply) 2026-02-08 02:32:17.444154 | orchestrator | + region = (known after apply) 2026-02-08 02:32:17.444157 | orchestrator | + volume_id = (known after apply) 2026-02-08 02:32:17.444161 | orchestrator | } 2026-02-08 02:32:17.444194 | orchestrator | 2026-02-08 02:32:17.444205 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[6] will be created 2026-02-08 02:32:17.444209 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2026-02-08 02:32:17.444213 | orchestrator | + device = (known after apply) 2026-02-08 02:32:17.444217 | orchestrator | + id = (known after apply) 2026-02-08 02:32:17.444221 | orchestrator | + instance_id = (known after apply) 2026-02-08 02:32:17.444224 | orchestrator | + region = (known after apply) 2026-02-08 02:32:17.444232 | orchestrator | + volume_id = (known after apply) 2026-02-08 02:32:17.444236 | orchestrator | } 2026-02-08 02:32:17.444271 | orchestrator | 2026-02-08 02:32:17.444292 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[7] will be created 2026-02-08 02:32:17.444297 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2026-02-08 02:32:17.444301 | orchestrator | + device = (known after apply) 2026-02-08 02:32:17.444304 | orchestrator | + id = (known after apply) 2026-02-08 02:32:17.444308 | orchestrator | + instance_id = (known after apply) 2026-02-08 02:32:17.444312 | orchestrator | + region = (known after apply) 2026-02-08 02:32:17.444316 | orchestrator | + volume_id = (known after apply) 2026-02-08 02:32:17.444320 | orchestrator | } 2026-02-08 
  # openstack_compute_volume_attach_v2.node_volume_attachment[8] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_networking_floatingip_associate_v2.manager_floating_ip_association will be created
  + resource "openstack_networking_floatingip_associate_v2" "manager_floating_ip_association" {
      + fixed_ip    = (known after apply)
      + floating_ip = (known after apply)
      + id          = (known after apply)
      + port_id     = (known after apply)
      + region      = (known after apply)
    }

  # openstack_networking_floatingip_v2.manager_floating_ip will be created
  + resource "openstack_networking_floatingip_v2" "manager_floating_ip" {
      + address    = (known after apply)
      + all_tags   = (known after apply)
      + dns_domain = (known after apply)
      + dns_name   = (known after apply)
      + fixed_ip   = (known after apply)
      + id         = (known after apply)
      + pool       = "public"
      + port_id    = (known after apply)
      + region     = (known after apply)
      + subnet_id  = (known after apply)
      + tenant_id  = (known after apply)
    }

  # openstack_networking_network_v2.net_management will be created
  + resource "openstack_networking_network_v2" "net_management" {
      + admin_state_up          = (known after apply)
      + all_tags                = (known after apply)
      + availability_zone_hints = [
          + "nova",
        ]
      + dns_domain              = (known after apply)
      + external                = (known after apply)
      + id                      = (known after apply)
      + mtu                     = (known after apply)
      + name                    = "net-testbed-management"
      + port_security_enabled   = (known after apply)
      + qos_policy_id           = (known after apply)
      + region                  = (known after apply)
      + shared                  = (known after apply)
      + tenant_id               = (known after apply)
      + transparent_vlan        = (known after apply)

      + segments (known after apply)
    }

  # openstack_networking_port_v2.manager_port_management will be created
  + resource "openstack_networking_port_v2" "manager_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.5"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[0] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.10"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[1] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.11"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[2] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.12"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[3] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.13"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[4] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.14"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[5] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.15"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_router_interface_v2.router_interface will be created
  + resource "openstack_networking_router_interface_v2" "router_interface" {
      + force_destroy = false
      + id            = (known after apply)
      + port_id       = (known after apply)
      + region        = (known after apply)
      + router_id     = (known after apply)
      + subnet_id     = (known after apply)
    }

  # openstack_networking_router_v2.router will be created
  + resource "openstack_networking_router_v2" "router" {
      + admin_state_up          = (known after apply)
      + all_tags                = (known after apply)
      + availability_zone_hints = [
          + "nova",
        ]
      + distributed             = (known after apply)
      + enable_snat             = (known after apply)
      + external_network_id     = "e6be7364-bfd8-4de7-8120-8f41c69a139a"
      + external_qos_policy_id  = (known after apply)
      + id                      = (known after apply)
      + name                    = "testbed"
      + region                  = (known after apply)
      + tenant_id               = (known after apply)

      + external_fixed_ip (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule1 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule1" {
      + description             = "ssh"
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + port_range_max          = 22
      + port_range_min          = 22
      + protocol                = "tcp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }
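The "ssh" ingress rule shown in the plan above maps onto a short HCL block. A sketch under the assumption that the rule is attached to the `security_group_management` group also created in this plan (the reference name is that assumption):

```hcl
# Hypothetical sketch matching the "ssh" rule attributes in the plan.
# security_group_id is (known after apply) because it references another
# resource that does not exist yet at plan time.
resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule1" {
  description       = "ssh"
  direction         = "ingress"
  ethertype         = "IPv4"
  protocol          = "tcp"
  port_range_min    = 22
  port_range_max    = 22
  remote_ip_prefix  = "0.0.0.0/0"
  security_group_id = openstack_networking_secgroup_v2.security_group_management.id
}
```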
  # openstack_networking_secgroup_rule_v2.security_group_management_rule2 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule2" {
      + description             = "wireguard"
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + port_range_max          = 51820
      + port_range_min          = 51820
      + protocol                = "udp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule3 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule3" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "tcp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "192.168.16.0/20"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule4 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule4" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "udp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "192.168.16.0/20"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule5 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule5" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "icmp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_node_rule1 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule1" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "tcp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_node_rule2 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule2" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "udp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_node_rule3 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule3" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "icmp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }
02:32:17.448355 | orchestrator | # openstack_networking_secgroup_rule_v2.security_group_rule_vrrp will be created 2026-02-08 02:32:17.448359 | orchestrator | + resource "openstack_networking_secgroup_rule_v2" "security_group_rule_vrrp" { 2026-02-08 02:32:17.448364 | orchestrator | + description = "vrrp" 2026-02-08 02:32:17.448368 | orchestrator | + direction = "ingress" 2026-02-08 02:32:17.448371 | orchestrator | + ethertype = "IPv4" 2026-02-08 02:32:17.448375 | orchestrator | + id = (known after apply) 2026-02-08 02:32:17.448379 | orchestrator | + protocol = "112" 2026-02-08 02:32:17.448383 | orchestrator | + region = (known after apply) 2026-02-08 02:32:17.448386 | orchestrator | + remote_address_group_id = (known after apply) 2026-02-08 02:32:17.448390 | orchestrator | + remote_group_id = (known after apply) 2026-02-08 02:32:17.448394 | orchestrator | + remote_ip_prefix = "0.0.0.0/0" 2026-02-08 02:32:17.448398 | orchestrator | + security_group_id = (known after apply) 2026-02-08 02:32:17.448401 | orchestrator | + tenant_id = (known after apply) 2026-02-08 02:32:17.448405 | orchestrator | } 2026-02-08 02:32:17.448463 | orchestrator | 2026-02-08 02:32:17.448479 | orchestrator | # openstack_networking_secgroup_v2.security_group_management will be created 2026-02-08 02:32:17.448485 | orchestrator | + resource "openstack_networking_secgroup_v2" "security_group_management" { 2026-02-08 02:32:17.448491 | orchestrator | + all_tags = (known after apply) 2026-02-08 02:32:17.448497 | orchestrator | + description = "management security group" 2026-02-08 02:32:17.448502 | orchestrator | + id = (known after apply) 2026-02-08 02:32:17.448507 | orchestrator | + name = "testbed-management" 2026-02-08 02:32:17.448513 | orchestrator | + region = (known after apply) 2026-02-08 02:32:17.448518 | orchestrator | + stateful = (known after apply) 2026-02-08 02:32:17.448524 | orchestrator | + tenant_id = (known after apply) 2026-02-08 02:32:17.448529 | orchestrator | } 2026-02-08 
02:32:17.448603 | orchestrator | 2026-02-08 02:32:17.448621 | orchestrator | # openstack_networking_secgroup_v2.security_group_node will be created 2026-02-08 02:32:17.448627 | orchestrator | + resource "openstack_networking_secgroup_v2" "security_group_node" { 2026-02-08 02:32:17.448633 | orchestrator | + all_tags = (known after apply) 2026-02-08 02:32:17.448639 | orchestrator | + description = "node security group" 2026-02-08 02:32:17.448645 | orchestrator | + id = (known after apply) 2026-02-08 02:32:17.448652 | orchestrator | + name = "testbed-node" 2026-02-08 02:32:17.448658 | orchestrator | + region = (known after apply) 2026-02-08 02:32:17.448664 | orchestrator | + stateful = (known after apply) 2026-02-08 02:32:17.448670 | orchestrator | + tenant_id = (known after apply) 2026-02-08 02:32:17.448676 | orchestrator | } 2026-02-08 02:32:17.448810 | orchestrator | 2026-02-08 02:32:17.448822 | orchestrator | # openstack_networking_subnet_v2.subnet_management will be created 2026-02-08 02:32:17.448827 | orchestrator | + resource "openstack_networking_subnet_v2" "subnet_management" { 2026-02-08 02:32:17.448831 | orchestrator | + all_tags = (known after apply) 2026-02-08 02:32:17.448834 | orchestrator | + cidr = "192.168.16.0/20" 2026-02-08 02:32:17.448838 | orchestrator | + dns_nameservers = [ 2026-02-08 02:32:17.448842 | orchestrator | + "8.8.8.8", 2026-02-08 02:32:17.448846 | orchestrator | + "9.9.9.9", 2026-02-08 02:32:17.448850 | orchestrator | ] 2026-02-08 02:32:17.448854 | orchestrator | + enable_dhcp = true 2026-02-08 02:32:17.448858 | orchestrator | + gateway_ip = (known after apply) 2026-02-08 02:32:17.448861 | orchestrator | + id = (known after apply) 2026-02-08 02:32:17.448865 | orchestrator | + ip_version = 4 2026-02-08 02:32:17.448869 | orchestrator | + ipv6_address_mode = (known after apply) 2026-02-08 02:32:17.448873 | orchestrator | + ipv6_ra_mode = (known after apply) 2026-02-08 02:32:17.448877 | orchestrator | + name = "subnet-testbed-management" 
2026-02-08 02:32:17.448880 | orchestrator | + network_id = (known after apply)
2026-02-08 02:32:17.448884 | orchestrator | + no_gateway = false
2026-02-08 02:32:17.448888 | orchestrator | + region = (known after apply)
2026-02-08 02:32:17.448892 | orchestrator | + service_types = (known after apply)
2026-02-08 02:32:17.448913 | orchestrator | + tenant_id = (known after apply)
2026-02-08 02:32:17.448917 | orchestrator |
2026-02-08 02:32:17.448921 | orchestrator | + allocation_pool {
2026-02-08 02:32:17.448924 | orchestrator | + end = "192.168.31.250"
2026-02-08 02:32:17.448928 | orchestrator | + start = "192.168.31.200"
2026-02-08 02:32:17.448932 | orchestrator | }
2026-02-08 02:32:17.448936 | orchestrator | }
2026-02-08 02:32:17.448973 | orchestrator |
2026-02-08 02:32:17.448984 | orchestrator | # terraform_data.image will be created
2026-02-08 02:32:17.448989 | orchestrator | + resource "terraform_data" "image" {
2026-02-08 02:32:17.448992 | orchestrator | + id = (known after apply)
2026-02-08 02:32:17.448996 | orchestrator | + input = "Ubuntu 24.04"
2026-02-08 02:32:17.449000 | orchestrator | + output = (known after apply)
2026-02-08 02:32:17.449004 | orchestrator | }
2026-02-08 02:32:17.449037 | orchestrator |
2026-02-08 02:32:17.449047 | orchestrator | # terraform_data.image_node will be created
2026-02-08 02:32:17.449052 | orchestrator | + resource "terraform_data" "image_node" {
2026-02-08 02:32:17.449056 | orchestrator | + id = (known after apply)
2026-02-08 02:32:17.449059 | orchestrator | + input = "Ubuntu 24.04"
2026-02-08 02:32:17.449063 | orchestrator | + output = (known after apply)
2026-02-08 02:32:17.449067 | orchestrator | }
2026-02-08 02:32:17.449082 | orchestrator |
2026-02-08 02:32:17.449087 | orchestrator | Plan: 64 to add, 0 to change, 0 to destroy.
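For reference, the VRRP rule announced in the plan above corresponds to roughly the following Terraform configuration. This is a hypothetical reconstruction from the plan output, not the actual source of the osism/testbed repository; attribute values are taken verbatim from the plan, while the `security_group_id` reference to the "testbed-node" group is an assumption.

```hcl
# Hypothetical sketch reconstructed from the plan output; the real source may differ.
resource "openstack_networking_secgroup_rule_v2" "security_group_rule_vrrp" {
  description       = "vrrp"
  direction         = "ingress"
  ethertype         = "IPv4"
  protocol          = "112" # VRRP is IP protocol number 112
  remote_ip_prefix  = "0.0.0.0/0"
  # Assumed to attach to the "testbed-node" security group created in the same plan.
  security_group_id = openstack_networking_secgroup_v2.security_group_node.id
}
```

Attributes such as `id`, `region`, and `tenant_id` are computed by the provider, which is why the plan shows them as `(known after apply)`.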
2026-02-08 02:32:17.449098 | orchestrator |
2026-02-08 02:32:17.449102 | orchestrator | Changes to Outputs:
2026-02-08 02:32:17.449113 | orchestrator | + manager_address = (sensitive value)
2026-02-08 02:32:17.449117 | orchestrator | + private_key = (sensitive value)
2026-02-08 02:32:17.665339 | orchestrator | terraform_data.image: Creating...
2026-02-08 02:32:17.665411 | orchestrator | terraform_data.image: Creation complete after 0s [id=f267ebba-341d-f70d-1869-ade3aa837e26]
2026-02-08 02:32:17.665755 | orchestrator | terraform_data.image_node: Creating...
2026-02-08 02:32:17.666537 | orchestrator | terraform_data.image_node: Creation complete after 0s [id=23e6c063-ce20-563f-5690-04d3abbfe471]
2026-02-08 02:32:17.695343 | orchestrator | data.openstack_images_image_v2.image: Reading...
2026-02-08 02:32:17.695417 | orchestrator | data.openstack_images_image_v2.image_node: Reading...
2026-02-08 02:32:17.707119 | orchestrator | openstack_blockstorage_volume_v3.node_volume[1]: Creating...
2026-02-08 02:32:17.708728 | orchestrator | openstack_blockstorage_volume_v3.node_volume[3]: Creating...
2026-02-08 02:32:17.709757 | orchestrator | openstack_blockstorage_volume_v3.node_volume[6]: Creating...
2026-02-08 02:32:17.711043 | orchestrator | openstack_blockstorage_volume_v3.node_volume[5]: Creating...
2026-02-08 02:32:17.712058 | orchestrator | openstack_compute_keypair_v2.key: Creating...
2026-02-08 02:32:17.712762 | orchestrator | openstack_blockstorage_volume_v3.node_volume[0]: Creating...
2026-02-08 02:32:17.720072 | orchestrator | openstack_blockstorage_volume_v3.node_volume[4]: Creating...
2026-02-08 02:32:17.721092 | orchestrator | openstack_networking_network_v2.net_management: Creating...
2026-02-08 02:32:18.232194 | orchestrator | openstack_compute_keypair_v2.key: Creation complete after 0s [id=testbed]
2026-02-08 02:32:18.235863 | orchestrator | data.openstack_images_image_v2.image_node: Read complete after 0s [id=846820b2-039e-4b42-adad-daf72e0f8ea4]
2026-02-08 02:32:18.241009 | orchestrator | data.openstack_images_image_v2.image: Read complete after 0s [id=846820b2-039e-4b42-adad-daf72e0f8ea4]
2026-02-08 02:32:18.243384 | orchestrator | openstack_blockstorage_volume_v3.node_volume[8]: Creating...
2026-02-08 02:32:18.246948 | orchestrator | openstack_blockstorage_volume_v3.node_volume[2]: Creating...
2026-02-08 02:32:18.247048 | orchestrator | openstack_blockstorage_volume_v3.node_volume[7]: Creating...
2026-02-08 02:32:18.804666 | orchestrator | openstack_networking_network_v2.net_management: Creation complete after 1s [id=34b1053d-bc5a-4cc9-98ee-3541a83d3ef7]
2026-02-08 02:32:18.827107 | orchestrator | local_file.id_rsa_pub: Creating...
2026-02-08 02:32:18.831524 | orchestrator | local_file.id_rsa_pub: Creation complete after 0s [id=e9bd2dd6b1e95bfc3ecfb5cfc394b5679e0bb4b2]
2026-02-08 02:32:18.843485 | orchestrator | local_sensitive_file.id_rsa: Creating...
2026-02-08 02:32:18.851584 | orchestrator | local_sensitive_file.id_rsa: Creation complete after 0s [id=308f02210b6483db9b0a6317d22b4a09ac4873fd]
2026-02-08 02:32:18.858869 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[2]: Creating...
2026-02-08 02:32:21.314738 | orchestrator | openstack_blockstorage_volume_v3.node_volume[6]: Creation complete after 3s [id=f936cccd-0c4c-4cd7-b507-1bacbfb024c1]
2026-02-08 02:32:21.326596 | orchestrator | openstack_blockstorage_volume_v3.manager_base_volume[0]: Creating...
2026-02-08 02:32:21.342860 | orchestrator | openstack_blockstorage_volume_v3.node_volume[1]: Creation complete after 3s [id=33bf36ec-77e2-4563-8915-2d028f665133]
2026-02-08 02:32:21.348568 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[3]: Creating...
2026-02-08 02:32:21.359169 | orchestrator | openstack_blockstorage_volume_v3.node_volume[3]: Creation complete after 3s [id=f64e84f9-05a0-4abf-b38a-86e604a2541e]
2026-02-08 02:32:21.368735 | orchestrator | openstack_blockstorage_volume_v3.node_volume[0]: Creation complete after 3s [id=1b3b2ead-9b22-4b4d-a30d-f81b3b57c055]
2026-02-08 02:32:21.369900 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[1]: Creating...
2026-02-08 02:32:21.376998 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[0]: Creating...
2026-02-08 02:32:21.377303 | orchestrator | openstack_blockstorage_volume_v3.node_volume[5]: Creation complete after 3s [id=88e353e1-d5f5-455b-9174-972f0fde258a]
2026-02-08 02:32:21.384162 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[4]: Creating...
2026-02-08 02:32:21.390591 | orchestrator | openstack_blockstorage_volume_v3.node_volume[4]: Creation complete after 3s [id=2c937877-c8d8-449b-a5f6-0239aca924e2]
2026-02-08 02:32:21.395661 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[5]: Creating...
2026-02-08 02:32:21.481864 | orchestrator | openstack_blockstorage_volume_v3.node_volume[2]: Creation complete after 3s [id=380fccde-fc16-4afd-8581-e221e230c62f]
2026-02-08 02:32:21.489526 | orchestrator | openstack_networking_subnet_v2.subnet_management: Creating...
2026-02-08 02:32:21.493512 | orchestrator | openstack_blockstorage_volume_v3.node_volume[8]: Creation complete after 3s [id=fd096023-3e18-4205-a743-fc49c7d9ed02]
2026-02-08 02:32:21.503741 | orchestrator | openstack_blockstorage_volume_v3.node_volume[7]: Creation complete after 4s [id=e630d271-3aac-4ce5-a41f-fdcd87f60fea]
2026-02-08 02:32:22.194229 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[2]: Creation complete after 3s [id=7f0c6f27-797f-46da-82fd-067f06c1f72b]
2026-02-08 02:32:22.419780 | orchestrator | openstack_networking_subnet_v2.subnet_management: Creation complete after 1s [id=c5036271-0413-47bd-9374-fd2039fbb781]
2026-02-08 02:32:22.431215 | orchestrator | openstack_networking_router_v2.router: Creating...
2026-02-08 02:32:24.724249 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[3]: Creation complete after 4s [id=8eb95c7e-79eb-481c-a9c3-b8351915337f]
2026-02-08 02:32:24.742833 | orchestrator | openstack_blockstorage_volume_v3.manager_base_volume[0]: Creation complete after 4s [id=8e0ebcee-3d0a-448e-8b07-4380ef670051]
2026-02-08 02:32:24.765347 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[4]: Creation complete after 4s [id=6152c601-f22c-4ab1-825c-0b7a8c2f9bf8]
2026-02-08 02:32:24.777794 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[1]: Creation complete after 4s [id=bd3944a6-94a2-4419-9995-37d6054a2669]
2026-02-08 02:32:24.802562 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[5]: Creation complete after 4s [id=0b6d2541-fe07-44e8-aadf-a529695f9c1d]
2026-02-08 02:32:24.830383 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[0]: Creation complete after 4s [id=3e566a5b-bb0c-4ece-9641-6f7efc673353]
2026-02-08 02:32:25.556869 | orchestrator | openstack_networking_router_v2.router: Creation complete after 4s [id=8620ec56-5074-4cc4-9200-7e2c54e67021]
2026-02-08 02:32:25.564990 | orchestrator | openstack_networking_router_interface_v2.router_interface: Creating...
2026-02-08 02:32:25.565103 | orchestrator | openstack_networking_secgroup_v2.security_group_management: Creating...
2026-02-08 02:32:25.565118 | orchestrator | openstack_networking_secgroup_v2.security_group_node: Creating...
2026-02-08 02:32:25.755155 | orchestrator | openstack_networking_secgroup_v2.security_group_management: Creation complete after 0s [id=9e252074-51db-46f6-b453-92897edf29fb]
2026-02-08 02:32:25.764470 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creating...
2026-02-08 02:32:25.766136 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creating...
2026-02-08 02:32:25.767048 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creating...
2026-02-08 02:32:25.768404 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creating...
2026-02-08 02:32:25.773186 | orchestrator | openstack_networking_port_v2.manager_port_management: Creating...
2026-02-08 02:32:25.775672 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creating...
2026-02-08 02:32:25.790702 | orchestrator | openstack_networking_secgroup_v2.security_group_node: Creation complete after 0s [id=90fc52fb-d674-4119-b185-441ec823c2a6]
2026-02-08 02:32:25.799792 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creating...
2026-02-08 02:32:25.799963 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creating...
2026-02-08 02:32:25.802166 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creating...
2026-02-08 02:32:26.013140 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creation complete after 0s [id=c60985de-07d3-4e16-bd53-606f5cf9d584]
2026-02-08 02:32:26.019561 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creating...
2026-02-08 02:32:26.141750 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creation complete after 0s [id=a3cec9d5-ec20-49a3-9fe3-4deec76eceaf]
2026-02-08 02:32:26.153904 | orchestrator | openstack_networking_port_v2.node_port_management[1]: Creating...
2026-02-08 02:32:26.557407 | orchestrator | openstack_networking_port_v2.manager_port_management: Creation complete after 1s [id=0d326b52-410c-4dad-b471-73f32f7ff302]
2026-02-08 02:32:26.559173 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creation complete after 1s [id=42476288-deff-4b04-8a7b-1ff0ba029ab1]
2026-02-08 02:32:26.567729 | orchestrator | openstack_networking_port_v2.node_port_management[3]: Creating...
2026-02-08 02:32:26.568805 | orchestrator | openstack_networking_port_v2.node_port_management[0]: Creating...
2026-02-08 02:32:26.709940 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creation complete after 1s [id=182fba1b-2868-4265-ab04-ecb6dd1ca874]
2026-02-08 02:32:26.725301 | orchestrator | openstack_networking_port_v2.node_port_management[2]: Creating...
2026-02-08 02:32:26.762882 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creation complete after 1s [id=3ec32f88-e3d6-4fea-9408-c59e802c8eac]
2026-02-08 02:32:26.781354 | orchestrator | openstack_networking_port_v2.node_port_management[5]: Creating...
2026-02-08 02:32:26.947126 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creation complete after 1s [id=23c587c6-9264-47f7-aa4b-2947871ce1ef]
2026-02-08 02:32:26.955842 | orchestrator | openstack_networking_port_v2.node_port_management[4]: Creating...
2026-02-08 02:32:27.138901 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creation complete after 1s [id=eb52e108-49ae-41f1-a2ab-619a36aaaca8]
2026-02-08 02:32:27.190212 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creation complete after 1s [id=acb48d26-1c91-4388-8286-0c157c11d4f5]
2026-02-08 02:32:27.453781 | orchestrator | openstack_networking_port_v2.node_port_management[3]: Creation complete after 0s [id=896138bb-184b-4f8e-8dd7-d6b2e33b1676]
2026-02-08 02:32:27.527810 | orchestrator | openstack_networking_port_v2.node_port_management[0]: Creation complete after 1s [id=78377196-be53-4ed6-8ce0-e05fe3d0ef20]
2026-02-08 02:32:27.530780 | orchestrator | openstack_networking_port_v2.node_port_management[2]: Creation complete after 1s [id=7b322e72-63b6-4d42-98cf-f26637b8bc02]
2026-02-08 02:32:27.549730 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creation complete after 2s [id=b942f283-b8c7-4cbc-8a2d-57b6228d0e91]
2026-02-08 02:32:27.710761 | orchestrator | openstack_networking_port_v2.node_port_management[1]: Creation complete after 2s [id=7584fcd7-1d04-4a73-ac6b-3021b0f4aa61]
2026-02-08 02:32:27.729115 | orchestrator | openstack_networking_port_v2.node_port_management[4]: Creation complete after 1s [id=c089bfed-18e5-4e2e-84c6-fada33c1c978]
2026-02-08 02:32:27.794951 | orchestrator | openstack_networking_port_v2.node_port_management[5]: Creation complete after 1s [id=b84af7e9-2215-446f-8321-e319eed38701]
2026-02-08 02:32:28.345400 | orchestrator | openstack_networking_router_interface_v2.router_interface: Creation complete after 2s [id=c25770bd-7ffc-44f7-9a0f-1b7a4a0ebdfc]
2026-02-08 02:32:28.368184 | orchestrator | openstack_networking_floatingip_v2.manager_floating_ip: Creating...
2026-02-08 02:32:28.385675 | orchestrator | openstack_compute_instance_v2.node_server[1]: Creating...
2026-02-08 02:32:28.385790 | orchestrator | openstack_compute_instance_v2.node_server[2]: Creating...
2026-02-08 02:32:28.401311 | orchestrator | openstack_compute_instance_v2.node_server[3]: Creating...
2026-02-08 02:32:28.407532 | orchestrator | openstack_compute_instance_v2.node_server[4]: Creating...
2026-02-08 02:32:28.410463 | orchestrator | openstack_compute_instance_v2.node_server[5]: Creating...
2026-02-08 02:32:28.418083 | orchestrator | openstack_compute_instance_v2.node_server[0]: Creating...
2026-02-08 02:32:29.896720 | orchestrator | openstack_networking_floatingip_v2.manager_floating_ip: Creation complete after 2s [id=4d5bd910-a6c1-460c-8583-414fd3754774]
2026-02-08 02:32:29.901786 | orchestrator | openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creating...
2026-02-08 02:32:29.907785 | orchestrator | local_file.MANAGER_ADDRESS: Creating...
2026-02-08 02:32:29.912201 | orchestrator | local_file.inventory: Creating...
2026-02-08 02:32:29.916090 | orchestrator | local_file.MANAGER_ADDRESS: Creation complete after 0s [id=6e93dd4857f5b662d5a724ad408314451b4437ee]
2026-02-08 02:32:29.920417 | orchestrator | local_file.inventory: Creation complete after 0s [id=9f3b50392dde08d16d325a8c36907bb99fd3febe]
2026-02-08 02:32:30.615128 | orchestrator | openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creation complete after 1s [id=4d5bd910-a6c1-460c-8583-414fd3754774]
2026-02-08 02:32:38.388374 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [10s elapsed]
2026-02-08 02:32:38.388490 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [10s elapsed]
2026-02-08 02:32:38.414891 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [10s elapsed]
2026-02-08 02:32:38.418372 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [10s elapsed]
2026-02-08 02:32:38.424413 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [10s elapsed]
2026-02-08 02:32:38.425575 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [10s elapsed]
2026-02-08 02:32:48.388429 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [20s elapsed]
2026-02-08 02:32:48.388554 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [20s elapsed]
2026-02-08 02:32:48.415792 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [20s elapsed]
2026-02-08 02:32:48.419089 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [20s elapsed]
2026-02-08 02:32:48.425418 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [20s elapsed]
2026-02-08 02:32:48.426538 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [20s elapsed]
2026-02-08 02:32:48.756763 | orchestrator | openstack_compute_instance_v2.node_server[5]: Creation complete after 21s [id=6d656909-77ed-4491-80fc-65964dab3473]
2026-02-08 02:32:48.904325 | orchestrator | openstack_compute_instance_v2.node_server[3]: Creation complete after 21s [id=20ad265d-d4c4-4bc8-aa06-f36c3b8a1667]
2026-02-08 02:32:49.411691 | orchestrator | openstack_compute_instance_v2.node_server[4]: Creation complete after 21s [id=2a590ba0-b67a-4d30-8bf2-93e9870e42fa]
2026-02-08 02:32:49.420155 | orchestrator | openstack_compute_instance_v2.node_server[1]: Creation complete after 21s [id=102566c1-523b-4142-8f9c-8aa639536f05]
2026-02-08 02:32:58.397594 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [30s elapsed]
2026-02-08 02:32:58.420016 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [30s elapsed]
2026-02-08 02:32:58.921457 | orchestrator | openstack_compute_instance_v2.node_server[0]: Creation complete after 31s [id=a14384d9-4e51-4896-a9f4-ae69a35cdfae]
2026-02-08 02:32:59.502582 | orchestrator | openstack_compute_instance_v2.node_server[2]: Creation complete after 32s [id=f5ba6362-3ff1-4ee1-abcc-2335674fdd2f]
2026-02-08 02:32:59.532366 | orchestrator | null_resource.node_semaphore: Creating...
2026-02-08 02:32:59.538118 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creating...
2026-02-08 02:32:59.549461 | orchestrator | null_resource.node_semaphore: Creation complete after 0s [id=8342016123721477574]
2026-02-08 02:32:59.552467 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creating...
2026-02-08 02:32:59.552851 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creating...
2026-02-08 02:32:59.553729 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creating...
2026-02-08 02:32:59.563439 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creating...
2026-02-08 02:32:59.573106 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creating...
2026-02-08 02:32:59.580484 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creating...
2026-02-08 02:32:59.585336 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creating...
2026-02-08 02:32:59.585376 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creating...
2026-02-08 02:32:59.611770 | orchestrator | openstack_compute_instance_v2.manager_server: Creating...
2026-02-08 02:33:02.935520 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creation complete after 3s [id=6d656909-77ed-4491-80fc-65964dab3473/380fccde-fc16-4afd-8581-e221e230c62f]
2026-02-08 02:33:02.940887 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creation complete after 3s [id=2a590ba0-b67a-4d30-8bf2-93e9870e42fa/33bf36ec-77e2-4563-8915-2d028f665133]
2026-02-08 02:33:02.966269 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creation complete after 3s [id=20ad265d-d4c4-4bc8-aa06-f36c3b8a1667/1b3b2ead-9b22-4b4d-a30d-f81b3b57c055]
2026-02-08 02:33:02.994820 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creation complete after 3s [id=20ad265d-d4c4-4bc8-aa06-f36c3b8a1667/f64e84f9-05a0-4abf-b38a-86e604a2541e]
2026-02-08 02:33:02.997955 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creation complete after 3s [id=2a590ba0-b67a-4d30-8bf2-93e9870e42fa/e630d271-3aac-4ce5-a41f-fdcd87f60fea]
2026-02-08 02:33:03.030362 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creation complete after 3s [id=6d656909-77ed-4491-80fc-65964dab3473/88e353e1-d5f5-455b-9174-972f0fde258a]
2026-02-08 02:33:09.092430 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creation complete after 9s [id=6d656909-77ed-4491-80fc-65964dab3473/fd096023-3e18-4205-a743-fc49c7d9ed02]
2026-02-08 02:33:09.097931 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creation complete after 9s [id=2a590ba0-b67a-4d30-8bf2-93e9870e42fa/2c937877-c8d8-449b-a5f6-0239aca924e2]
2026-02-08 02:33:09.119741 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creation complete after 9s [id=20ad265d-d4c4-4bc8-aa06-f36c3b8a1667/f936cccd-0c4c-4cd7-b507-1bacbfb024c1]
2026-02-08 02:33:09.613623 | orchestrator | openstack_compute_instance_v2.manager_server: Still creating... [10s elapsed]
2026-02-08 02:33:19.613987 | orchestrator | openstack_compute_instance_v2.manager_server: Still creating... [20s elapsed]
2026-02-08 02:33:20.144769 | orchestrator | openstack_compute_instance_v2.manager_server: Creation complete after 20s [id=d2dc029f-b8b5-4c89-9978-029a58074db3]
2026-02-08 02:33:20.162430 | orchestrator |
2026-02-08 02:33:20.162502 | orchestrator | Apply complete! Resources: 64 added, 0 changed, 0 destroyed.
2026-02-08 02:33:20.162560 | orchestrator |
2026-02-08 02:33:20.162580 | orchestrator | Outputs:
2026-02-08 02:33:20.162595 | orchestrator |
2026-02-08 02:33:20.162645 | orchestrator | manager_address =
2026-02-08 02:33:20.162663 | orchestrator | private_key =
2026-02-08 02:33:20.439441 | orchestrator | ok: Runtime: 0:01:08.865360
2026-02-08 02:33:20.477155 |
2026-02-08 02:33:20.477328 | TASK [Fetch manager address]
2026-02-08 02:33:20.932097 | orchestrator | ok
2026-02-08 02:33:20.943290 |
2026-02-08 02:33:20.943434 | TASK [Set manager_host address]
2026-02-08 02:33:21.022679 | orchestrator | ok
2026-02-08 02:33:21.031468 |
2026-02-08 02:33:21.031652 | LOOP [Update ansible collections]
2026-02-08 02:33:22.495929 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2
2026-02-08 02:33:22.496330 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2
2026-02-08 02:33:22.496400 | orchestrator | Starting galaxy collection install process
2026-02-08 02:33:22.496452 | orchestrator | Process install dependency map
2026-02-08 02:33:22.496525 | orchestrator | Starting collection install process
2026-02-08 02:33:22.496569 | orchestrator | Installing 'osism.commons:999.0.0' to '/home/zuul-testbed03/.ansible/collections/ansible_collections/osism/commons'
2026-02-08 02:33:22.496618 | orchestrator | Created collection for osism.commons:999.0.0 at /home/zuul-testbed03/.ansible/collections/ansible_collections/osism/commons
2026-02-08 02:33:22.496669 | orchestrator | osism.commons:999.0.0 was installed successfully
2026-02-08 02:33:22.496769 | orchestrator | ok: Item: commons Runtime: 0:00:01.133190
2026-02-08 02:33:23.392900 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2
2026-02-08 02:33:23.393074 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2
2026-02-08 02:33:23.393127 | orchestrator | Starting galaxy collection install process
2026-02-08 02:33:23.393169 | orchestrator | Process install dependency map
2026-02-08 02:33:23.393207 | orchestrator | Starting collection install process
2026-02-08 02:33:23.393244 | orchestrator | Installing 'osism.services:999.0.0' to '/home/zuul-testbed03/.ansible/collections/ansible_collections/osism/services'
2026-02-08 02:33:23.393279 | orchestrator | Created collection for osism.services:999.0.0 at /home/zuul-testbed03/.ansible/collections/ansible_collections/osism/services
2026-02-08 02:33:23.393313 | orchestrator | osism.services:999.0.0 was installed successfully
2026-02-08 02:33:23.393365 | orchestrator | ok: Item: services Runtime: 0:00:00.625134
2026-02-08 02:33:23.406613 |
2026-02-08 02:33:23.406729 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"]
2026-02-08 02:33:33.868008 | orchestrator | ok
2026-02-08 02:33:33.877167 |
2026-02-08 02:33:33.877276 | TASK [Wait a little longer for the manager so that everything is ready]
2026-02-08 02:34:33.927182 | orchestrator | ok
2026-02-08 02:34:33.937109 |
2026-02-08 02:34:33.937223 | TASK [Fetch manager ssh hostkey]
2026-02-08 02:34:35.509073 | orchestrator | Output suppressed because no_log was given
2026-02-08 02:34:35.524558 |
2026-02-08 02:34:35.524738 | TASK [Get ssh keypair from terraform environment]
2026-02-08 02:34:36.060799 | orchestrator | ok: Runtime: 0:00:00.008929
2026-02-08 02:34:36.078474 |
2026-02-08 02:34:36.078624 | TASK [Point out that the following task takes some time and does not give any output]
2026-02-08 02:34:36.125308 | orchestrator | ok: The task 'Run manager part 0' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete.
2026-02-08 02:34:36.135052 |
2026-02-08 02:34:36.135175 | TASK [Run manager part 0]
2026-02-08 02:34:37.039670 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2
2026-02-08 02:34:37.088364 | orchestrator |
2026-02-08 02:34:37.088409 | orchestrator | PLAY [Wait for cloud-init to finish] *******************************************
2026-02-08 02:34:37.088416 | orchestrator |
2026-02-08 02:34:37.088430 | orchestrator | TASK [Check /var/lib/cloud/instance/boot-finished] *****************************
2026-02-08 02:34:38.866906 | orchestrator | ok: [testbed-manager]
2026-02-08 02:34:38.866968 | orchestrator |
2026-02-08 02:34:38.866994 | orchestrator | PLAY [Run manager part 0] ******************************************************
2026-02-08 02:34:38.867004 | orchestrator |
2026-02-08 02:34:38.867014 | orchestrator | TASK [Gathering Facts] *********************************************************
2026-02-08 02:34:40.747060 | orchestrator | ok: [testbed-manager]
2026-02-08 02:34:40.747102 | orchestrator |
2026-02-08 02:34:40.747110 | orchestrator | TASK [Get home directory of ansible user] **************************************
2026-02-08 02:34:41.432838 | orchestrator | ok: [testbed-manager]
2026-02-08 02:34:41.432930 | orchestrator |
2026-02-08 02:34:41.432948 | orchestrator | TASK [Set repo_path fact] ******************************************************
2026-02-08 02:34:41.496557 | orchestrator | skipping: [testbed-manager]
2026-02-08 02:34:41.496617 | orchestrator |
2026-02-08 02:34:41.496627 | orchestrator | TASK [Update package cache] ****************************************************
2026-02-08 02:34:41.528369 | orchestrator | skipping: [testbed-manager]
2026-02-08 02:34:41.528415 | orchestrator |
2026-02-08 02:34:41.528422 | orchestrator | TASK [Install required packages] ***********************************************
2026-02-08 02:34:41.554432 | orchestrator | skipping: [testbed-manager]
2026-02-08 02:34:41.554482 | orchestrator |
2026-02-08 02:34:41.554488 | orchestrator | TASK [Remove some python packages] *********************************************
2026-02-08 02:34:41.577175 | orchestrator | skipping: [testbed-manager]
2026-02-08 02:34:41.577216 | orchestrator |
2026-02-08 02:34:41.577221 | orchestrator | TASK [Set venv_command fact (RedHat)] ******************************************
2026-02-08 02:34:41.601104 | orchestrator | skipping: [testbed-manager]
2026-02-08 02:34:41.601153 | orchestrator |
2026-02-08 02:34:41.601160 | orchestrator | TASK [Fail if Ubuntu version is lower than 24.04] ******************************
2026-02-08 02:34:41.641497 | orchestrator | skipping: [testbed-manager]
2026-02-08 02:34:41.641552 | orchestrator |
2026-02-08 02:34:41.641560 | orchestrator | TASK [Fail if Debian version is lower than 12] *********************************
2026-02-08 02:34:41.674675 | orchestrator | skipping: [testbed-manager]
2026-02-08 02:34:41.674724 | orchestrator |
2026-02-08 02:34:41.674731 | orchestrator | TASK [Set APT options on manager] **********************************************
2026-02-08 02:34:42.407544 | orchestrator | changed: [testbed-manager]
2026-02-08 02:34:42.407604 | orchestrator |
2026-02-08 02:34:42.407611 | orchestrator | TASK [Update APT cache and run dist-upgrade] ***********************************
2026-02-08 02:37:20.572711 | orchestrator | changed: [testbed-manager]
2026-02-08 02:37:20.572786 | orchestrator |
2026-02-08 02:37:20.572805 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************
2026-02-08 02:39:02.672824 | orchestrator | changed: [testbed-manager]
2026-02-08 02:39:02.672925 | orchestrator |
2026-02-08 02:39:02.672945 | orchestrator | TASK [Install required packages] ***********************************************
2026-02-08 02:39:33.046204 | orchestrator | changed: [testbed-manager]
2026-02-08 02:39:33.046393 | orchestrator |
2026-02-08 02:39:33.046430 | orchestrator | TASK [Remove some python packages] *********************************************
2026-02-08 02:39:43.060092 | orchestrator | changed: [testbed-manager]
2026-02-08 02:39:43.060210 | orchestrator |
2026-02-08 02:39:43.060237 | orchestrator | TASK [Set venv_command fact (Debian)] ******************************************
2026-02-08 02:39:43.112757 | orchestrator | ok: [testbed-manager]
2026-02-08 02:39:43.112842 | orchestrator |
2026-02-08 02:39:43.112857 | orchestrator | TASK [Get current user] ********************************************************
2026-02-08 02:39:43.963723 | orchestrator | ok: [testbed-manager]
2026-02-08 02:39:43.963956 | orchestrator |
2026-02-08 02:39:43.963987 | orchestrator | TASK [Create venv directory] ***************************************************
2026-02-08 02:39:44.736595 | orchestrator | changed: [testbed-manager]
2026-02-08 02:39:44.736688 | orchestrator |
2026-02-08 02:39:44.736706 | orchestrator | TASK [Install netaddr in venv] *************************************************
2026-02-08 02:39:51.038956 | orchestrator | changed: [testbed-manager]
2026-02-08 02:39:51.039047 | orchestrator |
2026-02-08 02:39:51.039085 | orchestrator | TASK [Install ansible-core in venv] ********************************************
2026-02-08 02:39:57.132640 | orchestrator | changed: [testbed-manager]
2026-02-08 02:39:57.132724 | orchestrator |
2026-02-08 02:39:57.132740 | orchestrator | TASK [Install requests >= 2.32.2] **********************************************
2026-02-08 02:39:59.902230 | orchestrator | changed: [testbed-manager]
2026-02-08 02:39:59.902299 | orchestrator |
2026-02-08 02:39:59.902339 | orchestrator | TASK [Install docker >= 7.1.0] *************************************************
2026-02-08 02:40:01.790009 |
orchestrator | changed: [testbed-manager] 2026-02-08 02:40:01.790161 | orchestrator | 2026-02-08 02:40:01.790179 | orchestrator | TASK [Create directories in /opt/src] ****************************************** 2026-02-08 02:40:02.868289 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2026-02-08 02:40:02.868421 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2026-02-08 02:40:02.868440 | orchestrator | 2026-02-08 02:40:02.868454 | orchestrator | TASK [Sync sources in /opt/src] ************************************************ 2026-02-08 02:40:02.913594 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2026-02-08 02:40:02.913674 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2026-02-08 02:40:02.913689 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2026-02-08 02:40:02.913702 | orchestrator | deprecation_warnings=False in ansible.cfg. 
2026-02-08 02:40:09.439038 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2026-02-08 02:40:09.439142 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2026-02-08 02:40:09.439158 | orchestrator | 2026-02-08 02:40:09.439171 | orchestrator | TASK [Create /usr/share/ansible directory] ************************************* 2026-02-08 02:40:10.026850 | orchestrator | changed: [testbed-manager] 2026-02-08 02:40:10.026964 | orchestrator | 2026-02-08 02:40:10.026983 | orchestrator | TASK [Install collections from Ansible galaxy] ********************************* 2026-02-08 02:43:29.316579 | orchestrator | changed: [testbed-manager] => (item=ansible.netcommon) 2026-02-08 02:43:29.316636 | orchestrator | changed: [testbed-manager] => (item=ansible.posix) 2026-02-08 02:43:29.316646 | orchestrator | changed: [testbed-manager] => (item=community.docker>=3.10.2) 2026-02-08 02:43:29.316653 | orchestrator | 2026-02-08 02:43:29.316659 | orchestrator | TASK [Install local collections] *********************************************** 2026-02-08 02:43:31.877096 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-commons) 2026-02-08 02:43:31.877180 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-services) 2026-02-08 02:43:31.877197 | orchestrator | 2026-02-08 02:43:31.877210 | orchestrator | PLAY [Create operator user] **************************************************** 2026-02-08 02:43:31.877223 | orchestrator | 2026-02-08 02:43:31.877237 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-02-08 02:43:33.325896 | orchestrator | ok: [testbed-manager] 2026-02-08 02:43:33.326006 | orchestrator | 2026-02-08 02:43:33.326091 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] ***** 2026-02-08 02:43:33.382589 | orchestrator | ok: [testbed-manager] 2026-02-08 02:43:33.382821 | 
orchestrator | 2026-02-08 02:43:33.382840 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] *** 2026-02-08 02:43:33.456280 | orchestrator | ok: [testbed-manager] 2026-02-08 02:43:33.456427 | orchestrator | 2026-02-08 02:43:33.456455 | orchestrator | TASK [osism.commons.operator : Create operator group] ************************** 2026-02-08 02:43:34.296914 | orchestrator | changed: [testbed-manager] 2026-02-08 02:43:34.297034 | orchestrator | 2026-02-08 02:43:34.297065 | orchestrator | TASK [osism.commons.operator : Create user] ************************************ 2026-02-08 02:43:35.064007 | orchestrator | changed: [testbed-manager] 2026-02-08 02:43:35.064049 | orchestrator | 2026-02-08 02:43:35.064059 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ****************** 2026-02-08 02:43:36.477165 | orchestrator | changed: [testbed-manager] => (item=adm) 2026-02-08 02:43:36.477220 | orchestrator | changed: [testbed-manager] => (item=sudo) 2026-02-08 02:43:36.477232 | orchestrator | 2026-02-08 02:43:36.477254 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] ************************* 2026-02-08 02:43:37.885100 | orchestrator | changed: [testbed-manager] 2026-02-08 02:43:37.885159 | orchestrator | 2026-02-08 02:43:37.885169 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] *** 2026-02-08 02:43:39.749950 | orchestrator | changed: [testbed-manager] => (item=export LANGUAGE=C.UTF-8) 2026-02-08 02:43:39.750079 | orchestrator | changed: [testbed-manager] => (item=export LANG=C.UTF-8) 2026-02-08 02:43:39.750098 | orchestrator | changed: [testbed-manager] => (item=export LC_ALL=C.UTF-8) 2026-02-08 02:43:39.750110 | orchestrator | 2026-02-08 02:43:39.750123 | orchestrator | TASK [osism.commons.operator : Set custom environment variables in .bashrc configuration file] *** 2026-02-08 02:43:39.811362 | orchestrator | skipping: 
[testbed-manager] 2026-02-08 02:43:39.811448 | orchestrator | 2026-02-08 02:43:39.811462 | orchestrator | TASK [osism.commons.operator : Set custom PS1 prompt in .bashrc configuration file] *** 2026-02-08 02:43:39.885171 | orchestrator | skipping: [testbed-manager] 2026-02-08 02:43:39.885280 | orchestrator | 2026-02-08 02:43:39.885301 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] ************************** 2026-02-08 02:43:40.416467 | orchestrator | changed: [testbed-manager] 2026-02-08 02:43:40.416504 | orchestrator | 2026-02-08 02:43:40.416513 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************ 2026-02-08 02:43:40.500193 | orchestrator | skipping: [testbed-manager] 2026-02-08 02:43:40.500226 | orchestrator | 2026-02-08 02:43:40.500233 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************ 2026-02-08 02:43:41.266821 | orchestrator | changed: [testbed-manager] => (item=None) 2026-02-08 02:43:41.266869 | orchestrator | changed: [testbed-manager] 2026-02-08 02:43:41.266883 | orchestrator | 2026-02-08 02:43:41.266894 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] ********************* 2026-02-08 02:43:41.304455 | orchestrator | skipping: [testbed-manager] 2026-02-08 02:43:41.304500 | orchestrator | 2026-02-08 02:43:41.304513 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] ***************** 2026-02-08 02:43:41.341640 | orchestrator | skipping: [testbed-manager] 2026-02-08 02:43:41.341677 | orchestrator | 2026-02-08 02:43:41.341717 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] ************** 2026-02-08 02:43:41.376939 | orchestrator | skipping: [testbed-manager] 2026-02-08 02:43:41.376970 | orchestrator | 2026-02-08 02:43:41.376979 | orchestrator | TASK [osism.commons.operator : Set password] *********************************** 2026-02-08 02:43:41.448962 | 
orchestrator | skipping: [testbed-manager] 2026-02-08 02:43:41.448995 | orchestrator | 2026-02-08 02:43:41.449004 | orchestrator | TASK [osism.commons.operator : Unset & lock password] ************************** 2026-02-08 02:43:42.147701 | orchestrator | ok: [testbed-manager] 2026-02-08 02:43:42.147733 | orchestrator | 2026-02-08 02:43:42.147741 | orchestrator | PLAY [Run manager part 0] ****************************************************** 2026-02-08 02:43:42.147748 | orchestrator | 2026-02-08 02:43:42.147755 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-02-08 02:43:43.460767 | orchestrator | ok: [testbed-manager] 2026-02-08 02:43:43.460796 | orchestrator | 2026-02-08 02:43:43.460802 | orchestrator | TASK [Recursively change ownership of /opt/venv] ******************************* 2026-02-08 02:43:44.378483 | orchestrator | changed: [testbed-manager] 2026-02-08 02:43:44.378525 | orchestrator | 2026-02-08 02:43:44.378532 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-08 02:43:44.378539 | orchestrator | testbed-manager : ok=33 changed=23 unreachable=0 failed=0 skipped=14 rescued=0 ignored=0 2026-02-08 02:43:44.378544 | orchestrator | 2026-02-08 02:43:44.555848 | orchestrator | ok: Runtime: 0:09:08.055048 2026-02-08 02:43:44.567410 | 2026-02-08 02:43:44.567525 | TASK [Point out that the log in on the manager is now possible] 2026-02-08 02:43:44.600396 | orchestrator | ok: It is now already possible to log in to the manager with 'make login'. 2026-02-08 02:43:44.607866 | 2026-02-08 02:43:44.607993 | TASK [Point out that the following task takes some time and does not give any output] 2026-02-08 02:43:44.638142 | orchestrator | ok: The task 'Run manager part 1 + 2' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete. 
2026-02-08 02:43:44.644917 | 2026-02-08 02:43:44.645022 | TASK [Run manager part 1 + 2] 2026-02-08 02:43:45.552208 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2026-02-08 02:43:45.608490 | orchestrator | 2026-02-08 02:43:45.608543 | orchestrator | PLAY [Run manager part 1] ****************************************************** 2026-02-08 02:43:45.608551 | orchestrator | 2026-02-08 02:43:45.608563 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-02-08 02:43:48.443365 | orchestrator | ok: [testbed-manager] 2026-02-08 02:43:48.443419 | orchestrator | 2026-02-08 02:43:48.443440 | orchestrator | TASK [Set venv_command fact (RedHat)] ****************************************** 2026-02-08 02:43:48.480806 | orchestrator | skipping: [testbed-manager] 2026-02-08 02:43:48.480867 | orchestrator | 2026-02-08 02:43:48.480879 | orchestrator | TASK [Set venv_command fact (Debian)] ****************************************** 2026-02-08 02:43:48.518570 | orchestrator | ok: [testbed-manager] 2026-02-08 02:43:48.518624 | orchestrator | 2026-02-08 02:43:48.518632 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2026-02-08 02:43:48.568700 | orchestrator | ok: [testbed-manager] 2026-02-08 02:43:48.568752 | orchestrator | 2026-02-08 02:43:48.568761 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2026-02-08 02:43:48.651106 | orchestrator | ok: [testbed-manager] 2026-02-08 02:43:48.651177 | orchestrator | 2026-02-08 02:43:48.651195 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2026-02-08 02:43:48.725831 | orchestrator | ok: [testbed-manager] 2026-02-08 02:43:48.725883 | orchestrator | 2026-02-08 02:43:48.725893 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2026-02-08 02:43:48.778161 | 
orchestrator | included: /home/zuul-testbed03/.ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager 2026-02-08 02:43:48.778211 | orchestrator | 2026-02-08 02:43:48.778217 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2026-02-08 02:43:49.510884 | orchestrator | ok: [testbed-manager] 2026-02-08 02:43:49.510968 | orchestrator | 2026-02-08 02:43:49.510976 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2026-02-08 02:43:49.565052 | orchestrator | skipping: [testbed-manager] 2026-02-08 02:43:49.565115 | orchestrator | 2026-02-08 02:43:49.565128 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2026-02-08 02:43:50.999561 | orchestrator | changed: [testbed-manager] 2026-02-08 02:43:50.999636 | orchestrator | 2026-02-08 02:43:50.999647 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2026-02-08 02:43:51.574754 | orchestrator | ok: [testbed-manager] 2026-02-08 02:43:51.574818 | orchestrator | 2026-02-08 02:43:51.574827 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2026-02-08 02:43:52.777037 | orchestrator | changed: [testbed-manager] 2026-02-08 02:43:52.777099 | orchestrator | 2026-02-08 02:43:52.777111 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2026-02-08 02:44:08.009915 | orchestrator | changed: [testbed-manager] 2026-02-08 02:44:08.009994 | orchestrator | 2026-02-08 02:44:08.010079 | orchestrator | TASK [Get home directory of ansible user] ************************************** 2026-02-08 02:44:08.647957 | orchestrator | ok: [testbed-manager] 2026-02-08 02:44:08.648066 | orchestrator | 2026-02-08 02:44:08.648091 | orchestrator | TASK [Set repo_path fact] ****************************************************** 
2026-02-08 02:44:08.679572 | orchestrator | skipping: [testbed-manager] 2026-02-08 02:44:08.679600 | orchestrator | 2026-02-08 02:44:08.679605 | orchestrator | TASK [Copy SSH public key] ***************************************************** 2026-02-08 02:44:09.580166 | orchestrator | changed: [testbed-manager] 2026-02-08 02:44:09.580201 | orchestrator | 2026-02-08 02:44:09.580209 | orchestrator | TASK [Copy SSH private key] **************************************************** 2026-02-08 02:44:10.501740 | orchestrator | changed: [testbed-manager] 2026-02-08 02:44:10.501822 | orchestrator | 2026-02-08 02:44:10.501835 | orchestrator | TASK [Create configuration directory] ****************************************** 2026-02-08 02:44:11.131648 | orchestrator | changed: [testbed-manager] 2026-02-08 02:44:11.131690 | orchestrator | 2026-02-08 02:44:11.131697 | orchestrator | TASK [Copy testbed repo] ******************************************************* 2026-02-08 02:44:11.171104 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2026-02-08 02:44:11.171218 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2026-02-08 02:44:11.171233 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2026-02-08 02:44:11.171245 | orchestrator | deprecation_warnings=False in ansible.cfg. 
2026-02-08 02:44:12.969009 | orchestrator | changed: [testbed-manager] 2026-02-08 02:44:12.969095 | orchestrator | 2026-02-08 02:44:12.969113 | orchestrator | TASK [Install python requirements in venv] ************************************* 2026-02-08 02:44:21.458257 | orchestrator | ok: [testbed-manager] => (item=Jinja2) 2026-02-08 02:44:21.458294 | orchestrator | ok: [testbed-manager] => (item=PyYAML) 2026-02-08 02:44:21.458303 | orchestrator | ok: [testbed-manager] => (item=packaging) 2026-02-08 02:44:21.458331 | orchestrator | changed: [testbed-manager] => (item=python-gilt==1.2.3) 2026-02-08 02:44:21.458341 | orchestrator | ok: [testbed-manager] => (item=requests>=2.32.2) 2026-02-08 02:44:21.458348 | orchestrator | ok: [testbed-manager] => (item=docker>=7.1.0) 2026-02-08 02:44:21.458354 | orchestrator | 2026-02-08 02:44:21.458361 | orchestrator | TASK [Copy testbed custom CA certificate on Debian/Ubuntu] ********************* 2026-02-08 02:44:22.514604 | orchestrator | changed: [testbed-manager] 2026-02-08 02:44:22.514749 | orchestrator | 2026-02-08 02:44:22.514780 | orchestrator | TASK [Copy testbed custom CA certificate on CentOS] **************************** 2026-02-08 02:44:22.553750 | orchestrator | skipping: [testbed-manager] 2026-02-08 02:44:22.553829 | orchestrator | 2026-02-08 02:44:22.553841 | orchestrator | TASK [Run update-ca-certificates on Debian/Ubuntu] ***************************** 2026-02-08 02:44:25.562828 | orchestrator | changed: [testbed-manager] 2026-02-08 02:44:25.562931 | orchestrator | 2026-02-08 02:44:25.562949 | orchestrator | TASK [Run update-ca-trust on RedHat] ******************************************* 2026-02-08 02:44:25.604543 | orchestrator | skipping: [testbed-manager] 2026-02-08 02:44:25.604636 | orchestrator | 2026-02-08 02:44:25.604653 | orchestrator | TASK [Run manager part 2] ****************************************************** 2026-02-08 02:46:11.728607 | orchestrator | changed: [testbed-manager] 2026-02-08 
02:46:11.728721 | orchestrator | 2026-02-08 02:46:11.728737 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2026-02-08 02:46:13.030005 | orchestrator | ok: [testbed-manager] 2026-02-08 02:46:13.030129 | orchestrator | 2026-02-08 02:46:13.030148 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-08 02:46:13.030163 | orchestrator | testbed-manager : ok=21 changed=11 unreachable=0 failed=0 skipped=5 rescued=0 ignored=0 2026-02-08 02:46:13.030175 | orchestrator | 2026-02-08 02:46:13.278605 | orchestrator | ok: Runtime: 0:02:28.155426 2026-02-08 02:46:13.295186 | 2026-02-08 02:46:13.295332 | TASK [Reboot manager] 2026-02-08 02:46:14.830763 | orchestrator | ok: Runtime: 0:00:01.012797 2026-02-08 02:46:14.846161 | 2026-02-08 02:46:14.846308 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"] 2026-02-08 02:46:31.262915 | orchestrator | ok 2026-02-08 02:46:31.273798 | 2026-02-08 02:46:31.273937 | TASK [Wait a little longer for the manager so that everything is ready] 2026-02-08 02:47:31.318787 | orchestrator | ok 2026-02-08 02:47:31.328866 | 2026-02-08 02:47:31.328998 | TASK [Deploy manager + bootstrap nodes] 2026-02-08 02:47:34.043025 | orchestrator | 2026-02-08 02:47:34.043239 | orchestrator | # DEPLOY MANAGER 2026-02-08 02:47:34.043300 | orchestrator | 2026-02-08 02:47:34.043322 | orchestrator | + set -e 2026-02-08 02:47:34.043341 | orchestrator | + echo 2026-02-08 02:47:34.043360 | orchestrator | + echo '# DEPLOY MANAGER' 2026-02-08 02:47:34.043383 | orchestrator | + echo 2026-02-08 02:47:34.043437 | orchestrator | + cat /opt/manager-vars.sh 2026-02-08 02:47:34.046680 | orchestrator | export NUMBER_OF_NODES=6 2026-02-08 02:47:34.046775 | orchestrator | 2026-02-08 02:47:34.046792 | orchestrator | export CEPH_VERSION=reef 2026-02-08 02:47:34.046804 | orchestrator | export CONFIGURATION_VERSION=main 2026-02-08 02:47:34.046814 | orchestrator 
| export MANAGER_VERSION=9.5.0 2026-02-08 02:47:34.046837 | orchestrator | export OPENSTACK_VERSION=2024.2 2026-02-08 02:47:34.046846 | orchestrator | 2026-02-08 02:47:34.046860 | orchestrator | export ARA=false 2026-02-08 02:47:34.046869 | orchestrator | export DEPLOY_MODE=manager 2026-02-08 02:47:34.046882 | orchestrator | export TEMPEST=false 2026-02-08 02:47:34.046890 | orchestrator | export IS_ZUUL=true 2026-02-08 02:47:34.046898 | orchestrator | 2026-02-08 02:47:34.046912 | orchestrator | export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.37 2026-02-08 02:47:34.046921 | orchestrator | export EXTERNAL_API=false 2026-02-08 02:47:34.046929 | orchestrator | 2026-02-08 02:47:34.046937 | orchestrator | export IMAGE_USER=ubuntu 2026-02-08 02:47:34.046948 | orchestrator | export IMAGE_NODE_USER=ubuntu 2026-02-08 02:47:34.046956 | orchestrator | 2026-02-08 02:47:34.046964 | orchestrator | export CEPH_STACK=ceph-ansible 2026-02-08 02:47:34.046980 | orchestrator | 2026-02-08 02:47:34.046988 | orchestrator | + echo 2026-02-08 02:47:34.046998 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-02-08 02:47:34.047566 | orchestrator | ++ export INTERACTIVE=false 2026-02-08 02:47:34.047585 | orchestrator | ++ INTERACTIVE=false 2026-02-08 02:47:34.047594 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-02-08 02:47:34.047603 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-02-08 02:47:34.047666 | orchestrator | + source /opt/manager-vars.sh 2026-02-08 02:47:34.047676 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-02-08 02:47:34.047684 | orchestrator | ++ NUMBER_OF_NODES=6 2026-02-08 02:47:34.047692 | orchestrator | ++ export CEPH_VERSION=reef 2026-02-08 02:47:34.047699 | orchestrator | ++ CEPH_VERSION=reef 2026-02-08 02:47:34.047707 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-02-08 02:47:34.047716 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-02-08 02:47:34.047802 | orchestrator | ++ export MANAGER_VERSION=9.5.0 2026-02-08 02:47:34.047813 | 
orchestrator | ++ MANAGER_VERSION=9.5.0 2026-02-08 02:47:34.047822 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-02-08 02:47:34.047839 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-02-08 02:47:34.047848 | orchestrator | ++ export ARA=false 2026-02-08 02:47:34.047856 | orchestrator | ++ ARA=false 2026-02-08 02:47:34.047864 | orchestrator | ++ export DEPLOY_MODE=manager 2026-02-08 02:47:34.047872 | orchestrator | ++ DEPLOY_MODE=manager 2026-02-08 02:47:34.047880 | orchestrator | ++ export TEMPEST=false 2026-02-08 02:47:34.047893 | orchestrator | ++ TEMPEST=false 2026-02-08 02:47:34.047901 | orchestrator | ++ export IS_ZUUL=true 2026-02-08 02:47:34.047909 | orchestrator | ++ IS_ZUUL=true 2026-02-08 02:47:34.047920 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.37 2026-02-08 02:47:34.047930 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.37 2026-02-08 02:47:34.047938 | orchestrator | ++ export EXTERNAL_API=false 2026-02-08 02:47:34.047946 | orchestrator | ++ EXTERNAL_API=false 2026-02-08 02:47:34.047954 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-02-08 02:47:34.047962 | orchestrator | ++ IMAGE_USER=ubuntu 2026-02-08 02:47:34.047970 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-02-08 02:47:34.047978 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-02-08 02:47:34.047986 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-02-08 02:47:34.047994 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-02-08 02:47:34.048002 | orchestrator | + sudo ln -sf /opt/configuration/contrib/semver2.sh /usr/local/bin/semver 2026-02-08 02:47:34.107935 | orchestrator | + docker version 2026-02-08 02:47:34.214930 | orchestrator | Client: Docker Engine - Community 2026-02-08 02:47:34.215031 | orchestrator | Version: 27.5.1 2026-02-08 02:47:34.215051 | orchestrator | API version: 1.47 2026-02-08 02:47:34.215064 | orchestrator | Go version: go1.22.11 2026-02-08 02:47:34.215078 | orchestrator | Git commit: 9f9e405 2026-02-08 02:47:34.215091 | 
orchestrator | Built: Wed Jan 22 13:41:48 2025 2026-02-08 02:47:34.215106 | orchestrator | OS/Arch: linux/amd64 2026-02-08 02:47:34.215128 | orchestrator | Context: default 2026-02-08 02:47:34.215142 | orchestrator | 2026-02-08 02:47:34.215156 | orchestrator | Server: Docker Engine - Community 2026-02-08 02:47:34.215168 | orchestrator | Engine: 2026-02-08 02:47:34.215176 | orchestrator | Version: 27.5.1 2026-02-08 02:47:34.215185 | orchestrator | API version: 1.47 (minimum version 1.24) 2026-02-08 02:47:34.215220 | orchestrator | Go version: go1.22.11 2026-02-08 02:47:34.215229 | orchestrator | Git commit: 4c9b3b0 2026-02-08 02:47:34.215237 | orchestrator | Built: Wed Jan 22 13:41:48 2025 2026-02-08 02:47:34.215245 | orchestrator | OS/Arch: linux/amd64 2026-02-08 02:47:34.215252 | orchestrator | Experimental: false 2026-02-08 02:47:34.215260 | orchestrator | containerd: 2026-02-08 02:47:34.215304 | orchestrator | Version: v2.2.1 2026-02-08 02:47:34.215314 | orchestrator | GitCommit: dea7da592f5d1d2b7755e3a161be07f43fad8f75 2026-02-08 02:47:34.215324 | orchestrator | runc: 2026-02-08 02:47:34.215337 | orchestrator | Version: 1.3.4 2026-02-08 02:47:34.215350 | orchestrator | GitCommit: v1.3.4-0-gd6d73eb8 2026-02-08 02:47:34.215364 | orchestrator | docker-init: 2026-02-08 02:47:34.215376 | orchestrator | Version: 0.19.0 2026-02-08 02:47:34.215391 | orchestrator | GitCommit: de40ad0 2026-02-08 02:47:34.219698 | orchestrator | + sh -c /opt/configuration/scripts/deploy/000-manager.sh 2026-02-08 02:47:34.231066 | orchestrator | + set -e 2026-02-08 02:47:34.231189 | orchestrator | + source /opt/manager-vars.sh 2026-02-08 02:47:34.231411 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-02-08 02:47:34.231445 | orchestrator | ++ NUMBER_OF_NODES=6 2026-02-08 02:47:34.231466 | orchestrator | ++ export CEPH_VERSION=reef 2026-02-08 02:47:34.231487 | orchestrator | ++ CEPH_VERSION=reef 2026-02-08 02:47:34.231507 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-02-08 
02:47:34.231530 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-02-08 02:47:34.231550 | orchestrator | ++ export MANAGER_VERSION=9.5.0 2026-02-08 02:47:34.231570 | orchestrator | ++ MANAGER_VERSION=9.5.0 2026-02-08 02:47:34.231590 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-02-08 02:47:34.231610 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-02-08 02:47:34.231631 | orchestrator | ++ export ARA=false 2026-02-08 02:47:34.231652 | orchestrator | ++ ARA=false 2026-02-08 02:47:34.231673 | orchestrator | ++ export DEPLOY_MODE=manager 2026-02-08 02:47:34.231693 | orchestrator | ++ DEPLOY_MODE=manager 2026-02-08 02:47:34.231713 | orchestrator | ++ export TEMPEST=false 2026-02-08 02:47:34.231733 | orchestrator | ++ TEMPEST=false 2026-02-08 02:47:34.231753 | orchestrator | ++ export IS_ZUUL=true 2026-02-08 02:47:34.231782 | orchestrator | ++ IS_ZUUL=true 2026-02-08 02:47:34.231801 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.37 2026-02-08 02:47:34.231822 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.37 2026-02-08 02:47:34.231842 | orchestrator | ++ export EXTERNAL_API=false 2026-02-08 02:47:34.231862 | orchestrator | ++ EXTERNAL_API=false 2026-02-08 02:47:34.231881 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-02-08 02:47:34.231899 | orchestrator | ++ IMAGE_USER=ubuntu 2026-02-08 02:47:34.231917 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-02-08 02:47:34.231935 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-02-08 02:47:34.231953 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-02-08 02:47:34.231972 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-02-08 02:47:34.232006 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-02-08 02:47:34.232025 | orchestrator | ++ export INTERACTIVE=false 2026-02-08 02:47:34.232043 | orchestrator | ++ INTERACTIVE=false 2026-02-08 02:47:34.232061 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-02-08 02:47:34.232084 | orchestrator | ++ OSISM_APPLY_RETRY=1 
2026-02-08 02:47:34.232104 | orchestrator | + [[ 9.5.0 != \l\a\t\e\s\t ]]
2026-02-08 02:47:34.232125 | orchestrator | + /opt/configuration/scripts/set-manager-version.sh 9.5.0
2026-02-08 02:47:34.237362 | orchestrator | + set -e
2026-02-08 02:47:34.237447 | orchestrator | + VERSION=9.5.0
2026-02-08 02:47:34.237468 | orchestrator | + sed -i 's/manager_version: .*/manager_version: 9.5.0/g' /opt/configuration/environments/manager/configuration.yml
2026-02-08 02:47:34.245880 | orchestrator | + [[ 9.5.0 != \l\a\t\e\s\t ]]
2026-02-08 02:47:34.245966 | orchestrator | + sed -i /ceph_version:/d /opt/configuration/environments/manager/configuration.yml
2026-02-08 02:47:34.251363 | orchestrator | + sed -i /openstack_version:/d /opt/configuration/environments/manager/configuration.yml
2026-02-08 02:47:34.255444 | orchestrator | + sh -c /opt/configuration/scripts/sync-configuration-repository.sh
2026-02-08 02:47:34.265792 | orchestrator | /opt/configuration ~
2026-02-08 02:47:34.265907 | orchestrator | + set -e
2026-02-08 02:47:34.265933 | orchestrator | + pushd /opt/configuration
2026-02-08 02:47:34.265955 | orchestrator | + [[ -e /opt/venv/bin/activate ]]
2026-02-08 02:47:34.267844 | orchestrator | + source /opt/venv/bin/activate
2026-02-08 02:47:34.269232 | orchestrator | ++ deactivate nondestructive
2026-02-08 02:47:34.269388 | orchestrator | ++ '[' -n '' ']'
2026-02-08 02:47:34.269417 | orchestrator | ++ '[' -n '' ']'
2026-02-08 02:47:34.269469 | orchestrator | ++ hash -r
2026-02-08 02:47:34.269488 | orchestrator | ++ '[' -n '' ']'
2026-02-08 02:47:34.269506 | orchestrator | ++ unset VIRTUAL_ENV
2026-02-08 02:47:34.269522 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT
2026-02-08 02:47:34.269540 | orchestrator | ++ '[' '!' nondestructive = nondestructive ']'
2026-02-08 02:47:34.269556 | orchestrator | ++ '[' linux-gnu = cygwin ']'
2026-02-08 02:47:34.269571 | orchestrator | ++ '[' linux-gnu = msys ']'
2026-02-08 02:47:34.269585 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv
2026-02-08 02:47:34.269599 | orchestrator | ++ VIRTUAL_ENV=/opt/venv
2026-02-08 02:47:34.269615 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2026-02-08 02:47:34.269633 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2026-02-08 02:47:34.269649 | orchestrator | ++ export PATH
2026-02-08 02:47:34.269666 | orchestrator | ++ '[' -n '' ']'
2026-02-08 02:47:34.269695 | orchestrator | ++ '[' -z '' ']'
2026-02-08 02:47:34.269716 | orchestrator | ++ _OLD_VIRTUAL_PS1=
2026-02-08 02:47:34.269733 | orchestrator | ++ PS1='(venv) '
2026-02-08 02:47:34.269751 | orchestrator | ++ export PS1
2026-02-08 02:47:34.269769 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) '
2026-02-08 02:47:34.269785 | orchestrator | ++ export VIRTUAL_ENV_PROMPT
2026-02-08 02:47:34.269803 | orchestrator | ++ hash -r
2026-02-08 02:47:34.269820 | orchestrator | + pip3 install --no-cache-dir python-gilt==1.2.3 requests Jinja2 PyYAML packaging
2026-02-08 02:47:35.528655 | orchestrator | Requirement already satisfied: python-gilt==1.2.3 in /opt/venv/lib/python3.12/site-packages (1.2.3)
2026-02-08 02:47:35.529329 | orchestrator | Requirement already satisfied: requests in /opt/venv/lib/python3.12/site-packages (2.32.5)
2026-02-08 02:47:35.530741 | orchestrator | Requirement already satisfied: Jinja2 in /opt/venv/lib/python3.12/site-packages (3.1.6)
2026-02-08 02:47:35.532482 | orchestrator | Requirement already satisfied: PyYAML in /opt/venv/lib/python3.12/site-packages (6.0.3)
2026-02-08 02:47:35.533815 | orchestrator | Requirement already satisfied: packaging in /opt/venv/lib/python3.12/site-packages (26.0)
2026-02-08 02:47:35.544385 | orchestrator | Requirement already satisfied: click in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (8.3.1)
2026-02-08 02:47:35.545631 | orchestrator | Requirement already satisfied: colorama in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (0.4.6)
2026-02-08 02:47:35.546662 | orchestrator | Requirement already satisfied: fasteners in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (0.20)
2026-02-08 02:47:35.548333 | orchestrator | Requirement already satisfied: sh in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (2.2.2)
2026-02-08 02:47:35.583365 | orchestrator | Requirement already satisfied: charset_normalizer<4,>=2 in /opt/venv/lib/python3.12/site-packages (from requests) (3.4.4)
2026-02-08 02:47:35.584583 | orchestrator | Requirement already satisfied: idna<4,>=2.5 in /opt/venv/lib/python3.12/site-packages (from requests) (3.11)
2026-02-08 02:47:35.586247 | orchestrator | Requirement already satisfied: urllib3<3,>=1.21.1 in /opt/venv/lib/python3.12/site-packages (from requests) (2.6.3)
2026-02-08 02:47:35.587783 | orchestrator | Requirement already satisfied: certifi>=2017.4.17 in /opt/venv/lib/python3.12/site-packages (from requests) (2026.1.4)
2026-02-08 02:47:35.591782 | orchestrator | Requirement already satisfied: MarkupSafe>=2.0 in /opt/venv/lib/python3.12/site-packages (from Jinja2) (3.0.3)
2026-02-08 02:47:35.807949 | orchestrator | ++ which gilt
2026-02-08 02:47:35.811868 | orchestrator | + GILT=/opt/venv/bin/gilt
2026-02-08 02:47:35.811937 | orchestrator | + /opt/venv/bin/gilt overlay
2026-02-08 02:47:36.042704 | orchestrator | osism.cfg-generics:
2026-02-08 02:47:36.168779 | orchestrator | - copied (v0.20251130.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/environments/manager/images.yml to /opt/configuration/environments/manager/
2026-02-08 02:47:36.169911 | orchestrator | - copied (v0.20251130.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/src/render-images.py to /opt/configuration/environments/manager/
2026-02-08 02:47:36.171236 | orchestrator | - copied (v0.20251130.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/src/set-versions.py to /opt/configuration/environments/
2026-02-08 02:47:36.171348 | orchestrator | - running `/opt/configuration/scripts/wrapper-gilt.sh render-images` in /opt/configuration/environments/manager/
2026-02-08 02:47:36.791127 | orchestrator | - running `rm render-images.py` in /opt/configuration/environments/manager/
2026-02-08 02:47:36.801020 | orchestrator | - running `/opt/configuration/scripts/wrapper-gilt.sh set-versions` in /opt/configuration/environments/
2026-02-08 02:47:37.164617 | orchestrator | - running `rm set-versions.py` in /opt/configuration/environments/
2026-02-08 02:47:37.211819 | orchestrator | + [[ -e /opt/venv/bin/activate ]]
2026-02-08 02:47:37.211912 | orchestrator | + deactivate
2026-02-08 02:47:37.211928 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']'
2026-02-08 02:47:37.211942 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2026-02-08 02:47:37.211953 | orchestrator | ~
2026-02-08 02:47:37.211965 | orchestrator | + export PATH
2026-02-08 02:47:37.211976 | orchestrator | + unset _OLD_VIRTUAL_PATH
2026-02-08 02:47:37.211988 | orchestrator | + '[' -n '' ']'
2026-02-08 02:47:37.212002 | orchestrator | + hash -r
2026-02-08 02:47:37.212013 | orchestrator | + '[' -n '' ']'
2026-02-08 02:47:37.212024 | orchestrator | + unset VIRTUAL_ENV
2026-02-08 02:47:37.212034 | orchestrator | + unset VIRTUAL_ENV_PROMPT
2026-02-08 02:47:37.212045 | orchestrator | + '[' '!' '' = nondestructive ']'
2026-02-08 02:47:37.212057 | orchestrator | + unset -f deactivate
2026-02-08 02:47:37.212068 | orchestrator | + popd
2026-02-08 02:47:37.213178 | orchestrator | + [[ 9.5.0 == \l\a\t\e\s\t ]]
2026-02-08 02:47:37.213200 | orchestrator | + [[ ceph-ansible == \r\o\o\k ]]
2026-02-08 02:47:37.214147 | orchestrator | ++ semver 9.5.0 7.0.0
2026-02-08 02:47:37.279483 | orchestrator | + [[ 1 -ge 0 ]]
2026-02-08 02:47:37.279583 | orchestrator | + echo 'enable_osism_kubernetes: true'
2026-02-08 02:47:37.280154 | orchestrator | ++ semver 9.5.0 10.0.0-0
2026-02-08 02:47:37.341324 | orchestrator | + [[ -1 -ge 0 ]]
2026-02-08 02:47:37.341569 | orchestrator | ++ semver 2024.2 2025.1
2026-02-08 02:47:37.398890 | orchestrator | + [[ -1 -ge 0 ]]
2026-02-08 02:47:37.398986 | orchestrator | + /opt/configuration/scripts/enable-resource-nodes.sh
2026-02-08 02:47:37.497222 | orchestrator | + [[ -e /opt/venv/bin/activate ]]
2026-02-08 02:47:37.497334 | orchestrator | + source /opt/venv/bin/activate
2026-02-08 02:47:37.497350 | orchestrator | ++ deactivate nondestructive
2026-02-08 02:47:37.497362 | orchestrator | ++ '[' -n '' ']'
2026-02-08 02:47:37.497374 | orchestrator | ++ '[' -n '' ']'
2026-02-08 02:47:37.497385 | orchestrator | ++ hash -r
2026-02-08 02:47:37.497396 | orchestrator | ++ '[' -n '' ']'
2026-02-08 02:47:37.497407 | orchestrator | ++ unset VIRTUAL_ENV
2026-02-08 02:47:37.497418 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT
2026-02-08 02:47:37.497428 | orchestrator | ++ '[' '!' nondestructive = nondestructive ']'
2026-02-08 02:47:37.497440 | orchestrator | ++ '[' linux-gnu = cygwin ']'
2026-02-08 02:47:37.497452 | orchestrator | ++ '[' linux-gnu = msys ']'
2026-02-08 02:47:37.497464 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv
2026-02-08 02:47:37.497475 | orchestrator | ++ VIRTUAL_ENV=/opt/venv
2026-02-08 02:47:37.497486 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2026-02-08 02:47:37.497521 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2026-02-08 02:47:37.497534 | orchestrator | ++ export PATH
2026-02-08 02:47:37.497541 | orchestrator | ++ '[' -n '' ']'
2026-02-08 02:47:37.497547 | orchestrator | ++ '[' -z '' ']'
2026-02-08 02:47:37.497554 | orchestrator | ++ _OLD_VIRTUAL_PS1=
2026-02-08 02:47:37.497560 | orchestrator | ++ PS1='(venv) '
2026-02-08 02:47:37.497567 | orchestrator | ++ export PS1
2026-02-08 02:47:37.497573 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) '
2026-02-08 02:47:37.497579 | orchestrator | ++ export VIRTUAL_ENV_PROMPT
2026-02-08 02:47:37.497586 | orchestrator | ++ hash -r
2026-02-08 02:47:37.497592 | orchestrator | + ansible-playbook -i testbed-manager, --vault-password-file /opt/configuration/environments/.vault_pass /opt/configuration/ansible/manager-part-3.yml
2026-02-08 02:47:38.909677 | orchestrator |
2026-02-08 02:47:38.909785 | orchestrator | PLAY [Copy custom facts] *******************************************************
2026-02-08 02:47:38.909801 | orchestrator |
2026-02-08 02:47:38.909813 | orchestrator | TASK [Create custom facts directory] *******************************************
2026-02-08 02:47:39.600211 | orchestrator | ok: [testbed-manager]
2026-02-08 02:47:39.600329 | orchestrator |
2026-02-08 02:47:39.600341 | orchestrator | TASK [Copy fact files] *********************************************************
2026-02-08 02:47:40.848767 | orchestrator | changed: [testbed-manager]
2026-02-08 02:47:40.848844 | orchestrator |
2026-02-08 02:47:40.848852 | orchestrator | PLAY [Before the deployment of the manager] ************************************
2026-02-08 02:47:40.848878 | orchestrator |
2026-02-08 02:47:40.848884 | orchestrator | TASK [Gathering Facts] *********************************************************
2026-02-08 02:47:43.385301 | orchestrator | ok: [testbed-manager]
2026-02-08 02:47:43.385390 | orchestrator |
2026-02-08 02:47:43.385401 | orchestrator | TASK [Get /opt/manager-vars.sh] ************************************************
2026-02-08 02:47:43.438192 | orchestrator | ok: [testbed-manager]
2026-02-08 02:47:43.438326 | orchestrator |
2026-02-08 02:47:43.438346 | orchestrator | TASK [Add ara_server_mariadb_volume_type parameter] ****************************
2026-02-08 02:47:43.918814 | orchestrator | changed: [testbed-manager]
2026-02-08 02:47:43.918969 | orchestrator |
2026-02-08 02:47:43.918991 | orchestrator | TASK [Add netbox_enable parameter] *********************************************
2026-02-08 02:47:43.963761 | orchestrator | skipping: [testbed-manager]
2026-02-08 02:47:43.963862 | orchestrator |
2026-02-08 02:47:43.963876 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************
2026-02-08 02:47:44.341710 | orchestrator | changed: [testbed-manager]
2026-02-08 02:47:44.341810 | orchestrator |
2026-02-08 02:47:44.341825 | orchestrator | TASK [Check if /etc/OTC_region exist] ******************************************
2026-02-08 02:47:44.830524 | orchestrator | ok: [testbed-manager]
2026-02-08 02:47:44.830610 | orchestrator |
2026-02-08 02:47:44.830621 | orchestrator | TASK [Add nova_compute_virt_type parameter] ************************************
2026-02-08 02:47:44.962262 | orchestrator | skipping: [testbed-manager]
2026-02-08 02:47:44.962413 | orchestrator |
2026-02-08 02:47:44.962440 | orchestrator | PLAY [Apply role traefik] ******************************************************
2026-02-08 02:47:44.962459 | orchestrator |
2026-02-08 02:47:44.962479 | orchestrator | TASK [Gathering Facts] *********************************************************
2026-02-08 02:47:46.817964 | orchestrator | ok: [testbed-manager]
2026-02-08 02:47:46.818081 | orchestrator |
2026-02-08 02:47:46.818093 | orchestrator | TASK [Apply traefik role] ******************************************************
2026-02-08 02:47:46.935594 | orchestrator | included: osism.services.traefik for testbed-manager
2026-02-08 02:47:46.935687 | orchestrator |
2026-02-08 02:47:46.935698 | orchestrator | TASK [osism.services.traefik : Include config tasks] ***************************
2026-02-08 02:47:47.001733 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/config.yml for testbed-manager
2026-02-08 02:47:47.001829 | orchestrator |
2026-02-08 02:47:47.001839 | orchestrator | TASK [osism.services.traefik : Create required directories] ********************
2026-02-08 02:47:48.138400 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik)
2026-02-08 02:47:48.138521 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/certificates)
2026-02-08 02:47:48.138544 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/configuration)
2026-02-08 02:47:48.138560 | orchestrator |
2026-02-08 02:47:48.138580 | orchestrator | TASK [osism.services.traefik : Copy configuration files] ***********************
2026-02-08 02:47:50.032477 | orchestrator | changed: [testbed-manager] => (item=traefik.yml)
2026-02-08 02:47:50.032595 | orchestrator | changed: [testbed-manager] => (item=traefik.env)
2026-02-08 02:47:50.032610 | orchestrator | changed: [testbed-manager] => (item=certificates.yml)
2026-02-08 02:47:50.032624 | orchestrator |
2026-02-08 02:47:50.032637 | orchestrator | TASK [osism.services.traefik : Copy certificate cert files] ********************
2026-02-08 02:47:50.691408 | orchestrator | changed: [testbed-manager] => (item=None)
2026-02-08 02:47:50.691512 | orchestrator | changed: [testbed-manager]
2026-02-08 02:47:50.691530 | orchestrator |
2026-02-08 02:47:50.691544 | orchestrator | TASK [osism.services.traefik : Copy certificate key files] *********************
2026-02-08 02:47:51.337652 | orchestrator | changed: [testbed-manager] => (item=None)
2026-02-08 02:47:51.337759 | orchestrator | changed: [testbed-manager]
2026-02-08 02:47:51.337777 | orchestrator |
2026-02-08 02:47:51.337790 | orchestrator | TASK [osism.services.traefik : Copy dynamic configuration] *********************
2026-02-08 02:47:51.401584 | orchestrator | skipping: [testbed-manager]
2026-02-08 02:47:51.401704 | orchestrator |
2026-02-08 02:47:51.401845 | orchestrator | TASK [osism.services.traefik : Remove dynamic configuration] *******************
2026-02-08 02:47:51.771258 | orchestrator | ok: [testbed-manager]
2026-02-08 02:47:51.771433 | orchestrator |
2026-02-08 02:47:51.771449 | orchestrator | TASK [osism.services.traefik : Include service tasks] **************************
2026-02-08 02:47:51.838385 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/service.yml for testbed-manager
2026-02-08 02:47:51.838479 | orchestrator |
2026-02-08 02:47:51.838503 | orchestrator | TASK [osism.services.traefik : Create traefik external network] ****************
2026-02-08 02:47:52.962674 | orchestrator | changed: [testbed-manager]
2026-02-08 02:47:52.962781 | orchestrator |
2026-02-08 02:47:52.962798 | orchestrator | TASK [osism.services.traefik : Copy docker-compose.yml file] *******************
2026-02-08 02:47:53.812233 | orchestrator | changed: [testbed-manager]
2026-02-08 02:47:53.812392 | orchestrator |
2026-02-08 02:47:53.812410 | orchestrator | TASK [osism.services.traefik : Manage traefik service] *************************
2026-02-08 02:48:11.052674 | orchestrator | changed: [testbed-manager]
2026-02-08 02:48:11.052759 | orchestrator |
2026-02-08 02:48:11.052771 | orchestrator | RUNNING HANDLER [osism.services.traefik : Restart traefik service] *************
2026-02-08 02:48:11.107239 | orchestrator | skipping: [testbed-manager]
2026-02-08 02:48:11.107397 | orchestrator |
2026-02-08 02:48:11.107442 | orchestrator | PLAY [Deploy manager service] **************************************************
2026-02-08 02:48:11.107456 | orchestrator |
2026-02-08 02:48:11.107468 | orchestrator | TASK [Gathering Facts] *********************************************************
2026-02-08 02:48:12.986631 | orchestrator | ok: [testbed-manager]
2026-02-08 02:48:12.986737 | orchestrator |
2026-02-08 02:48:12.986754 | orchestrator | TASK [Apply manager role] ******************************************************
2026-02-08 02:48:13.094333 | orchestrator | included: osism.services.manager for testbed-manager
2026-02-08 02:48:13.094432 | orchestrator |
2026-02-08 02:48:13.094447 | orchestrator | TASK [osism.services.manager : Include install tasks] **************************
2026-02-08 02:48:13.162077 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/install-Debian-family.yml for testbed-manager
2026-02-08 02:48:13.162169 | orchestrator |
2026-02-08 02:48:13.162183 | orchestrator | TASK [osism.services.manager : Install required packages] **********************
2026-02-08 02:48:15.917152 | orchestrator | ok: [testbed-manager]
2026-02-08 02:48:15.917251 | orchestrator |
2026-02-08 02:48:15.917282 | orchestrator | TASK [osism.services.manager : Gather variables for each operating system] *****
2026-02-08 02:48:15.967201 | orchestrator | ok: [testbed-manager]
2026-02-08 02:48:15.967348 | orchestrator |
2026-02-08 02:48:15.967366 | orchestrator | TASK [osism.services.manager : Include config tasks] ***************************
2026-02-08 02:48:16.118698 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config.yml for testbed-manager
2026-02-08 02:48:16.118839 | orchestrator |
2026-02-08 02:48:16.118857 | orchestrator | TASK [osism.services.manager : Create required directories] ********************
2026-02-08 02:48:19.128345 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible)
2026-02-08 02:48:19.128448 | orchestrator | changed: [testbed-manager] => (item=/opt/archive)
2026-02-08 02:48:19.128465 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/configuration)
2026-02-08 02:48:19.128477 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/data)
2026-02-08 02:48:19.128488 | orchestrator | ok: [testbed-manager] => (item=/opt/manager)
2026-02-08 02:48:19.128500 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/secrets)
2026-02-08 02:48:19.128510 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible/secrets)
2026-02-08 02:48:19.128522 | orchestrator | changed: [testbed-manager] => (item=/opt/state)
2026-02-08 02:48:19.128533 | orchestrator |
2026-02-08 02:48:19.128552 | orchestrator | TASK [osism.services.manager : Copy all environment file] **********************
2026-02-08 02:48:19.798117 | orchestrator | changed: [testbed-manager]
2026-02-08 02:48:19.798207 | orchestrator |
2026-02-08 02:48:19.798221 | orchestrator | TASK [osism.services.manager : Copy client environment file] *******************
2026-02-08 02:48:20.446560 | orchestrator | changed: [testbed-manager]
2026-02-08 02:48:20.446682 | orchestrator |
2026-02-08 02:48:20.446702 | orchestrator | TASK [osism.services.manager : Include ara config tasks] ***********************
2026-02-08 02:48:20.515482 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ara.yml for testbed-manager
2026-02-08 02:48:20.515587 | orchestrator |
2026-02-08 02:48:20.515605 | orchestrator | TASK [osism.services.manager : Copy ARA environment files] *********************
2026-02-08 02:48:21.776871 | orchestrator | changed: [testbed-manager] => (item=ara)
2026-02-08 02:48:21.776982 | orchestrator | changed: [testbed-manager] => (item=ara-server)
2026-02-08 02:48:21.776999 | orchestrator |
2026-02-08 02:48:21.777012 | orchestrator | TASK [osism.services.manager : Copy MariaDB environment file] ******************
2026-02-08 02:48:22.586306 | orchestrator | changed: [testbed-manager]
2026-02-08 02:48:22.586413 | orchestrator |
2026-02-08 02:48:22.586432 | orchestrator | TASK [osism.services.manager : Include vault config tasks] *********************
2026-02-08 02:48:22.639053 | orchestrator | skipping: [testbed-manager]
2026-02-08 02:48:22.639183 | orchestrator |
2026-02-08 02:48:22.639212 | orchestrator | TASK [osism.services.manager : Include frontend config tasks] ******************
2026-02-08 02:48:22.723748 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-frontend.yml for testbed-manager
2026-02-08 02:48:22.723854 | orchestrator |
2026-02-08 02:48:22.723871 | orchestrator | TASK [osism.services.manager : Copy frontend environment file] *****************
2026-02-08 02:48:23.370520 | orchestrator | changed: [testbed-manager]
2026-02-08 02:48:23.370623 | orchestrator |
2026-02-08 02:48:23.370641 | orchestrator | TASK [osism.services.manager : Include ansible config tasks] *******************
2026-02-08 02:48:23.434218 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ansible.yml for testbed-manager
2026-02-08 02:48:23.434343 | orchestrator |
2026-02-08 02:48:23.434359 | orchestrator | TASK [osism.services.manager : Copy private ssh keys] **************************
2026-02-08 02:48:24.848776 | orchestrator | changed: [testbed-manager] => (item=None)
2026-02-08 02:48:24.848870 | orchestrator | changed: [testbed-manager] => (item=None)
2026-02-08 02:48:24.848883 | orchestrator | changed: [testbed-manager]
2026-02-08 02:48:24.848893 | orchestrator |
2026-02-08 02:48:24.848902 | orchestrator | TASK [osism.services.manager : Copy ansible environment file] ******************
2026-02-08 02:48:25.486319 | orchestrator | changed: [testbed-manager]
2026-02-08 02:48:25.486423 | orchestrator |
2026-02-08 02:48:25.486441 | orchestrator | TASK [osism.services.manager : Include netbox config tasks] ********************
2026-02-08 02:48:25.526380 | orchestrator | skipping: [testbed-manager]
2026-02-08 02:48:25.526515 | orchestrator |
2026-02-08 02:48:25.526543 | orchestrator | TASK [osism.services.manager : Include celery config tasks] ********************
2026-02-08 02:48:25.642649 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-celery.yml for testbed-manager
2026-02-08 02:48:25.642747 | orchestrator |
2026-02-08 02:48:25.642764 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_watches] ****************
2026-02-08 02:48:26.162189 | orchestrator | changed: [testbed-manager]
2026-02-08 02:48:26.162337 | orchestrator |
2026-02-08 02:48:26.162356 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_instances] **************
2026-02-08 02:48:26.548051 | orchestrator | changed: [testbed-manager]
2026-02-08 02:48:26.548156 | orchestrator |
2026-02-08 02:48:26.548170 | orchestrator | TASK [osism.services.manager : Copy celery environment files] ******************
2026-02-08 02:48:27.781040 | orchestrator | changed: [testbed-manager] => (item=conductor)
2026-02-08 02:48:27.781143 | orchestrator | changed: [testbed-manager] => (item=openstack)
2026-02-08 02:48:27.781157 | orchestrator |
2026-02-08 02:48:27.781169 | orchestrator | TASK [osism.services.manager : Copy listener environment file] *****************
2026-02-08 02:48:28.432766 | orchestrator | changed: [testbed-manager]
2026-02-08 02:48:28.432853 | orchestrator |
2026-02-08 02:48:28.432864 | orchestrator | TASK [osism.services.manager : Check for conductor.yml] ************************
2026-02-08 02:48:28.799546 | orchestrator | ok: [testbed-manager]
2026-02-08 02:48:28.799689 | orchestrator |
2026-02-08 02:48:28.799717 | orchestrator | TASK [osism.services.manager : Copy conductor configuration file] **************
2026-02-08 02:48:29.173312 | orchestrator | changed: [testbed-manager]
2026-02-08 02:48:29.173418 | orchestrator |
2026-02-08 02:48:29.173435 | orchestrator | TASK [osism.services.manager : Copy empty conductor configuration file] ********
2026-02-08 02:48:29.225629 | orchestrator | skipping: [testbed-manager]
2026-02-08 02:48:29.225705 | orchestrator |
2026-02-08 02:48:29.225715 | orchestrator | TASK [osism.services.manager : Include wrapper config tasks] *******************
2026-02-08 02:48:29.298953 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-wrapper.yml for testbed-manager
2026-02-08 02:48:29.299082 | orchestrator |
2026-02-08 02:48:29.299098 | orchestrator | TASK [osism.services.manager : Include wrapper vars file] **********************
2026-02-08 02:48:29.353853 | orchestrator | ok: [testbed-manager]
2026-02-08 02:48:29.353975 | orchestrator |
2026-02-08 02:48:29.354001 | orchestrator | TASK [osism.services.manager : Copy wrapper scripts] ***************************
2026-02-08 02:48:31.424925 | orchestrator | changed: [testbed-manager] => (item=osism)
2026-02-08 02:48:31.425021 | orchestrator | changed: [testbed-manager] => (item=osism-update-docker)
2026-02-08 02:48:31.425032 | orchestrator | changed: [testbed-manager] => (item=osism-update-manager)
2026-02-08 02:48:31.425040 | orchestrator |
2026-02-08 02:48:31.425048 | orchestrator | TASK [osism.services.manager : Copy cilium wrapper script] *********************
2026-02-08 02:48:32.155028 | orchestrator | changed: [testbed-manager]
2026-02-08 02:48:32.155128 | orchestrator |
2026-02-08 02:48:32.155145 | orchestrator | TASK [osism.services.manager : Copy hubble wrapper script] *********************
2026-02-08 02:48:32.896692 | orchestrator | changed: [testbed-manager]
2026-02-08 02:48:32.896782 | orchestrator |
2026-02-08 02:48:32.896789 | orchestrator | TASK [osism.services.manager : Copy flux wrapper script] ***********************
2026-02-08 02:48:33.633053 | orchestrator | changed: [testbed-manager]
2026-02-08 02:48:33.633143 | orchestrator |
2026-02-08 02:48:33.633156 | orchestrator | TASK [osism.services.manager : Include scripts config tasks] *******************
2026-02-08 02:48:33.729960 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-scripts.yml for testbed-manager
2026-02-08 02:48:33.730106 | orchestrator |
2026-02-08 02:48:33.730130 | orchestrator | TASK [osism.services.manager : Include scripts vars file] **********************
2026-02-08 02:48:33.779756 | orchestrator | ok: [testbed-manager]
2026-02-08 02:48:33.779849 | orchestrator |
2026-02-08 02:48:33.779866 | orchestrator | TASK [osism.services.manager : Copy scripts] ***********************************
2026-02-08 02:48:34.559942 | orchestrator | changed: [testbed-manager] => (item=osism-include)
2026-02-08 02:48:34.560032 | orchestrator |
2026-02-08 02:48:34.560045 | orchestrator | TASK [osism.services.manager : Include service tasks] **************************
2026-02-08 02:48:34.651116 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/service.yml for testbed-manager
2026-02-08 02:48:34.651210 | orchestrator |
2026-02-08 02:48:34.651225 | orchestrator | TASK [osism.services.manager : Copy manager systemd unit file] *****************
2026-02-08 02:48:35.413419 | orchestrator | changed: [testbed-manager]
2026-02-08 02:48:35.413492 | orchestrator |
2026-02-08 02:48:35.413499 | orchestrator | TASK [osism.services.manager : Create traefik external network] ****************
2026-02-08 02:48:36.013809 | orchestrator | ok: [testbed-manager]
2026-02-08 02:48:36.013911 | orchestrator |
2026-02-08 02:48:36.013926 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb < 11.0.0] ***
2026-02-08 02:48:36.065302 | orchestrator | skipping: [testbed-manager]
2026-02-08 02:48:36.065377 | orchestrator |
2026-02-08 02:48:36.065388 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb >= 11.0.0] ***
2026-02-08 02:48:36.127218 | orchestrator | ok: [testbed-manager]
2026-02-08 02:48:36.127373 | orchestrator |
2026-02-08 02:48:36.127399 | orchestrator | TASK [osism.services.manager : Copy docker-compose.yml file] *******************
2026-02-08 02:48:36.990450 | orchestrator | changed: [testbed-manager]
2026-02-08 02:48:36.990560 | orchestrator |
2026-02-08 02:48:36.990577 | orchestrator | TASK [osism.services.manager : Pull container images] **************************
2026-02-08 02:49:48.373578 | orchestrator | changed: [testbed-manager]
2026-02-08 02:49:48.373677 | orchestrator |
2026-02-08 02:49:48.373687 | orchestrator | TASK [osism.services.manager : Stop and disable old service docker-compose@manager] ***
2026-02-08 02:49:49.408374 | orchestrator | ok: [testbed-manager]
2026-02-08 02:49:49.408512 | orchestrator |
2026-02-08 02:49:49.408530 | orchestrator | TASK [osism.services.manager : Do a manual start of the manager service] *******
2026-02-08 02:49:49.463461 | orchestrator | skipping: [testbed-manager]
2026-02-08 02:49:49.463597 | orchestrator |
2026-02-08 02:49:49.463628 | orchestrator | TASK [osism.services.manager : Manage manager service] *************************
2026-02-08 02:49:52.643480 | orchestrator | changed: [testbed-manager]
2026-02-08 02:49:52.643585 | orchestrator |
2026-02-08 02:49:52.643602 | orchestrator | TASK [osism.services.manager : Register that manager service was started] ******
2026-02-08 02:49:52.691840 | orchestrator | ok: [testbed-manager]
2026-02-08 02:49:52.691962 | orchestrator |
2026-02-08 02:49:52.691987 | orchestrator | TASK [osism.services.manager : Flush handlers] *********************************
2026-02-08 02:49:52.692007 | orchestrator |
2026-02-08 02:49:52.692026 | orchestrator | RUNNING HANDLER [osism.services.manager : Restart manager service] *************
2026-02-08 02:49:52.845301 | orchestrator | skipping: [testbed-manager]
2026-02-08 02:49:52.845425 | orchestrator |
2026-02-08 02:49:52.845451 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for manager service to start] ***
2026-02-08 02:50:52.908034 | orchestrator | Pausing for 60 seconds
2026-02-08 02:50:52.908155 | orchestrator | changed: [testbed-manager]
2026-02-08 02:50:52.908173 | orchestrator |
2026-02-08 02:50:52.908187 | orchestrator | RUNNING HANDLER [osism.services.manager : Ensure that all containers are up] ***
2026-02-08 02:50:56.089732 | orchestrator | changed: [testbed-manager]
2026-02-08 02:50:56.089836 | orchestrator |
2026-02-08 02:50:56.089864 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for an healthy manager service] ***
2026-02-08 02:51:58.054307 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (50 retries left).
2026-02-08 02:51:58.054422 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (49 retries left).
2026-02-08 02:51:58.054459 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (48 retries left).
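The `Wait for an healthy manager service` handler above polls until the service reports healthy, logging `FAILED - RETRYING` messages with a 50-attempt budget. An equivalent task can be sketched with Ansible's `until`/`retries`/`delay` loop (the task body below is an assumption, not taken from the osism.services.manager role source; the container name `manager` and the 5-second delay are guesses):

```yaml
- name: Wait for a healthy manager service
  ansible.builtin.command: >-
    docker inspect --format {% raw %}'{{.State.Health.Status}}'{% endraw %} manager
  register: manager_health
  changed_when: false
  until: manager_health.stdout == "healthy"
  retries: 50
  delay: 5
```

Each failed attempt produces one of the `FAILED - RETRYING: ... (N retries left)` lines seen in the log; the task only fails the play once the retry budget is exhausted.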
2026-02-08 02:51:58.054473 | orchestrator | changed: [testbed-manager]
2026-02-08 02:51:58.054486 | orchestrator |
2026-02-08 02:51:58.054498 | orchestrator | RUNNING HANDLER [osism.services.manager : Copy osismclient bash completion script] ***
2026-02-08 02:52:09.371406 | orchestrator | changed: [testbed-manager]
2026-02-08 02:52:09.371551 | orchestrator |
2026-02-08 02:52:09.371572 | orchestrator | TASK [osism.services.manager : Include initialize tasks] ***********************
2026-02-08 02:52:09.450890 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/initialize.yml for testbed-manager
2026-02-08 02:52:09.450988 | orchestrator |
2026-02-08 02:52:09.451004 | orchestrator | TASK [osism.services.manager : Flush handlers] *********************************
2026-02-08 02:52:09.451025 | orchestrator |
2026-02-08 02:52:09.451043 | orchestrator | TASK [osism.services.manager : Include vault initialize tasks] *****************
2026-02-08 02:52:09.511184 | orchestrator | skipping: [testbed-manager]
2026-02-08 02:52:09.511308 | orchestrator |
2026-02-08 02:52:09.511322 | orchestrator | TASK [osism.services.manager : Include version verification tasks] *************
2026-02-08 02:52:09.593485 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/verify-versions.yml for testbed-manager
2026-02-08 02:52:09.593592 | orchestrator |
2026-02-08 02:52:09.593600 | orchestrator | TASK [osism.services.manager : Deploy service manager version check script] ****
2026-02-08 02:52:10.397686 | orchestrator | changed: [testbed-manager]
2026-02-08 02:52:10.397767 | orchestrator |
2026-02-08 02:52:10.397778 | orchestrator | TASK [osism.services.manager : Execute service manager version check] **********
2026-02-08 02:52:13.515142 | orchestrator | ok: [testbed-manager]
2026-02-08 02:52:13.515271 | orchestrator |
2026-02-08 02:52:13.515289 | orchestrator | TASK [osism.services.manager : Display version check results] ******************
2026-02-08 02:52:13.584908 | orchestrator | ok: [testbed-manager] => {
2026-02-08 02:52:13.584981 | orchestrator | "version_check_result.stdout_lines": [
2026-02-08 02:52:13.584990 | orchestrator | "=== OSISM Container Version Check ===",
2026-02-08 02:52:13.584997 | orchestrator | "Checking running containers against expected versions...",
2026-02-08 02:52:13.585004 | orchestrator | "",
2026-02-08 02:52:13.585011 | orchestrator | "Checking service: inventory_reconciler (Inventory Reconciler Service)",
2026-02-08 02:52:13.585017 | orchestrator | " Expected: registry.osism.tech/osism/inventory-reconciler:0.20251130.0",
2026-02-08 02:52:13.585024 | orchestrator | " Enabled: true",
2026-02-08 02:52:13.585031 | orchestrator | " Running: registry.osism.tech/osism/inventory-reconciler:0.20251130.0",
2026-02-08 02:52:13.585037 | orchestrator | " Status: ✅ MATCH",
2026-02-08 02:52:13.585043 | orchestrator | "",
2026-02-08 02:52:13.585049 | orchestrator | "Checking service: osism-ansible (OSISM Ansible Service)",
2026-02-08 02:52:13.585077 | orchestrator | " Expected: registry.osism.tech/osism/osism-ansible:0.20251130.0",
2026-02-08 02:52:13.585087 | orchestrator | " Enabled: true",
2026-02-08 02:52:13.585097 | orchestrator | " Running: registry.osism.tech/osism/osism-ansible:0.20251130.0",
2026-02-08 02:52:13.585105 | orchestrator | " Status: ✅ MATCH",
2026-02-08 02:52:13.585115 | orchestrator | "",
2026-02-08 02:52:13.585124 | orchestrator | "Checking service: osism-kubernetes (Osism-Kubernetes Service)",
2026-02-08 02:52:13.585135 | orchestrator | " Expected: registry.osism.tech/osism/osism-kubernetes:0.20251130.0",
2026-02-08 02:52:13.585145 | orchestrator | " Enabled: true",
2026-02-08 02:52:13.585154 | orchestrator | " Running: registry.osism.tech/osism/osism-kubernetes:0.20251130.0",
2026-02-08 02:52:13.585161 | orchestrator | " Status: ✅ MATCH",
2026-02-08 02:52:13.585166 | orchestrator | "",
2026-02-08 02:52:13.585172 | orchestrator | "Checking service: ceph-ansible (Ceph-Ansible Service)",
2026-02-08 02:52:13.585178 | orchestrator | " Expected: registry.osism.tech/osism/ceph-ansible:0.20251130.0",
2026-02-08 02:52:13.585184 | orchestrator | " Enabled: true",
2026-02-08 02:52:13.585190 | orchestrator | " Running: registry.osism.tech/osism/ceph-ansible:0.20251130.0",
2026-02-08 02:52:13.585196 | orchestrator | " Status: ✅ MATCH",
2026-02-08 02:52:13.585202 | orchestrator | "",
2026-02-08 02:52:13.585210 | orchestrator | "Checking service: kolla-ansible (Kolla-Ansible Service)",
2026-02-08 02:52:13.585215 | orchestrator | " Expected: registry.osism.tech/osism/kolla-ansible:0.20251130.0",
2026-02-08 02:52:13.585272 | orchestrator | " Enabled: true",
2026-02-08 02:52:13.585279 | orchestrator | " Running: registry.osism.tech/osism/kolla-ansible:0.20251130.0",
2026-02-08 02:52:13.585285 | orchestrator | " Status: ✅ MATCH",
2026-02-08 02:52:13.585290 | orchestrator | "",
2026-02-08 02:52:13.585296 | orchestrator | "Checking service: osismclient (OSISM Client)",
2026-02-08 02:52:13.585302 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251130.1",
2026-02-08 02:52:13.585308 | orchestrator | " Enabled: true",
2026-02-08 02:52:13.585314 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251130.1",
2026-02-08 02:52:13.585319 | orchestrator | " Status: ✅ MATCH",
2026-02-08 02:52:13.585325 | orchestrator | "",
2026-02-08 02:52:13.585331 | orchestrator | "Checking service: ara-server (ARA Server)",
2026-02-08 02:52:13.585337 | orchestrator | " Expected: registry.osism.tech/osism/ara-server:1.7.3",
2026-02-08 02:52:13.585342 | orchestrator | " Enabled: true",
2026-02-08 02:52:13.585348 | orchestrator | " Running: registry.osism.tech/osism/ara-server:1.7.3",
2026-02-08 02:52:13.585354 | orchestrator | " Status: ✅ MATCH",
2026-02-08 02:52:13.585363 | orchestrator | "",
2026-02-08 02:52:13.585372 | orchestrator | "Checking service: 
mariadb (MariaDB for ARA)", 2026-02-08 02:52:13.585381 | orchestrator | " Expected: registry.osism.tech/dockerhub/library/mariadb:11.8.4", 2026-02-08 02:52:13.585391 | orchestrator | " Enabled: true", 2026-02-08 02:52:13.585400 | orchestrator | " Running: registry.osism.tech/dockerhub/library/mariadb:11.8.4", 2026-02-08 02:52:13.585409 | orchestrator | " Status: ✅ MATCH", 2026-02-08 02:52:13.585418 | orchestrator | "", 2026-02-08 02:52:13.585428 | orchestrator | "Checking service: frontend (OSISM Frontend)", 2026-02-08 02:52:13.585438 | orchestrator | " Expected: registry.osism.tech/osism/osism-frontend:0.20251130.1", 2026-02-08 02:52:13.585447 | orchestrator | " Enabled: true", 2026-02-08 02:52:13.585457 | orchestrator | " Running: registry.osism.tech/osism/osism-frontend:0.20251130.1", 2026-02-08 02:52:13.585467 | orchestrator | " Status: ✅ MATCH", 2026-02-08 02:52:13.585477 | orchestrator | "", 2026-02-08 02:52:13.585485 | orchestrator | "Checking service: redis (Redis Cache)", 2026-02-08 02:52:13.585492 | orchestrator | " Expected: registry.osism.tech/dockerhub/library/redis:7.4.7-alpine", 2026-02-08 02:52:13.585499 | orchestrator | " Enabled: true", 2026-02-08 02:52:13.585505 | orchestrator | " Running: registry.osism.tech/dockerhub/library/redis:7.4.7-alpine", 2026-02-08 02:52:13.585512 | orchestrator | " Status: ✅ MATCH", 2026-02-08 02:52:13.585519 | orchestrator | "", 2026-02-08 02:52:13.585529 | orchestrator | "Checking service: api (OSISM API Service)", 2026-02-08 02:52:13.585549 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251130.1", 2026-02-08 02:52:13.585560 | orchestrator | " Enabled: true", 2026-02-08 02:52:13.585572 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251130.1", 2026-02-08 02:52:13.585582 | orchestrator | " Status: ✅ MATCH", 2026-02-08 02:52:13.585592 | orchestrator | "", 2026-02-08 02:52:13.585603 | orchestrator | "Checking service: listener (OpenStack Event Listener)", 2026-02-08 02:52:13.585613 | 
orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251130.1", 2026-02-08 02:52:13.585622 | orchestrator | " Enabled: true", 2026-02-08 02:52:13.585631 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251130.1", 2026-02-08 02:52:13.585642 | orchestrator | " Status: ✅ MATCH", 2026-02-08 02:52:13.585661 | orchestrator | "", 2026-02-08 02:52:13.585668 | orchestrator | "Checking service: openstack (OpenStack Integration)", 2026-02-08 02:52:13.585675 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251130.1", 2026-02-08 02:52:13.585681 | orchestrator | " Enabled: true", 2026-02-08 02:52:13.585688 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251130.1", 2026-02-08 02:52:13.585695 | orchestrator | " Status: ✅ MATCH", 2026-02-08 02:52:13.585701 | orchestrator | "", 2026-02-08 02:52:13.585708 | orchestrator | "Checking service: beat (Celery Beat Scheduler)", 2026-02-08 02:52:13.585714 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251130.1", 2026-02-08 02:52:13.585721 | orchestrator | " Enabled: true", 2026-02-08 02:52:13.585728 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251130.1", 2026-02-08 02:52:13.585749 | orchestrator | " Status: ✅ MATCH", 2026-02-08 02:52:13.585756 | orchestrator | "", 2026-02-08 02:52:13.585763 | orchestrator | "Checking service: flower (Celery Flower Monitor)", 2026-02-08 02:52:13.585770 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251130.1", 2026-02-08 02:52:13.585784 | orchestrator | " Enabled: true", 2026-02-08 02:52:13.585791 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251130.1", 2026-02-08 02:52:13.585798 | orchestrator | " Status: ✅ MATCH", 2026-02-08 02:52:13.585805 | orchestrator | "", 2026-02-08 02:52:13.585811 | orchestrator | "=== Summary ===", 2026-02-08 02:52:13.585818 | orchestrator | "Errors (version mismatches): 0", 2026-02-08 02:52:13.585825 | orchestrator | "Warnings (expected containers not 
running): 0", 2026-02-08 02:52:13.585832 | orchestrator | "", 2026-02-08 02:52:13.585839 | orchestrator | "✅ All running containers match expected versions!" 2026-02-08 02:52:13.585846 | orchestrator | ] 2026-02-08 02:52:13.585853 | orchestrator | } 2026-02-08 02:52:13.585860 | orchestrator | 2026-02-08 02:52:13.585867 | orchestrator | TASK [osism.services.manager : Skip version check due to service configuration] *** 2026-02-08 02:52:13.641847 | orchestrator | skipping: [testbed-manager] 2026-02-08 02:52:13.641925 | orchestrator | 2026-02-08 02:52:13.641933 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-08 02:52:13.641938 | orchestrator | testbed-manager : ok=70 changed=37 unreachable=0 failed=0 skipped=12 rescued=0 ignored=0 2026-02-08 02:52:13.641943 | orchestrator | 2026-02-08 02:52:13.786102 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2026-02-08 02:52:13.786199 | orchestrator | + deactivate 2026-02-08 02:52:13.786216 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']' 2026-02-08 02:52:13.786270 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-02-08 02:52:13.786278 | orchestrator | + export PATH 2026-02-08 02:52:13.786285 | orchestrator | + unset _OLD_VIRTUAL_PATH 2026-02-08 02:52:13.786292 | orchestrator | + '[' -n '' ']' 2026-02-08 02:52:13.786303 | orchestrator | + hash -r 2026-02-08 02:52:13.786313 | orchestrator | + '[' -n '' ']' 2026-02-08 02:52:13.786323 | orchestrator | + unset VIRTUAL_ENV 2026-02-08 02:52:13.786333 | orchestrator | + unset VIRTUAL_ENV_PROMPT 2026-02-08 02:52:13.786342 | orchestrator | + '[' '!' 
'' = nondestructive ']' 2026-02-08 02:52:13.786353 | orchestrator | + unset -f deactivate 2026-02-08 02:52:13.786365 | orchestrator | + cp /home/dragon/.ssh/id_rsa.pub /opt/ansible/secrets/id_rsa.operator.pub 2026-02-08 02:52:13.793515 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2026-02-08 02:52:13.793588 | orchestrator | + wait_for_container_healthy 60 ceph-ansible 2026-02-08 02:52:13.793617 | orchestrator | + local max_attempts=60 2026-02-08 02:52:13.793624 | orchestrator | + local name=ceph-ansible 2026-02-08 02:52:13.793630 | orchestrator | + local attempt_num=1 2026-02-08 02:52:13.795050 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-02-08 02:52:13.828875 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-02-08 02:52:13.828971 | orchestrator | + wait_for_container_healthy 60 kolla-ansible 2026-02-08 02:52:13.828987 | orchestrator | + local max_attempts=60 2026-02-08 02:52:13.829001 | orchestrator | + local name=kolla-ansible 2026-02-08 02:52:13.829015 | orchestrator | + local attempt_num=1 2026-02-08 02:52:13.829028 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible 2026-02-08 02:52:13.865012 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-02-08 02:52:13.865132 | orchestrator | + wait_for_container_healthy 60 osism-ansible 2026-02-08 02:52:13.865145 | orchestrator | + local max_attempts=60 2026-02-08 02:52:13.865152 | orchestrator | + local name=osism-ansible 2026-02-08 02:52:13.865159 | orchestrator | + local attempt_num=1 2026-02-08 02:52:13.865748 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible 2026-02-08 02:52:13.895041 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-02-08 02:52:13.895141 | orchestrator | + [[ true == \t\r\u\e ]] 2026-02-08 02:52:13.895153 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh 2026-02-08 02:52:14.581679 | orchestrator | + docker compose 
--project-directory /opt/manager ps 2026-02-08 02:52:14.739142 | orchestrator | NAME IMAGE COMMAND SERVICE CREATED STATUS PORTS 2026-02-08 02:52:14.739244 | orchestrator | ceph-ansible registry.osism.tech/osism/ceph-ansible:0.20251130.0 "/entrypoint.sh osis…" ceph-ansible 2 minutes ago Up About a minute (healthy) 2026-02-08 02:52:14.739254 | orchestrator | kolla-ansible registry.osism.tech/osism/kolla-ansible:0.20251130.0 "/entrypoint.sh osis…" kolla-ansible 2 minutes ago Up About a minute (healthy) 2026-02-08 02:52:14.739259 | orchestrator | manager-api-1 registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" api 2 minutes ago Up 2 minutes (healthy) 192.168.16.5:8000->8000/tcp 2026-02-08 02:52:14.739266 | orchestrator | manager-ara-server-1 registry.osism.tech/osism/ara-server:1.7.3 "sh -c '/wait && /ru…" ara-server 2 minutes ago Up About a minute (healthy) 8000/tcp 2026-02-08 02:52:14.739286 | orchestrator | manager-beat-1 registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" beat 2 minutes ago Up 2 minutes (healthy) 2026-02-08 02:52:14.739290 | orchestrator | manager-flower-1 registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" flower 2 minutes ago Up 2 minutes (healthy) 2026-02-08 02:52:14.739294 | orchestrator | manager-inventory_reconciler-1 registry.osism.tech/osism/inventory-reconciler:0.20251130.0 "/sbin/tini -- /entr…" inventory_reconciler 2 minutes ago Up About a minute (healthy) 2026-02-08 02:52:14.739298 | orchestrator | manager-listener-1 registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" listener 2 minutes ago Up 2 minutes (healthy) 2026-02-08 02:52:14.739302 | orchestrator | manager-mariadb-1 registry.osism.tech/dockerhub/library/mariadb:11.8.4 "docker-entrypoint.s…" mariadb 2 minutes ago Up 2 minutes (healthy) 3306/tcp 2026-02-08 02:52:14.739306 | orchestrator | manager-openstack-1 registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" openstack 2 minutes ago Up 2 minutes 
(healthy) 2026-02-08 02:52:14.739310 | orchestrator | manager-redis-1 registry.osism.tech/dockerhub/library/redis:7.4.7-alpine "docker-entrypoint.s…" redis 2 minutes ago Up 2 minutes (healthy) 6379/tcp 2026-02-08 02:52:14.739314 | orchestrator | osism-ansible registry.osism.tech/osism/osism-ansible:0.20251130.0 "/entrypoint.sh osis…" osism-ansible 2 minutes ago Up About a minute (healthy) 2026-02-08 02:52:14.739346 | orchestrator | osism-frontend registry.osism.tech/osism/osism-frontend:0.20251130.1 "docker-entrypoint.s…" frontend 2 minutes ago Up 2 minutes 192.168.16.5:3000->3000/tcp 2026-02-08 02:52:14.739351 | orchestrator | osism-kubernetes registry.osism.tech/osism/osism-kubernetes:0.20251130.0 "/entrypoint.sh osis…" osism-kubernetes 2 minutes ago Up About a minute (healthy) 2026-02-08 02:52:14.739355 | orchestrator | osismclient registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- sleep…" osismclient 2 minutes ago Up 2 minutes (healthy) 2026-02-08 02:52:14.744488 | orchestrator | ++ semver 9.5.0 7.0.0 2026-02-08 02:52:14.782166 | orchestrator | + [[ 1 -ge 0 ]] 2026-02-08 02:52:14.782264 | orchestrator | + sed -i s/community.general.yaml/osism.commons.still_alive/ /opt/configuration/environments/ansible.cfg 2026-02-08 02:52:14.784883 | orchestrator | + osism apply resolvconf -l testbed-manager 2026-02-08 02:52:27.210568 | orchestrator | 2026-02-08 02:52:27 | INFO  | Task ba2f6636-937f-4898-9cb3-b0a00935d8f0 (resolvconf) was prepared for execution. 2026-02-08 02:52:27.210678 | orchestrator | 2026-02-08 02:52:27 | INFO  | It takes a moment until task ba2f6636-937f-4898-9cb3-b0a00935d8f0 (resolvconf) has been started and output is visible here. 
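The trace above shows `wait_for_container_healthy 60 <name>` polling `/usr/bin/docker inspect -f '{{.State.Health.Status}}'` for each container until it reports `healthy`. A minimal sketch of such a retry loop, under the assumption that the real helper works this way (its body is not shown in the log); the `docker inspect` call is stubbed behind a hypothetical `get_health` function:

```shell
#!/bin/sh
# Sketch of a health-wait retry loop like the wait_for_container_healthy
# helper traced above. get_health is a hypothetical stand-in for:
#   /usr/bin/docker inspect -f '{{.State.Health.Status}}' "$name"
wait_for_container_healthy() {
    max_attempts=$1
    name=$2
    attempt_num=1
    # Keep polling until the health probe reports "healthy".
    until [ "$(get_health "$name")" = "healthy" ]; do
        if [ "$attempt_num" -ge "$max_attempts" ]; then
            echo "container $name did not become healthy" >&2
            return 1
        fi
        attempt_num=$((attempt_num + 1))
        sleep 5
    done
}
```

With the stub returning `healthy` immediately, the loop body is never entered; a container that stays in `starting` exhausts the attempt budget and returns non-zero, which matches the 60-attempt bound seen in the trace.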
2026-02-08 02:52:42.669026 | orchestrator | 2026-02-08 02:52:42.669143 | orchestrator | PLAY [Apply role resolvconf] *************************************************** 2026-02-08 02:52:42.669161 | orchestrator | 2026-02-08 02:52:42.669173 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-02-08 02:52:42.669185 | orchestrator | Sunday 08 February 2026 02:52:31 +0000 (0:00:00.141) 0:00:00.142 ******* 2026-02-08 02:52:42.669196 | orchestrator | ok: [testbed-manager] 2026-02-08 02:52:42.669208 | orchestrator | 2026-02-08 02:52:42.669280 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] *** 2026-02-08 02:52:42.669293 | orchestrator | Sunday 08 February 2026 02:52:36 +0000 (0:00:04.812) 0:00:04.954 ******* 2026-02-08 02:52:42.669304 | orchestrator | skipping: [testbed-manager] 2026-02-08 02:52:42.669317 | orchestrator | 2026-02-08 02:52:42.669328 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] ********************* 2026-02-08 02:52:42.669339 | orchestrator | Sunday 08 February 2026 02:52:36 +0000 (0:00:00.073) 0:00:05.028 ******* 2026-02-08 02:52:42.669350 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager 2026-02-08 02:52:42.669363 | orchestrator | 2026-02-08 02:52:42.669374 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] *** 2026-02-08 02:52:42.669385 | orchestrator | Sunday 08 February 2026 02:52:36 +0000 (0:00:00.086) 0:00:05.114 ******* 2026-02-08 02:52:42.669417 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager 2026-02-08 02:52:42.669429 | orchestrator | 2026-02-08 02:52:42.669440 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring 
/etc/resolv.conf] *** 2026-02-08 02:52:42.669451 | orchestrator | Sunday 08 February 2026 02:52:36 +0000 (0:00:00.090) 0:00:05.205 ******* 2026-02-08 02:52:42.669462 | orchestrator | ok: [testbed-manager] 2026-02-08 02:52:42.669473 | orchestrator | 2026-02-08 02:52:42.669484 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] ************* 2026-02-08 02:52:42.669495 | orchestrator | Sunday 08 February 2026 02:52:37 +0000 (0:00:01.158) 0:00:06.363 ******* 2026-02-08 02:52:42.669506 | orchestrator | skipping: [testbed-manager] 2026-02-08 02:52:42.669517 | orchestrator | 2026-02-08 02:52:42.669528 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] ***** 2026-02-08 02:52:42.669597 | orchestrator | Sunday 08 February 2026 02:52:37 +0000 (0:00:00.070) 0:00:06.434 ******* 2026-02-08 02:52:42.669636 | orchestrator | ok: [testbed-manager] 2026-02-08 02:52:42.669649 | orchestrator | 2026-02-08 02:52:42.669663 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] ******* 2026-02-08 02:52:42.669676 | orchestrator | Sunday 08 February 2026 02:52:38 +0000 (0:00:00.527) 0:00:06.962 ******* 2026-02-08 02:52:42.669689 | orchestrator | skipping: [testbed-manager] 2026-02-08 02:52:42.669702 | orchestrator | 2026-02-08 02:52:42.669715 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] *** 2026-02-08 02:52:42.669728 | orchestrator | Sunday 08 February 2026 02:52:38 +0000 (0:00:00.082) 0:00:07.044 ******* 2026-02-08 02:52:42.669741 | orchestrator | changed: [testbed-manager] 2026-02-08 02:52:42.669754 | orchestrator | 2026-02-08 02:52:42.669767 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] ********************* 2026-02-08 02:52:42.669780 | orchestrator | Sunday 08 February 2026 02:52:39 +0000 (0:00:00.541) 0:00:07.586 ******* 2026-02-08 02:52:42.669792 | orchestrator | changed: 
[testbed-manager] 2026-02-08 02:52:42.669805 | orchestrator | 2026-02-08 02:52:42.669817 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ******** 2026-02-08 02:52:42.669830 | orchestrator | Sunday 08 February 2026 02:52:40 +0000 (0:00:01.105) 0:00:08.692 ******* 2026-02-08 02:52:42.669842 | orchestrator | ok: [testbed-manager] 2026-02-08 02:52:42.669854 | orchestrator | 2026-02-08 02:52:42.669864 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] *** 2026-02-08 02:52:42.669875 | orchestrator | Sunday 08 February 2026 02:52:41 +0000 (0:00:01.027) 0:00:09.719 ******* 2026-02-08 02:52:42.669887 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager 2026-02-08 02:52:42.669898 | orchestrator | 2026-02-08 02:52:42.669909 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] ************* 2026-02-08 02:52:42.669920 | orchestrator | Sunday 08 February 2026 02:52:41 +0000 (0:00:00.085) 0:00:09.805 ******* 2026-02-08 02:52:42.669931 | orchestrator | changed: [testbed-manager] 2026-02-08 02:52:42.669942 | orchestrator | 2026-02-08 02:52:42.669952 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-08 02:52:42.669964 | orchestrator | testbed-manager : ok=10  changed=3  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-02-08 02:52:42.669975 | orchestrator | 2026-02-08 02:52:42.669986 | orchestrator | 2026-02-08 02:52:42.669997 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-08 02:52:42.670008 | orchestrator | Sunday 08 February 2026 02:52:42 +0000 (0:00:01.195) 0:00:11.001 ******* 2026-02-08 02:52:42.670076 | orchestrator | =============================================================================== 2026-02-08 02:52:42.670089 | 
orchestrator | Gathering Facts --------------------------------------------------------- 4.81s 2026-02-08 02:52:42.670099 | orchestrator | osism.commons.resolvconf : Restart systemd-resolved service ------------- 1.20s 2026-02-08 02:52:42.670110 | orchestrator | osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf --- 1.16s 2026-02-08 02:52:42.670121 | orchestrator | osism.commons.resolvconf : Copy configuration files --------------------- 1.11s 2026-02-08 02:52:42.670132 | orchestrator | osism.commons.resolvconf : Start/enable systemd-resolved service -------- 1.03s 2026-02-08 02:52:42.670143 | orchestrator | osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf --- 0.54s 2026-02-08 02:52:42.670174 | orchestrator | osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf ----- 0.53s 2026-02-08 02:52:42.670186 | orchestrator | osism.commons.resolvconf : Include distribution specific installation tasks --- 0.09s 2026-02-08 02:52:42.670197 | orchestrator | osism.commons.resolvconf : Include resolvconf tasks --------------------- 0.09s 2026-02-08 02:52:42.670208 | orchestrator | osism.commons.resolvconf : Include distribution specific configuration tasks --- 0.09s 2026-02-08 02:52:42.670240 | orchestrator | osism.commons.resolvconf : Archive existing file /etc/resolv.conf ------- 0.08s 2026-02-08 02:52:42.670252 | orchestrator | osism.commons.resolvconf : Check minimum and maximum number of name servers --- 0.07s 2026-02-08 02:52:42.670273 | orchestrator | osism.commons.resolvconf : Install package systemd-resolved ------------- 0.07s 2026-02-08 02:52:42.972437 | orchestrator | + osism apply sshconfig 2026-02-08 02:52:55.081023 | orchestrator | 2026-02-08 02:52:55 | INFO  | Task f4e8578c-cf4d-4ef2-97a5-5074eb39498d (sshconfig) was prepared for execution. 
2026-02-08 02:52:55.081135 | orchestrator | 2026-02-08 02:52:55 | INFO  | It takes a moment until task f4e8578c-cf4d-4ef2-97a5-5074eb39498d (sshconfig) has been started and output is visible here. 2026-02-08 02:53:07.383956 | orchestrator | 2026-02-08 02:53:07.384036 | orchestrator | PLAY [Apply role sshconfig] **************************************************** 2026-02-08 02:53:07.384044 | orchestrator | 2026-02-08 02:53:07.384064 | orchestrator | TASK [osism.commons.sshconfig : Get home directory of operator user] *********** 2026-02-08 02:53:07.384069 | orchestrator | Sunday 08 February 2026 02:52:59 +0000 (0:00:00.168) 0:00:00.168 ******* 2026-02-08 02:53:07.384074 | orchestrator | ok: [testbed-manager] 2026-02-08 02:53:07.384080 | orchestrator | 2026-02-08 02:53:07.384085 | orchestrator | TASK [osism.commons.sshconfig : Ensure .ssh/config.d exist] ******************** 2026-02-08 02:53:07.384090 | orchestrator | Sunday 08 February 2026 02:53:00 +0000 (0:00:00.597) 0:00:00.766 ******* 2026-02-08 02:53:07.384095 | orchestrator | changed: [testbed-manager] 2026-02-08 02:53:07.384101 | orchestrator | 2026-02-08 02:53:07.384105 | orchestrator | TASK [osism.commons.sshconfig : Ensure config for each host exist] ************* 2026-02-08 02:53:07.384110 | orchestrator | Sunday 08 February 2026 02:53:00 +0000 (0:00:00.570) 0:00:01.336 ******* 2026-02-08 02:53:07.384114 | orchestrator | changed: [testbed-manager] => (item=testbed-manager) 2026-02-08 02:53:07.384119 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0) 2026-02-08 02:53:07.384124 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1) 2026-02-08 02:53:07.384128 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2) 2026-02-08 02:53:07.384132 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3) 2026-02-08 02:53:07.384137 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4) 2026-02-08 02:53:07.384141 | orchestrator | changed: 
[testbed-manager] => (item=testbed-node-5) 2026-02-08 02:53:07.384146 | orchestrator | 2026-02-08 02:53:07.384153 | orchestrator | TASK [osism.commons.sshconfig : Add extra config] ****************************** 2026-02-08 02:53:07.384160 | orchestrator | Sunday 08 February 2026 02:53:06 +0000 (0:00:05.846) 0:00:07.182 ******* 2026-02-08 02:53:07.384167 | orchestrator | skipping: [testbed-manager] 2026-02-08 02:53:07.384174 | orchestrator | 2026-02-08 02:53:07.384182 | orchestrator | TASK [osism.commons.sshconfig : Assemble ssh config] *************************** 2026-02-08 02:53:07.384189 | orchestrator | Sunday 08 February 2026 02:53:06 +0000 (0:00:00.080) 0:00:07.263 ******* 2026-02-08 02:53:07.384196 | orchestrator | changed: [testbed-manager] 2026-02-08 02:53:07.384203 | orchestrator | 2026-02-08 02:53:07.384210 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-08 02:53:07.384289 | orchestrator | testbed-manager : ok=4  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-08 02:53:07.384297 | orchestrator | 2026-02-08 02:53:07.384305 | orchestrator | 2026-02-08 02:53:07.384312 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-08 02:53:07.384319 | orchestrator | Sunday 08 February 2026 02:53:07 +0000 (0:00:00.554) 0:00:07.818 ******* 2026-02-08 02:53:07.384326 | orchestrator | =============================================================================== 2026-02-08 02:53:07.384332 | orchestrator | osism.commons.sshconfig : Ensure config for each host exist ------------- 5.85s 2026-02-08 02:53:07.384339 | orchestrator | osism.commons.sshconfig : Get home directory of operator user ----------- 0.60s 2026-02-08 02:53:07.384346 | orchestrator | osism.commons.sshconfig : Ensure .ssh/config.d exist -------------------- 0.57s 2026-02-08 02:53:07.384353 | orchestrator | osism.commons.sshconfig : Assemble ssh config 
--------------------------- 0.56s 2026-02-08 02:53:07.384361 | orchestrator | osism.commons.sshconfig : Add extra config ------------------------------ 0.08s 2026-02-08 02:53:07.706603 | orchestrator | + osism apply known-hosts 2026-02-08 02:53:19.802302 | orchestrator | 2026-02-08 02:53:19 | INFO  | Task 678d9bd1-a314-4f32-b270-036980425606 (known-hosts) was prepared for execution. 2026-02-08 02:53:19.802440 | orchestrator | 2026-02-08 02:53:19 | INFO  | It takes a moment until task 678d9bd1-a314-4f32-b270-036980425606 (known-hosts) has been started and output is visible here. 2026-02-08 02:53:36.595978 | orchestrator | 2026-02-08 02:53:36.596094 | orchestrator | PLAY [Apply role known_hosts] ************************************************** 2026-02-08 02:53:36.596111 | orchestrator | 2026-02-08 02:53:36.596123 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname] *** 2026-02-08 02:53:36.596135 | orchestrator | Sunday 08 February 2026 02:53:24 +0000 (0:00:00.173) 0:00:00.173 ******* 2026-02-08 02:53:36.596146 | orchestrator | ok: [testbed-manager] => (item=testbed-manager) 2026-02-08 02:53:36.596158 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3) 2026-02-08 02:53:36.596169 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4) 2026-02-08 02:53:36.596181 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5) 2026-02-08 02:53:36.596192 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0) 2026-02-08 02:53:36.596203 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1) 2026-02-08 02:53:36.596246 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2) 2026-02-08 02:53:36.596257 | orchestrator | 2026-02-08 02:53:36.596268 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname] *** 2026-02-08 02:53:36.596280 | orchestrator | Sunday 08 February 2026 02:53:29 +0000 (0:00:05.735) 0:00:05.908 ******* 2026-02-08 
02:53:36.596293 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager) 2026-02-08 02:53:36.596306 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3) 2026-02-08 02:53:36.596318 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4) 2026-02-08 02:53:36.596329 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5) 2026-02-08 02:53:36.596340 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0) 2026-02-08 02:53:36.596363 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1) 2026-02-08 02:53:36.596374 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2) 2026-02-08 02:53:36.596385 | orchestrator | 2026-02-08 02:53:36.596397 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-02-08 02:53:36.596408 | orchestrator | Sunday 08 February 2026 02:53:29 +0000 (0:00:00.155) 0:00:06.063 ******* 2026-02-08 02:53:36.596419 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ecdsa-sha2-nistp256 
AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBCXi4EkwpbHPHB/ktIn1EY6wzEKdNIJbyhRpw6TML6qN1dimxfKrJP2Hl/tqdJwf+weh46SZtSOtuRBmD7GvJoo=) 2026-02-08 02:53:36.596441 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC6En1hOQoTOsy8cxQJe5nEWYXG1+U/jGAJFyV5sv1GUczFt2LnqdAtDoYh0NXtETHlJ+JPRF/QqQFjA20igrNAJ1eb4WWKSbEKVx9q9diGpNcUuno6JOxvf0Ac2/uclZS4KL6zudANCZQ3rbIZ8nHW74FsRyQetODLz3ca8yVHNXzjVtf9k6Q2YCv0OBIN6T2R35t9yTazPy7MtaEgRiLLfVNOfBeswYFLr4C2jcheenfJFP7PutC/cch74gY5wcmYMlVGfwSpUBtx8/4vuaf5Y837mXrT1f2m52lcDfKa6lPiSAptGHhwFKzP301Q5JMq524SfUnaernUzQJkNvmrRLgAn+ILB0TMKnMDiuH16V6En7QQ/McH5+yTzPEA2IIJtqZ/DdMB/unUyIElXYbnrG4Qs4Y0BYET6Q38UgVBsr1d6DBurv0LgbKePsbeDyCjbv/LiG5JDhYVdPhvxeJqb3CBlGTPUv8Ec4ockEXHk+1m6QVQ1xZ0U6r65nVaTJk=) 2026-02-08 02:53:36.596477 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIIjhJSMaYAzGdhq3VQNnJ8odvXAbMC8xiMZAIKey1RsV) 2026-02-08 02:53:36.596490 | orchestrator | 2026-02-08 02:53:36.596503 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-02-08 02:53:36.596516 | orchestrator | Sunday 08 February 2026 02:53:31 +0000 (0:00:01.217) 0:00:07.281 ******* 2026-02-08 02:53:36.596548 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC4WH5CLWJPCvkwzAAA2wbqAFvu0+bvpkeV5xwDYwio47eIczsGT0ohLjkC/YD9aPxq8w8FGM1NHMPc2wSi4bUChI768NVJ1HGMc8Brzi229QIAteU1G/1HSzgBRqfEdIJN/qKgnk31kXLPaN/xVQKiHiIdXf38PDpepBLTibXALCVuo2dX0XcU/+QvYOps7jKeaKiY0BsZ999dRvwHLok0ip9kNd2BM0NxGcr1PvTVkDKsWd8UtWK7WskGnHVFVexym2zUrnEdQ5kwIueklVr9KqXjq5M60TPsUOHqoLb1mnW0IuAvdlsfCwOMBwng1xQGK92pKkTpGYnbzcoDmcHXwK15F00yzYMSp+UiWD0/8p+0Ky2SrEED0JUlA/VZqBvBLUVcOfzBfungp9r+Hk+XzehdWor6YUh1TtvC2qHATado6ALeyDb9Dmpi1t8GN71kJDmx0zmO7M+iPqTHoFp8kTxs526rjYQ+odLRY7D3tMY0Fn9z/R2WQQtzu3uI33U=) 2026-02-08 02:53:36.596562 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 
ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIL/FFFhxvM6G9GZh0ihDbIkvNweKTdbO0h1WocSZQRle) 2026-02-08 02:53:36.596575 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBOviXQzvzySY36SYZQ4xp9B661Zb4vH9Uz+WlHtcmovie2dXONfaAuV6ZjxD3ixUu/UhbCu6BYIL66ZxDjp61lw=) 2026-02-08 02:53:36.596588 | orchestrator | 2026-02-08 02:53:36.596601 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-02-08 02:53:36.596614 | orchestrator | Sunday 08 February 2026 02:53:32 +0000 (0:00:01.065) 0:00:08.347 ******* 2026-02-08 02:53:36.596626 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICa7AEOttHBXIzWzDpuDUgFHDaf+q5tccqq+0vSPbjfp) 2026-02-08 02:53:36.596639 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDEHbIABmVeBMFJLxWmkpaegCjTK5nqJrdkD67VUIkJBG4iGvCf2nvD6MK1VU7SW/l1vok7c9Aa+3KdAtKHiZ0RRap5W6kF9osNTLW8EJEkA0RRd574VPCTwXT+CH2VKD524Sfumb02Wbr6J6nU6jT8i7q5z5+CEHcDBqmWjIFgCKBoNPEvJ/3qLCkApoSFr5rp6m7oFbJ43jg6FZvgAYa229az2Alsbx48ZQwXOkreBr8TpQ3XqZ6oQnjmYJx7eyKKmpNhkP2QWbyC0CqmTQBCyfhtZdlWoVPz/F4I1aDqqt0xU+jBWJmXpHXf0duG3Ey2jp80471sYp0l/QdorPEBsymP1zBhy1J27A/UpRTMmSfRg+6bcubWeVGD1ECABjRIfw3yvikJ9yaSJ0hsRytQYJk7fqjx+ocnpfmf8S6kDZ6mfVZUG2ptr8Z53CPKY0xkn8iuacHBwB1TyRPW4gCCfDYEsOsVCzbzfe92k0gEPW+lxWFAHmbQdYZjCwqtfK8=) 2026-02-08 02:53:36.596653 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBCne/17zIFZWuiRwNJf7u/hTEZnr2KZnx0jYWJkcPh3nsnMR124jnhwz8FNeA1KD0+S69NcF0kOSh0yBPyzdS0c=) 2026-02-08 02:53:36.596666 | orchestrator | 2026-02-08 02:53:36.596678 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-02-08 02:53:36.596691 | orchestrator | Sunday 08 February 2026 02:53:33 +0000 (0:00:01.092) 0:00:09.439 ******* 
2026-02-08 02:53:36.596704 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCuZ7QNXA2kH+0u1fb1C7Vlnxpiz3LR8Y7OlXrp6nvbpfyiCgfV50XZMQ6FmEMSrWoBORzO+RBRhqjCrCpHWbX0R+GSt80GspodmrQib5ZAvzeYMLWA8pWD/CWF+T8i4zK39hGR3x9/Ysm3NRfwW2uHVkXH+UY7cDNl+2W5a/+2Zr4PNLkuYEt18hdkA3i5Ww6JRwHB/mxmJZ9TtZD5NpoL792or8qCmIVPNsJfQTOI0GSJetgk2q/SpZA1ooo8djVDMNn2jVYX2m0WDIYDEpOGFw8N6HtL/ugxmmI4ZUExUMV2+5PFAG04z26VyswMUa3PZVkWVsJlycdHgKqWOkue1MblrsRI4iEia6qSGfkPSJFf+o0F0vynHdYUpdj1vWNWGBP6FaV02hM/xd51eJ6BECvOaARv+hUVkd7jBO2lmHxVhkYKMVbz0smOtvG7nDTYGxlcEvoFECsS/inSOZj3htw0jTSOvhRvQ034ZJk4UAuc6qY/u7y2SIlSLmOgtpE=) 2026-02-08 02:53:36.596725 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBHS/IYyKUIcwmDs+phsgLo2A3aPPNaQvAEANxhBO+gvNa+XimPJJHe/J7A4nWKmZQGWfsXgrWQrES2p4K2bPMy8=) 2026-02-08 02:53:36.596739 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIM2eoowLOaOxFmYPyZDVAXkZzilxRoG7BVWjmEdonYGW) 2026-02-08 02:53:36.596752 | orchestrator | 2026-02-08 02:53:36.596764 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-02-08 02:53:36.596777 | orchestrator | Sunday 08 February 2026 02:53:34 +0000 (0:00:01.092) 0:00:10.532 ******* 2026-02-08 02:53:36.596869 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQCzpXC60N2BUvwX08c0/MdLJrwjUJ/TVyoQZs2mzdtEEQ67bnKz+JAUcB2az+E9nCWBy8a/N1wNjvSxgbbLyvoaSch3VPBgOEVE9fENzwMTnb2aPB5N4WVVxcqUd+66WsNXov+7ZExJv4ND6IZmvVOxrOQnhHXtv/aUAgOx4LNom0asKqSNlYXf+IweX8XX3tt/ZP9I00q10ju9QDi9EXFr23mhZptfQMoDgiH+I6ZpsJLmVk++k+l50N45BMg2RLHtJbZr6CNYe8KaBshtvBbS3D/01RR/Fgz1OCcBm21QkutDN/WC2o8IbvotVspIyu86/fuNBCjTB82PxGFv6zDAKViFFBBNi8GKwDDxIbmjGDjFZCVXenqucM6UbIhCmUUV3CM1OwTE/1dVGTNbrx2XP5U7pADap3+G94B3G1fK5dHkJm2oC7CMnMsZhnTwbpLr93sMFdsYQIC3pH+pG/XS9TxWNlLkZpLj59GxIGUjePqPHBB6tnffl3KVciJn0p0=) 2026-02-08 02:53:36.596883 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIJPgDEGuGp8Oe7o+Tj7tw7qq9L8aHVcoDk/h9wfBhF0H) 2026-02-08 02:53:36.596896 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBOND0246yIUMsb4iFLww3L1KMcHJag1pggADtYcytfaan2lxSrz/2w1UCFpx2vvs4CoHNgp6FrrJXXH5nD60R7M=) 2026-02-08 02:53:36.596907 | orchestrator | 2026-02-08 02:53:36.596918 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-02-08 02:53:36.596929 | orchestrator | Sunday 08 February 2026 02:53:35 +0000 (0:00:01.110) 0:00:11.643 ******* 2026-02-08 02:53:36.596950 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDA1TaP4bduqNxzhiPVx8nWllvnQNwkCIo46z+SYXZFXZGzfm4KVZeOhBpXvdEbXPKkol2+FHvT3Wc847dWaNwLOMwRnmA+74/ly4qZ7xYuRHZ+ga2/wzKfYJ7joUEr75SwbWKVVdulnD6un+qodBJSW/pWtPRjFABdKLV2yvoCs650Mz3IQNf8ceijGkWAKHxRRwumO8xFeCHcsYKLBwjwOkIZrYSuTFScbwCFUqJwmHuYrDQ9YO89SfplFCk17vXdqeD6TtWFxI9/4qrbzu2cnPOC91CqQeQuHZOZnHnCRf5h+HM8QCDic1lNVluc3tvmgr4EMn81XyOH0vmFu4saIkY1UUOMbegwZQEIS/Zkv2/4eASbVEcFo503aBScypv2oYNi/9DtTHHp2lay8T16G8vt5w1mYzAUQbatOLc0hYShylbPQilA4p58Oshd6XABEGn8KtKH+zHhkeVnGWPeKw2PVqSiQOFRQsmR9c0Nx6Q8JOipDXc4Kqpa/p6X18M=) 2026-02-08 02:53:47.474488 | orchestrator | changed: [testbed-manager] => 
(item=testbed-node-1 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBNyF4VRR+PAZ2gIMzK3QHnFkkj76pXpqlOBmSdXm0QKZQ0x82sE7ynbBAaH+G3BEd/Gz0RG1Yo+AaYysgwF3ODo=) 2026-02-08 02:53:47.474618 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIM08brZJG6F7CM0vwCXBWe8vzO1l5CaCopaDsZOUp7iv) 2026-02-08 02:53:47.474637 | orchestrator | 2026-02-08 02:53:47.474651 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-02-08 02:53:47.474664 | orchestrator | Sunday 08 February 2026 02:53:36 +0000 (0:00:01.043) 0:00:12.687 ******* 2026-02-08 02:53:47.474677 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC3k4GPlTPbGlENZqggk6xlyv0vyxAWhJlEr0AR4GMeoCn12erVwEiooELYMO6uNbkJ66BAdqcfCY7zT1LS3mAUwJvm4wor5/uIvqyXZIeB89f54p+yeGFvHqT82ed3QnxfKpsbOItfNJNh3+oTKlDfjNkp3S0zVWVG8qry9wC2eGDwhVjMMhNjapMvn19aIsCbnN/umdqdajeTwKe6cOA6rit9yBx9MCuiDpGwqZ5N6vs+TT85R+3tM6fs0Wr3iiSAvrv7YjkDBqAgt2Y2Yrto7OpUex7fn8uKeEpmAaQUIuWcQRi4o4RwihIJU9KP1006vEcaa85nSxDSGG4Ay5Fhg9hA5rqAPsLr//nyPTAR6eU3dAJhrD1wd7dap/x/h3Y1cEKi+wjfsJn+FhadeTkcxK/u8OgU6gJpc46mTbX0IfaPiGrmQDnlqGyvf8rPImTOVCH3tjaLUeZrSU9fR2g3q0Pck2kRHUh7oNP/BUQF7MRAAtcPbnc+/YsFmEs7stM=) 2026-02-08 02:53:47.474691 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPtRhQ4dBYphujmd17WIYNsKHsZ0ibR0EAJUl6eaiimfkxfqX5CdncO2d55BTVtmKsXqg1z3aqrPTB42Xjkcnpg=) 2026-02-08 02:53:47.474728 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAINoz6GVn3kmPtSLijxKUbWaxf04ABoL3U7iHH5YOzX2p) 2026-02-08 02:53:47.474739 | orchestrator | 2026-02-08 02:53:47.474750 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host] *** 2026-02-08 02:53:47.474762 | orchestrator | Sunday 08 February 2026 02:53:37 +0000 
(0:00:01.108) 0:00:13.796 ******* 2026-02-08 02:53:47.474773 | orchestrator | ok: [testbed-manager] => (item=testbed-manager) 2026-02-08 02:53:47.474783 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3) 2026-02-08 02:53:47.474794 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4) 2026-02-08 02:53:47.474804 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5) 2026-02-08 02:53:47.474814 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0) 2026-02-08 02:53:47.474824 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1) 2026-02-08 02:53:47.474834 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2) 2026-02-08 02:53:47.474845 | orchestrator | 2026-02-08 02:53:47.474855 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host] *** 2026-02-08 02:53:47.474868 | orchestrator | Sunday 08 February 2026 02:53:42 +0000 (0:00:05.193) 0:00:18.989 ******* 2026-02-08 02:53:47.474879 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager) 2026-02-08 02:53:47.474892 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3) 2026-02-08 02:53:47.474903 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4) 2026-02-08 02:53:47.474913 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5) 2026-02-08 02:53:47.474924 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0) 2026-02-08 02:53:47.474935 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1) 2026-02-08 02:53:47.474947 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2) 2026-02-08 02:53:47.474958 | orchestrator | 2026-02-08 02:53:47.474969 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-02-08 02:53:47.474979 | orchestrator | Sunday 08 February 2026 02:53:43 +0000 (0:00:00.186) 0:00:19.176 ******* 2026-02-08 02:53:47.475015 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC6En1hOQoTOsy8cxQJe5nEWYXG1+U/jGAJFyV5sv1GUczFt2LnqdAtDoYh0NXtETHlJ+JPRF/QqQFjA20igrNAJ1eb4WWKSbEKVx9q9diGpNcUuno6JOxvf0Ac2/uclZS4KL6zudANCZQ3rbIZ8nHW74FsRyQetODLz3ca8yVHNXzjVtf9k6Q2YCv0OBIN6T2R35t9yTazPy7MtaEgRiLLfVNOfBeswYFLr4C2jcheenfJFP7PutC/cch74gY5wcmYMlVGfwSpUBtx8/4vuaf5Y837mXrT1f2m52lcDfKa6lPiSAptGHhwFKzP301Q5JMq524SfUnaernUzQJkNvmrRLgAn+ILB0TMKnMDiuH16V6En7QQ/McH5+yTzPEA2IIJtqZ/DdMB/unUyIElXYbnrG4Qs4Y0BYET6Q38UgVBsr1d6DBurv0LgbKePsbeDyCjbv/LiG5JDhYVdPhvxeJqb3CBlGTPUv8Ec4ockEXHk+1m6QVQ1xZ0U6r65nVaTJk=) 2026-02-08 02:53:47.475028 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBCXi4EkwpbHPHB/ktIn1EY6wzEKdNIJbyhRpw6TML6qN1dimxfKrJP2Hl/tqdJwf+weh46SZtSOtuRBmD7GvJoo=) 2026-02-08 02:53:47.475058 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIIjhJSMaYAzGdhq3VQNnJ8odvXAbMC8xiMZAIKey1RsV) 2026-02-08 
02:53:47.475070 | orchestrator | 2026-02-08 02:53:47.475081 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-02-08 02:53:47.475093 | orchestrator | Sunday 08 February 2026 02:53:44 +0000 (0:00:01.132) 0:00:20.308 ******* 2026-02-08 02:53:47.475104 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIL/FFFhxvM6G9GZh0ihDbIkvNweKTdbO0h1WocSZQRle) 2026-02-08 02:53:47.475116 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC4WH5CLWJPCvkwzAAA2wbqAFvu0+bvpkeV5xwDYwio47eIczsGT0ohLjkC/YD9aPxq8w8FGM1NHMPc2wSi4bUChI768NVJ1HGMc8Brzi229QIAteU1G/1HSzgBRqfEdIJN/qKgnk31kXLPaN/xVQKiHiIdXf38PDpepBLTibXALCVuo2dX0XcU/+QvYOps7jKeaKiY0BsZ999dRvwHLok0ip9kNd2BM0NxGcr1PvTVkDKsWd8UtWK7WskGnHVFVexym2zUrnEdQ5kwIueklVr9KqXjq5M60TPsUOHqoLb1mnW0IuAvdlsfCwOMBwng1xQGK92pKkTpGYnbzcoDmcHXwK15F00yzYMSp+UiWD0/8p+0Ky2SrEED0JUlA/VZqBvBLUVcOfzBfungp9r+Hk+XzehdWor6YUh1TtvC2qHATado6ALeyDb9Dmpi1t8GN71kJDmx0zmO7M+iPqTHoFp8kTxs526rjYQ+odLRY7D3tMY0Fn9z/R2WQQtzu3uI33U=) 2026-02-08 02:53:47.475127 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBOviXQzvzySY36SYZQ4xp9B661Zb4vH9Uz+WlHtcmovie2dXONfaAuV6ZjxD3ixUu/UhbCu6BYIL66ZxDjp61lw=) 2026-02-08 02:53:47.475138 | orchestrator | 2026-02-08 02:53:47.475149 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-02-08 02:53:47.475161 | orchestrator | Sunday 08 February 2026 02:53:45 +0000 (0:00:01.099) 0:00:21.408 ******* 2026-02-08 02:53:47.475172 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICa7AEOttHBXIzWzDpuDUgFHDaf+q5tccqq+0vSPbjfp) 2026-02-08 02:53:47.475182 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQDEHbIABmVeBMFJLxWmkpaegCjTK5nqJrdkD67VUIkJBG4iGvCf2nvD6MK1VU7SW/l1vok7c9Aa+3KdAtKHiZ0RRap5W6kF9osNTLW8EJEkA0RRd574VPCTwXT+CH2VKD524Sfumb02Wbr6J6nU6jT8i7q5z5+CEHcDBqmWjIFgCKBoNPEvJ/3qLCkApoSFr5rp6m7oFbJ43jg6FZvgAYa229az2Alsbx48ZQwXOkreBr8TpQ3XqZ6oQnjmYJx7eyKKmpNhkP2QWbyC0CqmTQBCyfhtZdlWoVPz/F4I1aDqqt0xU+jBWJmXpHXf0duG3Ey2jp80471sYp0l/QdorPEBsymP1zBhy1J27A/UpRTMmSfRg+6bcubWeVGD1ECABjRIfw3yvikJ9yaSJ0hsRytQYJk7fqjx+ocnpfmf8S6kDZ6mfVZUG2ptr8Z53CPKY0xkn8iuacHBwB1TyRPW4gCCfDYEsOsVCzbzfe92k0gEPW+lxWFAHmbQdYZjCwqtfK8=) 2026-02-08 02:53:47.475193 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBCne/17zIFZWuiRwNJf7u/hTEZnr2KZnx0jYWJkcPh3nsnMR124jnhwz8FNeA1KD0+S69NcF0kOSh0yBPyzdS0c=) 2026-02-08 02:53:47.475225 | orchestrator | 2026-02-08 02:53:47.475238 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-02-08 02:53:47.475248 | orchestrator | Sunday 08 February 2026 02:53:46 +0000 (0:00:01.085) 0:00:22.494 ******* 2026-02-08 02:53:47.475259 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIM2eoowLOaOxFmYPyZDVAXkZzilxRoG7BVWjmEdonYGW) 2026-02-08 02:53:47.475270 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCuZ7QNXA2kH+0u1fb1C7Vlnxpiz3LR8Y7OlXrp6nvbpfyiCgfV50XZMQ6FmEMSrWoBORzO+RBRhqjCrCpHWbX0R+GSt80GspodmrQib5ZAvzeYMLWA8pWD/CWF+T8i4zK39hGR3x9/Ysm3NRfwW2uHVkXH+UY7cDNl+2W5a/+2Zr4PNLkuYEt18hdkA3i5Ww6JRwHB/mxmJZ9TtZD5NpoL792or8qCmIVPNsJfQTOI0GSJetgk2q/SpZA1ooo8djVDMNn2jVYX2m0WDIYDEpOGFw8N6HtL/ugxmmI4ZUExUMV2+5PFAG04z26VyswMUa3PZVkWVsJlycdHgKqWOkue1MblrsRI4iEia6qSGfkPSJFf+o0F0vynHdYUpdj1vWNWGBP6FaV02hM/xd51eJ6BECvOaARv+hUVkd7jBO2lmHxVhkYKMVbz0smOtvG7nDTYGxlcEvoFECsS/inSOZj3htw0jTSOvhRvQ034ZJk4UAuc6qY/u7y2SIlSLmOgtpE=) 2026-02-08 02:53:47.475291 | orchestrator | changed: [testbed-manager] => 
(item=192.168.16.15 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBHS/IYyKUIcwmDs+phsgLo2A3aPPNaQvAEANxhBO+gvNa+XimPJJHe/J7A4nWKmZQGWfsXgrWQrES2p4K2bPMy8=) 2026-02-08 02:53:52.076402 | orchestrator | 2026-02-08 02:53:52.076490 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-02-08 02:53:52.076502 | orchestrator | Sunday 08 February 2026 02:53:47 +0000 (0:00:01.070) 0:00:23.564 ******* 2026-02-08 02:53:52.076510 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBOND0246yIUMsb4iFLww3L1KMcHJag1pggADtYcytfaan2lxSrz/2w1UCFpx2vvs4CoHNgp6FrrJXXH5nD60R7M=) 2026-02-08 02:53:52.076522 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCzpXC60N2BUvwX08c0/MdLJrwjUJ/TVyoQZs2mzdtEEQ67bnKz+JAUcB2az+E9nCWBy8a/N1wNjvSxgbbLyvoaSch3VPBgOEVE9fENzwMTnb2aPB5N4WVVxcqUd+66WsNXov+7ZExJv4ND6IZmvVOxrOQnhHXtv/aUAgOx4LNom0asKqSNlYXf+IweX8XX3tt/ZP9I00q10ju9QDi9EXFr23mhZptfQMoDgiH+I6ZpsJLmVk++k+l50N45BMg2RLHtJbZr6CNYe8KaBshtvBbS3D/01RR/Fgz1OCcBm21QkutDN/WC2o8IbvotVspIyu86/fuNBCjTB82PxGFv6zDAKViFFBBNi8GKwDDxIbmjGDjFZCVXenqucM6UbIhCmUUV3CM1OwTE/1dVGTNbrx2XP5U7pADap3+G94B3G1fK5dHkJm2oC7CMnMsZhnTwbpLr93sMFdsYQIC3pH+pG/XS9TxWNlLkZpLj59GxIGUjePqPHBB6tnffl3KVciJn0p0=) 2026-02-08 02:53:52.076532 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIJPgDEGuGp8Oe7o+Tj7tw7qq9L8aHVcoDk/h9wfBhF0H) 2026-02-08 02:53:52.076541 | orchestrator | 2026-02-08 02:53:52.076548 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-02-08 02:53:52.076556 | orchestrator | Sunday 08 February 2026 02:53:48 +0000 (0:00:01.135) 0:00:24.700 ******* 2026-02-08 02:53:52.076563 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQDA1TaP4bduqNxzhiPVx8nWllvnQNwkCIo46z+SYXZFXZGzfm4KVZeOhBpXvdEbXPKkol2+FHvT3Wc847dWaNwLOMwRnmA+74/ly4qZ7xYuRHZ+ga2/wzKfYJ7joUEr75SwbWKVVdulnD6un+qodBJSW/pWtPRjFABdKLV2yvoCs650Mz3IQNf8ceijGkWAKHxRRwumO8xFeCHcsYKLBwjwOkIZrYSuTFScbwCFUqJwmHuYrDQ9YO89SfplFCk17vXdqeD6TtWFxI9/4qrbzu2cnPOC91CqQeQuHZOZnHnCRf5h+HM8QCDic1lNVluc3tvmgr4EMn81XyOH0vmFu4saIkY1UUOMbegwZQEIS/Zkv2/4eASbVEcFo503aBScypv2oYNi/9DtTHHp2lay8T16G8vt5w1mYzAUQbatOLc0hYShylbPQilA4p58Oshd6XABEGn8KtKH+zHhkeVnGWPeKw2PVqSiQOFRQsmR9c0Nx6Q8JOipDXc4Kqpa/p6X18M=) 2026-02-08 02:53:52.076571 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBNyF4VRR+PAZ2gIMzK3QHnFkkj76pXpqlOBmSdXm0QKZQ0x82sE7ynbBAaH+G3BEd/Gz0RG1Yo+AaYysgwF3ODo=) 2026-02-08 02:53:52.076579 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIM08brZJG6F7CM0vwCXBWe8vzO1l5CaCopaDsZOUp7iv) 2026-02-08 02:53:52.076586 | orchestrator | 2026-02-08 02:53:52.076593 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-02-08 02:53:52.076601 | orchestrator | Sunday 08 February 2026 02:53:49 +0000 (0:00:01.126) 0:00:25.826 ******* 2026-02-08 02:53:52.076608 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPtRhQ4dBYphujmd17WIYNsKHsZ0ibR0EAJUl6eaiimfkxfqX5CdncO2d55BTVtmKsXqg1z3aqrPTB42Xjkcnpg=) 2026-02-08 02:53:52.076616 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAINoz6GVn3kmPtSLijxKUbWaxf04ABoL3U7iHH5YOzX2p) 2026-02-08 02:53:52.076639 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQC3k4GPlTPbGlENZqggk6xlyv0vyxAWhJlEr0AR4GMeoCn12erVwEiooELYMO6uNbkJ66BAdqcfCY7zT1LS3mAUwJvm4wor5/uIvqyXZIeB89f54p+yeGFvHqT82ed3QnxfKpsbOItfNJNh3+oTKlDfjNkp3S0zVWVG8qry9wC2eGDwhVjMMhNjapMvn19aIsCbnN/umdqdajeTwKe6cOA6rit9yBx9MCuiDpGwqZ5N6vs+TT85R+3tM6fs0Wr3iiSAvrv7YjkDBqAgt2Y2Yrto7OpUex7fn8uKeEpmAaQUIuWcQRi4o4RwihIJU9KP1006vEcaa85nSxDSGG4Ay5Fhg9hA5rqAPsLr//nyPTAR6eU3dAJhrD1wd7dap/x/h3Y1cEKi+wjfsJn+FhadeTkcxK/u8OgU6gJpc46mTbX0IfaPiGrmQDnlqGyvf8rPImTOVCH3tjaLUeZrSU9fR2g3q0Pck2kRHUh7oNP/BUQF7MRAAtcPbnc+/YsFmEs7stM=) 2026-02-08 02:53:52.076647 | orchestrator | 2026-02-08 02:53:52.076655 | orchestrator | TASK [osism.commons.known_hosts : Write static known_hosts entries] ************ 2026-02-08 02:53:52.076679 | orchestrator | Sunday 08 February 2026 02:53:50 +0000 (0:00:01.091) 0:00:26.918 ******* 2026-02-08 02:53:52.076688 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)  2026-02-08 02:53:52.076695 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)  2026-02-08 02:53:52.076702 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)  2026-02-08 02:53:52.076710 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)  2026-02-08 02:53:52.076717 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2026-02-08 02:53:52.076724 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)  2026-02-08 02:53:52.076731 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)  2026-02-08 02:53:52.076739 | orchestrator | skipping: [testbed-manager] 2026-02-08 02:53:52.076746 | orchestrator | 2026-02-08 02:53:52.076767 | orchestrator | TASK [osism.commons.known_hosts : Write extra known_hosts entries] ************* 2026-02-08 02:53:52.076775 | orchestrator | Sunday 08 February 2026 02:53:50 +0000 (0:00:00.179) 0:00:27.098 ******* 2026-02-08 02:53:52.076782 | orchestrator | skipping: [testbed-manager] 2026-02-08 02:53:52.076790 | orchestrator | 
2026-02-08 02:53:52.076797 | orchestrator | TASK [osism.commons.known_hosts : Delete known_hosts entries] ****************** 2026-02-08 02:53:52.076804 | orchestrator | Sunday 08 February 2026 02:53:51 +0000 (0:00:00.057) 0:00:27.155 ******* 2026-02-08 02:53:52.076817 | orchestrator | skipping: [testbed-manager] 2026-02-08 02:53:52.076824 | orchestrator | 2026-02-08 02:53:52.076831 | orchestrator | TASK [osism.commons.known_hosts : Set file permissions] ************************ 2026-02-08 02:53:52.076838 | orchestrator | Sunday 08 February 2026 02:53:51 +0000 (0:00:00.061) 0:00:27.217 ******* 2026-02-08 02:53:52.076846 | orchestrator | changed: [testbed-manager] 2026-02-08 02:53:52.076853 | orchestrator | 2026-02-08 02:53:52.076860 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-08 02:53:52.076868 | orchestrator | testbed-manager : ok=31  changed=15  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-02-08 02:53:52.076876 | orchestrator | 2026-02-08 02:53:52.076883 | orchestrator | 2026-02-08 02:53:52.076890 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-08 02:53:52.076898 | orchestrator | Sunday 08 February 2026 02:53:51 +0000 (0:00:00.730) 0:00:27.948 ******* 2026-02-08 02:53:52.076905 | orchestrator | =============================================================================== 2026-02-08 02:53:52.076912 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname --- 5.74s 2026-02-08 02:53:52.076919 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host --- 5.19s 2026-02-08 02:53:52.076928 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.22s 2026-02-08 02:53:52.076935 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.14s 2026-02-08 02:53:52.076942 | orchestrator | 
osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.13s 2026-02-08 02:53:52.076949 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.13s 2026-02-08 02:53:52.076959 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.11s 2026-02-08 02:53:52.076967 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.11s 2026-02-08 02:53:52.076976 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.10s 2026-02-08 02:53:52.076984 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.09s 2026-02-08 02:53:52.076993 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.09s 2026-02-08 02:53:52.077001 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.09s 2026-02-08 02:53:52.077010 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.09s 2026-02-08 02:53:52.077018 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.07s 2026-02-08 02:53:52.077032 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.07s 2026-02-08 02:53:52.077041 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.04s 2026-02-08 02:53:52.077049 | orchestrator | osism.commons.known_hosts : Set file permissions ------------------------ 0.73s 2026-02-08 02:53:52.077058 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host --- 0.19s 2026-02-08 02:53:52.077067 | orchestrator | osism.commons.known_hosts : Write static known_hosts entries ------------ 0.18s 2026-02-08 02:53:52.077076 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname --- 0.16s 2026-02-08 
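The known_hosts play above runs "Write scanned known_hosts entries" once per host and finishes with a "Set file permissions" task. A minimal sketch of that behavior, assuming lineinfile-style append-if-absent semantics; the key material below is a placeholder, not a real host key:

```shell
# Append a scanned "host keytype key" entry only if it is not already present,
# mirroring the idempotent behavior of the role's write-scanned.yml task.
KNOWN_HOSTS="$(mktemp)"

add_entry() {
    # grep -qxF: quiet, whole-line, fixed-string match
    grep -qxF "$1" "$KNOWN_HOSTS" || printf '%s\n' "$1" >> "$KNOWN_HOSTS"
}

add_entry "testbed-node-0 ssh-ed25519 AAAA...placeholder"
add_entry "testbed-node-0 ssh-ed25519 AAAA...placeholder"  # rerun: no duplicate

# Corresponds to the final "Set file permissions" task.
chmod 0644 "$KNOWN_HOSTS"
```

In the real role the entries come from `ssh-keyscan`, run once per inventory hostname and once per `ansible_host` address, which is why each node appears twice in the log (by name and by its 192.168.16.x address).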
02:53:52.383834 | orchestrator | + osism apply squid 2026-02-08 02:54:04.511851 | orchestrator | 2026-02-08 02:54:04 | INFO  | Task d80da1ef-e397-4a02-844e-3ea8e187802f (squid) was prepared for execution. 2026-02-08 02:54:04.511975 | orchestrator | 2026-02-08 02:54:04 | INFO  | It takes a moment until task d80da1ef-e397-4a02-844e-3ea8e187802f (squid) has been started and output is visible here. 2026-02-08 02:55:57.714964 | orchestrator | 2026-02-08 02:55:57.715103 | orchestrator | PLAY [Apply role squid] ******************************************************** 2026-02-08 02:55:57.715130 | orchestrator | 2026-02-08 02:55:57.715149 | orchestrator | TASK [osism.services.squid : Include install tasks] **************************** 2026-02-08 02:55:57.715168 | orchestrator | Sunday 08 February 2026 02:54:08 +0000 (0:00:00.162) 0:00:00.162 ******* 2026-02-08 02:55:57.715219 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/squid/tasks/install-Debian-family.yml for testbed-manager 2026-02-08 02:55:57.715240 | orchestrator | 2026-02-08 02:55:57.715258 | orchestrator | TASK [osism.services.squid : Install required packages] ************************ 2026-02-08 02:55:57.715277 | orchestrator | Sunday 08 February 2026 02:54:08 +0000 (0:00:00.084) 0:00:00.246 ******* 2026-02-08 02:55:57.715295 | orchestrator | ok: [testbed-manager] 2026-02-08 02:55:57.715314 | orchestrator | 2026-02-08 02:55:57.715331 | orchestrator | TASK [osism.services.squid : Create required directories] ********************** 2026-02-08 02:55:57.715348 | orchestrator | Sunday 08 February 2026 02:54:10 +0000 (0:00:01.507) 0:00:01.754 ******* 2026-02-08 02:55:57.715368 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration) 2026-02-08 02:55:57.715387 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration/conf.d) 2026-02-08 02:55:57.715406 | orchestrator | ok: [testbed-manager] => (item=/opt/squid) 2026-02-08 
02:55:57.715425 | orchestrator | 2026-02-08 02:55:57.715444 | orchestrator | TASK [osism.services.squid : Copy squid configuration files] ******************* 2026-02-08 02:55:57.715464 | orchestrator | Sunday 08 February 2026 02:54:11 +0000 (0:00:01.177) 0:00:02.932 ******* 2026-02-08 02:55:57.715483 | orchestrator | changed: [testbed-manager] => (item=osism.conf) 2026-02-08 02:55:57.715503 | orchestrator | 2026-02-08 02:55:57.715523 | orchestrator | TASK [osism.services.squid : Remove osism_allow_list.conf configuration file] *** 2026-02-08 02:55:57.715543 | orchestrator | Sunday 08 February 2026 02:54:12 +0000 (0:00:01.056) 0:00:03.988 ******* 2026-02-08 02:55:57.715563 | orchestrator | ok: [testbed-manager] 2026-02-08 02:55:57.715581 | orchestrator | 2026-02-08 02:55:57.715595 | orchestrator | TASK [osism.services.squid : Copy docker-compose.yml file] ********************* 2026-02-08 02:55:57.715610 | orchestrator | Sunday 08 February 2026 02:54:12 +0000 (0:00:00.345) 0:00:04.334 ******* 2026-02-08 02:55:57.715631 | orchestrator | changed: [testbed-manager] 2026-02-08 02:55:57.715742 | orchestrator | 2026-02-08 02:55:57.715762 | orchestrator | TASK [osism.services.squid : Manage squid service] ***************************** 2026-02-08 02:55:57.715782 | orchestrator | Sunday 08 February 2026 02:54:13 +0000 (0:00:00.863) 0:00:05.197 ******* 2026-02-08 02:55:57.715802 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage squid service (10 retries left). 
2026-02-08 02:55:57.715824 | orchestrator | ok: [testbed-manager] 2026-02-08 02:55:57.715836 | orchestrator | 2026-02-08 02:55:57.715847 | orchestrator | RUNNING HANDLER [osism.services.squid : Restart squid service] ***************** 2026-02-08 02:55:57.715892 | orchestrator | Sunday 08 February 2026 02:54:44 +0000 (0:00:30.984) 0:00:36.181 ******* 2026-02-08 02:55:57.715904 | orchestrator | changed: [testbed-manager] 2026-02-08 02:55:57.715914 | orchestrator | 2026-02-08 02:55:57.715925 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for squid service to start] ******* 2026-02-08 02:55:57.715936 | orchestrator | Sunday 08 February 2026 02:54:56 +0000 (0:00:11.920) 0:00:48.102 ******* 2026-02-08 02:55:57.715948 | orchestrator | Pausing for 60 seconds 2026-02-08 02:55:57.715959 | orchestrator | changed: [testbed-manager] 2026-02-08 02:55:57.715970 | orchestrator | 2026-02-08 02:55:57.715981 | orchestrator | RUNNING HANDLER [osism.services.squid : Register that squid service was restarted] *** 2026-02-08 02:55:57.715992 | orchestrator | Sunday 08 February 2026 02:55:56 +0000 (0:01:00.079) 0:01:48.181 ******* 2026-02-08 02:55:57.716003 | orchestrator | ok: [testbed-manager] 2026-02-08 02:55:57.716014 | orchestrator | 2026-02-08 02:55:57.716025 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for an healthy squid service] ***** 2026-02-08 02:55:57.716035 | orchestrator | Sunday 08 February 2026 02:55:56 +0000 (0:00:00.059) 0:01:48.241 ******* 2026-02-08 02:55:57.716046 | orchestrator | changed: [testbed-manager] 2026-02-08 02:55:57.716057 | orchestrator | 2026-02-08 02:55:57.716076 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-08 02:55:57.716094 | orchestrator | testbed-manager : ok=11  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-08 02:55:57.716112 | orchestrator | 2026-02-08 02:55:57.716129 | orchestrator | 2026-02-08 02:55:57.716145 | orchestrator | 
TASKS RECAP ******************************************************************** 2026-02-08 02:55:57.716163 | orchestrator | Sunday 08 February 2026 02:55:57 +0000 (0:00:00.630) 0:01:48.871 ******* 2026-02-08 02:55:57.716221 | orchestrator | =============================================================================== 2026-02-08 02:55:57.716243 | orchestrator | osism.services.squid : Wait for squid service to start ----------------- 60.08s 2026-02-08 02:55:57.716261 | orchestrator | osism.services.squid : Manage squid service ---------------------------- 30.98s 2026-02-08 02:55:57.716272 | orchestrator | osism.services.squid : Restart squid service --------------------------- 11.92s 2026-02-08 02:55:57.716283 | orchestrator | osism.services.squid : Install required packages ------------------------ 1.51s 2026-02-08 02:55:57.716294 | orchestrator | osism.services.squid : Create required directories ---------------------- 1.18s 2026-02-08 02:55:57.716325 | orchestrator | osism.services.squid : Copy squid configuration files ------------------- 1.06s 2026-02-08 02:55:57.716336 | orchestrator | osism.services.squid : Copy docker-compose.yml file --------------------- 0.86s 2026-02-08 02:55:57.716347 | orchestrator | osism.services.squid : Wait for an healthy squid service ---------------- 0.63s 2026-02-08 02:55:57.716358 | orchestrator | osism.services.squid : Remove osism_allow_list.conf configuration file --- 0.35s 2026-02-08 02:55:57.716369 | orchestrator | osism.services.squid : Include install tasks ---------------------------- 0.08s 2026-02-08 02:55:57.716380 | orchestrator | osism.services.squid : Register that squid service was restarted -------- 0.06s 2026-02-08 02:55:58.049257 | orchestrator | + [[ 9.5.0 != \l\a\t\e\s\t ]] 2026-02-08 02:55:58.049528 | orchestrator | ++ semver 9.5.0 10.0.0-0 2026-02-08 02:55:58.097380 | orchestrator | + [[ -1 -ge 0 ]] 2026-02-08 02:55:58.097488 | orchestrator | + /opt/configuration/scripts/set-kolla-namespace.sh 
kolla/release 2026-02-08 02:55:58.103788 | orchestrator | + set -e 2026-02-08 02:55:58.103844 | orchestrator | + NAMESPACE=kolla/release 2026-02-08 02:55:58.103860 | orchestrator | + sed -i 's#docker_namespace: .*#docker_namespace: kolla/release#g' /opt/configuration/inventory/group_vars/all/kolla.yml 2026-02-08 02:55:58.111001 | orchestrator | ++ semver 9.5.0 9.0.0 2026-02-08 02:55:58.179247 | orchestrator | + [[ 1 -lt 0 ]] 2026-02-08 02:55:58.180286 | orchestrator | + osism apply operator -u ubuntu -l testbed-nodes 2026-02-08 02:56:10.292966 | orchestrator | 2026-02-08 02:56:10 | INFO  | Task 4df77e56-3acb-45d1-8ac5-9579e194d484 (operator) was prepared for execution. 2026-02-08 02:56:10.293069 | orchestrator | 2026-02-08 02:56:10 | INFO  | It takes a moment until task 4df77e56-3acb-45d1-8ac5-9579e194d484 (operator) has been started and output is visible here. 2026-02-08 02:56:25.627868 | orchestrator | 2026-02-08 02:56:25.627967 | orchestrator | PLAY [Make ssh pipelining working] ********************************************* 2026-02-08 02:56:25.627979 | orchestrator | 2026-02-08 02:56:25.627986 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-02-08 02:56:25.627992 | orchestrator | Sunday 08 February 2026 02:56:14 +0000 (0:00:00.148) 0:00:00.148 ******* 2026-02-08 02:56:25.627997 | orchestrator | ok: [testbed-node-4] 2026-02-08 02:56:25.628003 | orchestrator | ok: [testbed-node-0] 2026-02-08 02:56:25.628008 | orchestrator | ok: [testbed-node-2] 2026-02-08 02:56:25.628013 | orchestrator | ok: [testbed-node-1] 2026-02-08 02:56:25.628018 | orchestrator | ok: [testbed-node-5] 2026-02-08 02:56:25.628023 | orchestrator | ok: [testbed-node-3] 2026-02-08 02:56:25.628027 | orchestrator | 2026-02-08 02:56:25.628033 | orchestrator | TASK [Do not require tty for all users] **************************************** 2026-02-08 02:56:25.628038 | orchestrator | Sunday 08 February 2026 02:56:17 +0000 (0:00:02.996) 0:00:03.145 
******* 2026-02-08 02:56:25.628042 | orchestrator | ok: [testbed-node-3] 2026-02-08 02:56:25.628047 | orchestrator | ok: [testbed-node-4] 2026-02-08 02:56:25.628052 | orchestrator | ok: [testbed-node-2] 2026-02-08 02:56:25.628075 | orchestrator | ok: [testbed-node-0] 2026-02-08 02:56:25.628084 | orchestrator | ok: [testbed-node-1] 2026-02-08 02:56:25.628092 | orchestrator | ok: [testbed-node-5] 2026-02-08 02:56:25.628100 | orchestrator | 2026-02-08 02:56:25.628108 | orchestrator | PLAY [Apply role operator] ***************************************************** 2026-02-08 02:56:25.628117 | orchestrator | 2026-02-08 02:56:25.628125 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] ***** 2026-02-08 02:56:25.628132 | orchestrator | Sunday 08 February 2026 02:56:18 +0000 (0:00:00.721) 0:00:03.866 ******* 2026-02-08 02:56:25.628141 | orchestrator | ok: [testbed-node-0] 2026-02-08 02:56:25.628149 | orchestrator | ok: [testbed-node-1] 2026-02-08 02:56:25.628157 | orchestrator | ok: [testbed-node-2] 2026-02-08 02:56:25.628164 | orchestrator | ok: [testbed-node-3] 2026-02-08 02:56:25.628168 | orchestrator | ok: [testbed-node-4] 2026-02-08 02:56:25.628174 | orchestrator | ok: [testbed-node-5] 2026-02-08 02:56:25.628211 | orchestrator | 2026-02-08 02:56:25.628221 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] *** 2026-02-08 02:56:25.628229 | orchestrator | Sunday 08 February 2026 02:56:18 +0000 (0:00:00.201) 0:00:04.068 ******* 2026-02-08 02:56:25.628237 | orchestrator | ok: [testbed-node-0] 2026-02-08 02:56:25.628245 | orchestrator | ok: [testbed-node-1] 2026-02-08 02:56:25.628253 | orchestrator | ok: [testbed-node-2] 2026-02-08 02:56:25.628262 | orchestrator | ok: [testbed-node-3] 2026-02-08 02:56:25.628269 | orchestrator | ok: [testbed-node-4] 2026-02-08 02:56:25.628277 | orchestrator | ok: [testbed-node-5] 2026-02-08 02:56:25.628286 | orchestrator | 2026-02-08 02:56:25.628293 | 
orchestrator | TASK [osism.commons.operator : Create operator group] ************************** 2026-02-08 02:56:25.628298 | orchestrator | Sunday 08 February 2026 02:56:18 +0000 (0:00:00.180) 0:00:04.248 ******* 2026-02-08 02:56:25.628303 | orchestrator | changed: [testbed-node-5] 2026-02-08 02:56:25.628313 | orchestrator | changed: [testbed-node-1] 2026-02-08 02:56:25.628321 | orchestrator | changed: [testbed-node-3] 2026-02-08 02:56:25.628329 | orchestrator | changed: [testbed-node-4] 2026-02-08 02:56:25.628337 | orchestrator | changed: [testbed-node-2] 2026-02-08 02:56:25.628344 | orchestrator | changed: [testbed-node-0] 2026-02-08 02:56:25.628353 | orchestrator | 2026-02-08 02:56:25.628361 | orchestrator | TASK [osism.commons.operator : Create user] ************************************ 2026-02-08 02:56:25.628368 | orchestrator | Sunday 08 February 2026 02:56:19 +0000 (0:00:00.593) 0:00:04.842 ******* 2026-02-08 02:56:25.628376 | orchestrator | changed: [testbed-node-4] 2026-02-08 02:56:25.628385 | orchestrator | changed: [testbed-node-2] 2026-02-08 02:56:25.628393 | orchestrator | changed: [testbed-node-5] 2026-02-08 02:56:25.628401 | orchestrator | changed: [testbed-node-1] 2026-02-08 02:56:25.628409 | orchestrator | changed: [testbed-node-3] 2026-02-08 02:56:25.628418 | orchestrator | changed: [testbed-node-0] 2026-02-08 02:56:25.628443 | orchestrator | 2026-02-08 02:56:25.628451 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ****************** 2026-02-08 02:56:25.628460 | orchestrator | Sunday 08 February 2026 02:56:19 +0000 (0:00:00.760) 0:00:05.602 ******* 2026-02-08 02:56:25.628468 | orchestrator | changed: [testbed-node-0] => (item=adm) 2026-02-08 02:56:25.628476 | orchestrator | changed: [testbed-node-1] => (item=adm) 2026-02-08 02:56:25.628484 | orchestrator | changed: [testbed-node-3] => (item=adm) 2026-02-08 02:56:25.628492 | orchestrator | changed: [testbed-node-2] => (item=adm) 2026-02-08 02:56:25.628500 | 
orchestrator | changed: [testbed-node-4] => (item=adm) 2026-02-08 02:56:25.628509 | orchestrator | changed: [testbed-node-5] => (item=adm) 2026-02-08 02:56:25.628517 | orchestrator | changed: [testbed-node-0] => (item=sudo) 2026-02-08 02:56:25.628525 | orchestrator | changed: [testbed-node-1] => (item=sudo) 2026-02-08 02:56:25.628533 | orchestrator | changed: [testbed-node-3] => (item=sudo) 2026-02-08 02:56:25.628541 | orchestrator | changed: [testbed-node-2] => (item=sudo) 2026-02-08 02:56:25.628549 | orchestrator | changed: [testbed-node-4] => (item=sudo) 2026-02-08 02:56:25.628557 | orchestrator | changed: [testbed-node-5] => (item=sudo) 2026-02-08 02:56:25.628564 | orchestrator | 2026-02-08 02:56:25.628572 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] ************************* 2026-02-08 02:56:25.628580 | orchestrator | Sunday 08 February 2026 02:56:21 +0000 (0:00:01.213) 0:00:06.816 ******* 2026-02-08 02:56:25.628587 | orchestrator | changed: [testbed-node-4] 2026-02-08 02:56:25.628595 | orchestrator | changed: [testbed-node-5] 2026-02-08 02:56:25.628604 | orchestrator | changed: [testbed-node-3] 2026-02-08 02:56:25.628611 | orchestrator | changed: [testbed-node-2] 2026-02-08 02:56:25.628619 | orchestrator | changed: [testbed-node-1] 2026-02-08 02:56:25.628627 | orchestrator | changed: [testbed-node-0] 2026-02-08 02:56:25.628635 | orchestrator | 2026-02-08 02:56:25.628643 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] *** 2026-02-08 02:56:25.628653 | orchestrator | Sunday 08 February 2026 02:56:22 +0000 (0:00:01.130) 0:00:07.947 ******* 2026-02-08 02:56:25.628661 | orchestrator | [WARNING]: Module remote_tmp /root/.ansible/tmp did not exist and was created 2026-02-08 02:56:25.628668 | orchestrator | with a mode of 0700, this may cause issues when running as another user. 
To 2026-02-08 02:56:25.628676 | orchestrator | avoid this, create the remote_tmp dir with the correct permissions manually 2026-02-08 02:56:25.628685 | orchestrator | changed: [testbed-node-3] => (item=export LANGUAGE=C.UTF-8) 2026-02-08 02:56:25.628707 | orchestrator | changed: [testbed-node-1] => (item=export LANGUAGE=C.UTF-8) 2026-02-08 02:56:25.628715 | orchestrator | changed: [testbed-node-5] => (item=export LANGUAGE=C.UTF-8) 2026-02-08 02:56:25.628723 | orchestrator | changed: [testbed-node-2] => (item=export LANGUAGE=C.UTF-8) 2026-02-08 02:56:25.628731 | orchestrator | changed: [testbed-node-4] => (item=export LANGUAGE=C.UTF-8) 2026-02-08 02:56:25.628739 | orchestrator | changed: [testbed-node-0] => (item=export LANGUAGE=C.UTF-8) 2026-02-08 02:56:25.628747 | orchestrator | changed: [testbed-node-4] => (item=export LANG=C.UTF-8) 2026-02-08 02:56:25.628755 | orchestrator | changed: [testbed-node-2] => (item=export LANG=C.UTF-8) 2026-02-08 02:56:25.628763 | orchestrator | changed: [testbed-node-3] => (item=export LANG=C.UTF-8) 2026-02-08 02:56:25.628771 | orchestrator | changed: [testbed-node-1] => (item=export LANG=C.UTF-8) 2026-02-08 02:56:25.628779 | orchestrator | changed: [testbed-node-5] => (item=export LANG=C.UTF-8) 2026-02-08 02:56:25.628787 | orchestrator | changed: [testbed-node-0] => (item=export LANG=C.UTF-8) 2026-02-08 02:56:25.628795 | orchestrator | changed: [testbed-node-2] => (item=export LC_ALL=C.UTF-8) 2026-02-08 02:56:25.628803 | orchestrator | changed: [testbed-node-3] => (item=export LC_ALL=C.UTF-8) 2026-02-08 02:56:25.628810 | orchestrator | changed: [testbed-node-1] => (item=export LC_ALL=C.UTF-8) 2026-02-08 02:56:25.628818 | orchestrator | changed: [testbed-node-4] => (item=export LC_ALL=C.UTF-8) 2026-02-08 02:56:25.628826 | orchestrator | changed: [testbed-node-5] => (item=export LC_ALL=C.UTF-8) 2026-02-08 02:56:25.628840 | orchestrator | changed: [testbed-node-0] => (item=export LC_ALL=C.UTF-8) 2026-02-08 02:56:25.628849 | 
orchestrator | 2026-02-08 02:56:25.628857 | orchestrator | TASK [osism.commons.operator : Set custom environment variables in .bashrc configuration file] *** 2026-02-08 02:56:25.628865 | orchestrator | Sunday 08 February 2026 02:56:23 +0000 (0:00:01.207) 0:00:09.154 ******* 2026-02-08 02:56:25.628873 | orchestrator | skipping: [testbed-node-0] 2026-02-08 02:56:25.628881 | orchestrator | skipping: [testbed-node-1] 2026-02-08 02:56:25.628888 | orchestrator | skipping: [testbed-node-2] 2026-02-08 02:56:25.628896 | orchestrator | skipping: [testbed-node-3] 2026-02-08 02:56:25.628904 | orchestrator | skipping: [testbed-node-4] 2026-02-08 02:56:25.628912 | orchestrator | skipping: [testbed-node-5] 2026-02-08 02:56:25.628920 | orchestrator | 2026-02-08 02:56:25.628928 | orchestrator | TASK [osism.commons.operator : Set custom PS1 prompt in .bashrc configuration file] *** 2026-02-08 02:56:25.628936 | orchestrator | Sunday 08 February 2026 02:56:23 +0000 (0:00:00.152) 0:00:09.306 ******* 2026-02-08 02:56:25.628944 | orchestrator | skipping: [testbed-node-0] 2026-02-08 02:56:25.628952 | orchestrator | skipping: [testbed-node-1] 2026-02-08 02:56:25.628960 | orchestrator | skipping: [testbed-node-2] 2026-02-08 02:56:25.628968 | orchestrator | skipping: [testbed-node-3] 2026-02-08 02:56:25.628976 | orchestrator | skipping: [testbed-node-4] 2026-02-08 02:56:25.628983 | orchestrator | skipping: [testbed-node-5] 2026-02-08 02:56:25.628991 | orchestrator | 2026-02-08 02:56:25.628999 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] ************************** 2026-02-08 02:56:25.629007 | orchestrator | Sunday 08 February 2026 02:56:23 +0000 (0:00:00.198) 0:00:09.505 ******* 2026-02-08 02:56:25.629015 | orchestrator | changed: [testbed-node-3] 2026-02-08 02:56:25.629023 | orchestrator | changed: [testbed-node-0] 2026-02-08 02:56:25.629031 | orchestrator | changed: [testbed-node-5] 2026-02-08 02:56:25.629039 | orchestrator | changed: [testbed-node-2] 2026-02-08 
02:56:25.629047 | orchestrator | changed: [testbed-node-4] 2026-02-08 02:56:25.629054 | orchestrator | changed: [testbed-node-1] 2026-02-08 02:56:25.629062 | orchestrator | 2026-02-08 02:56:25.629070 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************ 2026-02-08 02:56:25.629078 | orchestrator | Sunday 08 February 2026 02:56:24 +0000 (0:00:00.573) 0:00:10.078 ******* 2026-02-08 02:56:25.629087 | orchestrator | skipping: [testbed-node-0] 2026-02-08 02:56:25.629094 | orchestrator | skipping: [testbed-node-1] 2026-02-08 02:56:25.629102 | orchestrator | skipping: [testbed-node-2] 2026-02-08 02:56:25.629110 | orchestrator | skipping: [testbed-node-3] 2026-02-08 02:56:25.629118 | orchestrator | skipping: [testbed-node-4] 2026-02-08 02:56:25.629126 | orchestrator | skipping: [testbed-node-5] 2026-02-08 02:56:25.629134 | orchestrator | 2026-02-08 02:56:25.629142 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************ 2026-02-08 02:56:25.629150 | orchestrator | Sunday 08 February 2026 02:56:24 +0000 (0:00:00.161) 0:00:10.240 ******* 2026-02-08 02:56:25.629158 | orchestrator | changed: [testbed-node-1] => (item=None) 2026-02-08 02:56:25.629165 | orchestrator | changed: [testbed-node-1] 2026-02-08 02:56:25.629173 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-02-08 02:56:25.629197 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-02-08 02:56:25.629205 | orchestrator | changed: [testbed-node-3] 2026-02-08 02:56:25.629221 | orchestrator | changed: [testbed-node-4] 2026-02-08 02:56:25.629230 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-02-08 02:56:25.629238 | orchestrator | changed: [testbed-node-0] 2026-02-08 02:56:25.629246 | orchestrator | changed: [testbed-node-2] => (item=None) 2026-02-08 02:56:25.629254 | orchestrator | changed: [testbed-node-2] 2026-02-08 02:56:25.629262 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-02-08 
02:56:25.629271 | orchestrator | changed: [testbed-node-5] 2026-02-08 02:56:25.629279 | orchestrator | 2026-02-08 02:56:25.629287 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] ********************* 2026-02-08 02:56:25.629296 | orchestrator | Sunday 08 February 2026 02:56:25 +0000 (0:00:00.674) 0:00:10.915 ******* 2026-02-08 02:56:25.629309 | orchestrator | skipping: [testbed-node-0] 2026-02-08 02:56:25.629317 | orchestrator | skipping: [testbed-node-1] 2026-02-08 02:56:25.629326 | orchestrator | skipping: [testbed-node-2] 2026-02-08 02:56:25.629334 | orchestrator | skipping: [testbed-node-3] 2026-02-08 02:56:25.629342 | orchestrator | skipping: [testbed-node-4] 2026-02-08 02:56:25.629351 | orchestrator | skipping: [testbed-node-5] 2026-02-08 02:56:25.629359 | orchestrator | 2026-02-08 02:56:25.629367 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] ***************** 2026-02-08 02:56:25.629375 | orchestrator | Sunday 08 February 2026 02:56:25 +0000 (0:00:00.195) 0:00:11.110 ******* 2026-02-08 02:56:25.629383 | orchestrator | skipping: [testbed-node-0] 2026-02-08 02:56:25.629391 | orchestrator | skipping: [testbed-node-1] 2026-02-08 02:56:25.629399 | orchestrator | skipping: [testbed-node-2] 2026-02-08 02:56:25.629407 | orchestrator | skipping: [testbed-node-3] 2026-02-08 02:56:25.629421 | orchestrator | skipping: [testbed-node-4] 2026-02-08 02:56:26.859647 | orchestrator | skipping: [testbed-node-5] 2026-02-08 02:56:26.859739 | orchestrator | 2026-02-08 02:56:26.859753 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] ************** 2026-02-08 02:56:26.859763 | orchestrator | Sunday 08 February 2026 02:56:25 +0000 (0:00:00.191) 0:00:11.302 ******* 2026-02-08 02:56:26.859772 | orchestrator | skipping: [testbed-node-0] 2026-02-08 02:56:26.859781 | orchestrator | skipping: [testbed-node-1] 2026-02-08 02:56:26.859790 | orchestrator | skipping: [testbed-node-2] 2026-02-08 
02:56:26.859798 | orchestrator | skipping: [testbed-node-3] 2026-02-08 02:56:26.859807 | orchestrator | skipping: [testbed-node-4] 2026-02-08 02:56:26.859816 | orchestrator | skipping: [testbed-node-5] 2026-02-08 02:56:26.859824 | orchestrator | 2026-02-08 02:56:26.859833 | orchestrator | TASK [osism.commons.operator : Set password] *********************************** 2026-02-08 02:56:26.859842 | orchestrator | Sunday 08 February 2026 02:56:25 +0000 (0:00:00.174) 0:00:11.477 ******* 2026-02-08 02:56:26.859850 | orchestrator | changed: [testbed-node-1] 2026-02-08 02:56:26.859859 | orchestrator | changed: [testbed-node-0] 2026-02-08 02:56:26.859885 | orchestrator | changed: [testbed-node-2] 2026-02-08 02:56:26.859894 | orchestrator | changed: [testbed-node-3] 2026-02-08 02:56:26.859903 | orchestrator | changed: [testbed-node-4] 2026-02-08 02:56:26.859911 | orchestrator | changed: [testbed-node-5] 2026-02-08 02:56:26.859920 | orchestrator | 2026-02-08 02:56:26.859929 | orchestrator | TASK [osism.commons.operator : Unset & lock password] ************************** 2026-02-08 02:56:26.859937 | orchestrator | Sunday 08 February 2026 02:56:26 +0000 (0:00:00.566) 0:00:12.043 ******* 2026-02-08 02:56:26.859946 | orchestrator | skipping: [testbed-node-0] 2026-02-08 02:56:26.859955 | orchestrator | skipping: [testbed-node-1] 2026-02-08 02:56:26.859964 | orchestrator | skipping: [testbed-node-2] 2026-02-08 02:56:26.859972 | orchestrator | skipping: [testbed-node-3] 2026-02-08 02:56:26.859981 | orchestrator | skipping: [testbed-node-4] 2026-02-08 02:56:26.859989 | orchestrator | skipping: [testbed-node-5] 2026-02-08 02:56:26.859998 | orchestrator | 2026-02-08 02:56:26.860007 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-08 02:56:26.860016 | orchestrator | testbed-node-0 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-02-08 02:56:26.860026 | orchestrator | testbed-node-1 : ok=12  changed=8 
 unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-02-08 02:56:26.860035 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-02-08 02:56:26.860044 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-02-08 02:56:26.860053 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-02-08 02:56:26.860083 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-02-08 02:56:26.860092 | orchestrator | 2026-02-08 02:56:26.860101 | orchestrator | 2026-02-08 02:56:26.860109 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-08 02:56:26.860118 | orchestrator | Sunday 08 February 2026 02:56:26 +0000 (0:00:00.231) 0:00:12.275 ******* 2026-02-08 02:56:26.860127 | orchestrator | =============================================================================== 2026-02-08 02:56:26.860135 | orchestrator | Gathering Facts --------------------------------------------------------- 3.00s 2026-02-08 02:56:26.860144 | orchestrator | osism.commons.operator : Add user to additional groups ------------------ 1.21s 2026-02-08 02:56:26.860153 | orchestrator | osism.commons.operator : Set language variables in .bashrc configuration file --- 1.21s 2026-02-08 02:56:26.860162 | orchestrator | osism.commons.operator : Copy user sudoers file ------------------------- 1.13s 2026-02-08 02:56:26.860170 | orchestrator | osism.commons.operator : Create user ------------------------------------ 0.76s 2026-02-08 02:56:26.860244 | orchestrator | Do not require tty for all users ---------------------------------------- 0.72s 2026-02-08 02:56:26.860263 | orchestrator | osism.commons.operator : Set ssh authorized keys ------------------------ 0.67s 2026-02-08 02:56:26.860277 | orchestrator | osism.commons.operator : Create 
operator group -------------------------- 0.59s 2026-02-08 02:56:26.860291 | orchestrator | osism.commons.operator : Create .ssh directory -------------------------- 0.57s 2026-02-08 02:56:26.860306 | orchestrator | osism.commons.operator : Set password ----------------------------------- 0.57s 2026-02-08 02:56:26.860321 | orchestrator | osism.commons.operator : Unset & lock password -------------------------- 0.23s 2026-02-08 02:56:26.860335 | orchestrator | osism.commons.operator : Gather variables for each operating system ----- 0.20s 2026-02-08 02:56:26.860350 | orchestrator | osism.commons.operator : Set custom PS1 prompt in .bashrc configuration file --- 0.20s 2026-02-08 02:56:26.860366 | orchestrator | osism.commons.operator : Delete ssh authorized keys --------------------- 0.20s 2026-02-08 02:56:26.860381 | orchestrator | osism.commons.operator : Set authorized GitHub accounts ----------------- 0.19s 2026-02-08 02:56:26.860396 | orchestrator | osism.commons.operator : Set operator_groups variable to default value --- 0.18s 2026-02-08 02:56:26.860409 | orchestrator | osism.commons.operator : Delete authorized GitHub accounts -------------- 0.17s 2026-02-08 02:56:26.860419 | orchestrator | osism.commons.operator : Check number of SSH authorized keys ------------ 0.16s 2026-02-08 02:56:26.860429 | orchestrator | osism.commons.operator : Set custom environment variables in .bashrc configuration file --- 0.15s 2026-02-08 02:56:27.187795 | orchestrator | + osism apply --environment custom facts 2026-02-08 02:56:29.054438 | orchestrator | 2026-02-08 02:56:29 | INFO  | Trying to run play facts in environment custom 2026-02-08 02:56:39.230706 | orchestrator | 2026-02-08 02:56:39 | INFO  | Task 0d68ef2e-8793-407d-97a9-00d86a571470 (facts) was prepared for execution. 2026-02-08 02:56:39.230841 | orchestrator | 2026-02-08 02:56:39 | INFO  | It takes a moment until task 0d68ef2e-8793-407d-97a9-00d86a571470 (facts) has been started and output is visible here. 
2026-02-08 02:57:18.664993 | orchestrator | 2026-02-08 02:57:18.665086 | orchestrator | PLAY [Copy custom network devices fact] **************************************** 2026-02-08 02:57:18.665098 | orchestrator | 2026-02-08 02:57:18.665106 | orchestrator | TASK [Create custom facts directory] ******************************************* 2026-02-08 02:57:18.665114 | orchestrator | Sunday 08 February 2026 02:56:43 +0000 (0:00:00.089) 0:00:00.089 ******* 2026-02-08 02:57:18.665122 | orchestrator | ok: [testbed-manager] 2026-02-08 02:57:18.665131 | orchestrator | changed: [testbed-node-3] 2026-02-08 02:57:18.665139 | orchestrator | changed: [testbed-node-1] 2026-02-08 02:57:18.665146 | orchestrator | changed: [testbed-node-2] 2026-02-08 02:57:18.665154 | orchestrator | changed: [testbed-node-4] 2026-02-08 02:57:18.665161 | orchestrator | changed: [testbed-node-0] 2026-02-08 02:57:18.665235 | orchestrator | changed: [testbed-node-5] 2026-02-08 02:57:18.665245 | orchestrator | 2026-02-08 02:57:18.665252 | orchestrator | TASK [Copy fact file] ********************************************************** 2026-02-08 02:57:18.665260 | orchestrator | Sunday 08 February 2026 02:56:44 +0000 (0:00:01.281) 0:00:01.371 ******* 2026-02-08 02:57:18.665267 | orchestrator | ok: [testbed-manager] 2026-02-08 02:57:18.665274 | orchestrator | changed: [testbed-node-4] 2026-02-08 02:57:18.665282 | orchestrator | changed: [testbed-node-1] 2026-02-08 02:57:18.665289 | orchestrator | changed: [testbed-node-0] 2026-02-08 02:57:18.665296 | orchestrator | changed: [testbed-node-3] 2026-02-08 02:57:18.665303 | orchestrator | changed: [testbed-node-5] 2026-02-08 02:57:18.665310 | orchestrator | changed: [testbed-node-2] 2026-02-08 02:57:18.665317 | orchestrator | 2026-02-08 02:57:18.665324 | orchestrator | PLAY [Copy custom ceph devices facts] ****************************************** 2026-02-08 02:57:18.665332 | orchestrator | 2026-02-08 02:57:18.665340 | orchestrator | TASK 
[osism.commons.repository : Gather variables for each operating system] *** 2026-02-08 02:57:18.665352 | orchestrator | Sunday 08 February 2026 02:56:45 +0000 (0:00:01.146) 0:00:02.518 ******* 2026-02-08 02:57:18.665369 | orchestrator | ok: [testbed-node-3] 2026-02-08 02:57:18.665383 | orchestrator | ok: [testbed-node-4] 2026-02-08 02:57:18.665395 | orchestrator | ok: [testbed-node-5] 2026-02-08 02:57:18.665406 | orchestrator | 2026-02-08 02:57:18.665418 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2026-02-08 02:57:18.665430 | orchestrator | Sunday 08 February 2026 02:56:45 +0000 (0:00:00.117) 0:00:02.635 ******* 2026-02-08 02:57:18.665441 | orchestrator | ok: [testbed-node-3] 2026-02-08 02:57:18.665452 | orchestrator | ok: [testbed-node-4] 2026-02-08 02:57:18.665463 | orchestrator | ok: [testbed-node-5] 2026-02-08 02:57:18.665475 | orchestrator | 2026-02-08 02:57:18.665485 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2026-02-08 02:57:18.665496 | orchestrator | Sunday 08 February 2026 02:56:46 +0000 (0:00:00.210) 0:00:02.846 ******* 2026-02-08 02:57:18.665507 | orchestrator | ok: [testbed-node-3] 2026-02-08 02:57:18.665518 | orchestrator | ok: [testbed-node-4] 2026-02-08 02:57:18.665528 | orchestrator | ok: [testbed-node-5] 2026-02-08 02:57:18.665538 | orchestrator | 2026-02-08 02:57:18.665549 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2026-02-08 02:57:18.665562 | orchestrator | Sunday 08 February 2026 02:56:46 +0000 (0:00:00.211) 0:00:03.058 ******* 2026-02-08 02:57:18.665574 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-08 02:57:18.665586 | orchestrator | 2026-02-08 02:57:18.665598 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d 
directory] ***** 2026-02-08 02:57:18.665609 | orchestrator | Sunday 08 February 2026 02:56:46 +0000 (0:00:00.172) 0:00:03.230 ******* 2026-02-08 02:57:18.665620 | orchestrator | ok: [testbed-node-3] 2026-02-08 02:57:18.665631 | orchestrator | ok: [testbed-node-4] 2026-02-08 02:57:18.665642 | orchestrator | ok: [testbed-node-5] 2026-02-08 02:57:18.665653 | orchestrator | 2026-02-08 02:57:18.665665 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2026-02-08 02:57:18.665677 | orchestrator | Sunday 08 February 2026 02:56:46 +0000 (0:00:00.402) 0:00:03.633 ******* 2026-02-08 02:57:18.665689 | orchestrator | skipping: [testbed-node-3] 2026-02-08 02:57:18.665701 | orchestrator | skipping: [testbed-node-4] 2026-02-08 02:57:18.665713 | orchestrator | skipping: [testbed-node-5] 2026-02-08 02:57:18.665724 | orchestrator | 2026-02-08 02:57:18.665736 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2026-02-08 02:57:18.665748 | orchestrator | Sunday 08 February 2026 02:56:47 +0000 (0:00:00.136) 0:00:03.769 ******* 2026-02-08 02:57:18.665761 | orchestrator | changed: [testbed-node-4] 2026-02-08 02:57:18.665773 | orchestrator | changed: [testbed-node-3] 2026-02-08 02:57:18.665784 | orchestrator | changed: [testbed-node-5] 2026-02-08 02:57:18.665791 | orchestrator | 2026-02-08 02:57:18.665798 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2026-02-08 02:57:18.665816 | orchestrator | Sunday 08 February 2026 02:56:48 +0000 (0:00:00.968) 0:00:04.738 ******* 2026-02-08 02:57:18.665824 | orchestrator | ok: [testbed-node-3] 2026-02-08 02:57:18.665831 | orchestrator | ok: [testbed-node-4] 2026-02-08 02:57:18.665838 | orchestrator | ok: [testbed-node-5] 2026-02-08 02:57:18.665845 | orchestrator | 2026-02-08 02:57:18.665852 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2026-02-08 
02:57:18.665859 | orchestrator | Sunday 08 February 2026 02:56:48 +0000 (0:00:00.435) 0:00:05.173 ******* 2026-02-08 02:57:18.665867 | orchestrator | changed: [testbed-node-4] 2026-02-08 02:57:18.665874 | orchestrator | changed: [testbed-node-5] 2026-02-08 02:57:18.665881 | orchestrator | changed: [testbed-node-3] 2026-02-08 02:57:18.665888 | orchestrator | 2026-02-08 02:57:18.665895 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2026-02-08 02:57:18.665902 | orchestrator | Sunday 08 February 2026 02:56:49 +0000 (0:00:00.904) 0:00:06.078 ******* 2026-02-08 02:57:18.665909 | orchestrator | changed: [testbed-node-5] 2026-02-08 02:57:18.665916 | orchestrator | changed: [testbed-node-3] 2026-02-08 02:57:18.665924 | orchestrator | changed: [testbed-node-4] 2026-02-08 02:57:18.665931 | orchestrator | 2026-02-08 02:57:18.665982 | orchestrator | TASK [Install required packages (RedHat)] ************************************** 2026-02-08 02:57:18.665991 | orchestrator | Sunday 08 February 2026 02:57:03 +0000 (0:00:14.166) 0:00:20.245 ******* 2026-02-08 02:57:18.665998 | orchestrator | skipping: [testbed-node-3] 2026-02-08 02:57:18.666005 | orchestrator | skipping: [testbed-node-4] 2026-02-08 02:57:18.666075 | orchestrator | skipping: [testbed-node-5] 2026-02-08 02:57:18.666092 | orchestrator | 2026-02-08 02:57:18.666139 | orchestrator | TASK [Install required packages (Debian)] ************************************** 2026-02-08 02:57:18.666194 | orchestrator | Sunday 08 February 2026 02:57:03 +0000 (0:00:00.113) 0:00:20.358 ******* 2026-02-08 02:57:18.666209 | orchestrator | changed: [testbed-node-5] 2026-02-08 02:57:18.666216 | orchestrator | changed: [testbed-node-4] 2026-02-08 02:57:18.666223 | orchestrator | changed: [testbed-node-3] 2026-02-08 02:57:18.666231 | orchestrator | 2026-02-08 02:57:18.666238 | orchestrator | TASK [Create custom facts directory] ******************************************* 2026-02-08 
02:57:18.666251 | orchestrator | Sunday 08 February 2026 02:57:10 +0000 (0:00:06.499) 0:00:26.858 *******
2026-02-08 02:57:18.666258 | orchestrator | ok: [testbed-node-3]
2026-02-08 02:57:18.666265 | orchestrator | ok: [testbed-node-4]
2026-02-08 02:57:18.666273 | orchestrator | ok: [testbed-node-5]
2026-02-08 02:57:18.666280 | orchestrator |
2026-02-08 02:57:18.666287 | orchestrator | TASK [Copy fact files] *********************************************************
2026-02-08 02:57:18.666294 | orchestrator | Sunday 08 February 2026 02:57:10 +0000 (0:00:00.468) 0:00:27.327 *******
2026-02-08 02:57:18.666301 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices)
2026-02-08 02:57:18.666309 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices)
2026-02-08 02:57:18.666317 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices)
2026-02-08 02:57:18.666324 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices_all)
2026-02-08 02:57:18.666331 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices_all)
2026-02-08 02:57:18.666338 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices_all)
2026-02-08 02:57:18.666348 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices)
2026-02-08 02:57:18.666364 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices)
2026-02-08 02:57:18.666380 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices)
2026-02-08 02:57:18.666392 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices_all)
2026-02-08 02:57:18.666403 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices_all)
2026-02-08 02:57:18.666414 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices_all)
2026-02-08 02:57:18.666425 | orchestrator |
2026-02-08 02:57:18.666437 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] *****
2026-02-08 02:57:18.666459 | orchestrator | Sunday 08 February 2026 02:57:13 +0000 (0:00:03.325) 0:00:30.653 *******
2026-02-08 02:57:18.666471 | orchestrator | ok: [testbed-node-3]
2026-02-08 02:57:18.666481 | orchestrator | ok: [testbed-node-5]
2026-02-08 02:57:18.666494 | orchestrator | ok: [testbed-node-4]
2026-02-08 02:57:18.666506 | orchestrator |
2026-02-08 02:57:18.666517 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2026-02-08 02:57:18.666529 | orchestrator |
2026-02-08 02:57:18.666541 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-02-08 02:57:18.666553 | orchestrator | Sunday 08 February 2026 02:57:15 +0000 (0:00:01.244) 0:00:31.897 *******
2026-02-08 02:57:18.666564 | orchestrator | ok: [testbed-node-1]
2026-02-08 02:57:18.666576 | orchestrator | ok: [testbed-node-2]
2026-02-08 02:57:18.666588 | orchestrator | ok: [testbed-node-0]
2026-02-08 02:57:18.666600 | orchestrator | ok: [testbed-manager]
2026-02-08 02:57:18.666612 | orchestrator | ok: [testbed-node-5]
2026-02-08 02:57:18.666624 | orchestrator | ok: [testbed-node-3]
2026-02-08 02:57:18.666638 | orchestrator | ok: [testbed-node-4]
2026-02-08 02:57:18.666650 | orchestrator |
2026-02-08 02:57:18.666663 | orchestrator | PLAY RECAP *********************************************************************
2026-02-08 02:57:18.666676 | orchestrator | testbed-manager : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-08 02:57:18.666686 | orchestrator | testbed-node-0 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-08 02:57:18.666695 | orchestrator | testbed-node-1 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-08 02:57:18.666702 | orchestrator | testbed-node-2 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-08 02:57:18.666709 | orchestrator | testbed-node-3 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-08 02:57:18.666716 | orchestrator | testbed-node-4 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-08 02:57:18.666723 | orchestrator | testbed-node-5 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-08 02:57:18.666731 | orchestrator |
2026-02-08 02:57:18.666738 | orchestrator |
2026-02-08 02:57:18.666745 | orchestrator | TASKS RECAP ********************************************************************
2026-02-08 02:57:18.666752 | orchestrator | Sunday 08 February 2026 02:57:18 +0000 (0:00:03.480) 0:00:35.378 *******
2026-02-08 02:57:18.666759 | orchestrator | ===============================================================================
2026-02-08 02:57:18.666769 | orchestrator | osism.commons.repository : Update package cache ------------------------ 14.17s
2026-02-08 02:57:18.666781 | orchestrator | Install required packages (Debian) -------------------------------------- 6.50s
2026-02-08 02:57:18.666793 | orchestrator | Gathers facts about hosts ----------------------------------------------- 3.48s
2026-02-08 02:57:18.666806 | orchestrator | Copy fact files --------------------------------------------------------- 3.33s
2026-02-08 02:57:18.666816 | orchestrator | Create custom facts directory ------------------------------------------- 1.28s
2026-02-08 02:57:18.666823 | orchestrator | osism.commons.repository : Force update of package cache ---------------- 1.24s
2026-02-08 02:57:18.666839 | orchestrator | Copy fact file ---------------------------------------------------------- 1.15s
2026-02-08 02:57:18.914111 | orchestrator | osism.commons.repository : Copy 99osism apt configuration --------------- 0.97s
2026-02-08 02:57:18.914367 | orchestrator | osism.commons.repository : Copy ubuntu.sources file --------------------- 0.91s
2026-02-08 02:57:18.914421 | orchestrator | Create custom facts directory ------------------------------------------- 0.47s
2026-02-08 02:57:18.914471 | orchestrator | osism.commons.repository : Remove sources.list file --------------------- 0.44s
2026-02-08 02:57:18.914492 | orchestrator | osism.commons.repository : Create /etc/apt/sources.list.d directory ----- 0.40s
2026-02-08 02:57:18.914510 | orchestrator | osism.commons.repository : Set repositories to default ------------------ 0.21s
2026-02-08 02:57:18.914528 | orchestrator | osism.commons.repository : Set repository_default fact to default value --- 0.21s
2026-02-08 02:57:18.914540 | orchestrator | osism.commons.repository : Include distribution specific repository tasks --- 0.17s
2026-02-08 02:57:18.914551 | orchestrator | osism.commons.repository : Include tasks for Ubuntu < 24.04 ------------- 0.14s
2026-02-08 02:57:18.914562 | orchestrator | osism.commons.repository : Gather variables for each operating system --- 0.12s
2026-02-08 02:57:18.914577 | orchestrator | Install required packages (RedHat) -------------------------------------- 0.11s
2026-02-08 02:57:19.232281 | orchestrator | + osism apply bootstrap
2026-02-08 02:57:31.308740 | orchestrator | 2026-02-08 02:57:31 | INFO  | Task 8091fd3b-b518-4237-a9c4-a6ad87356930 (bootstrap) was prepared for execution.
2026-02-08 02:57:31.309488 | orchestrator | 2026-02-08 02:57:31 | INFO  | It takes a moment until task 8091fd3b-b518-4237-a9c4-a6ad87356930 (bootstrap) has been started and output is visible here.
2026-02-08 02:57:47.067448 | orchestrator |
2026-02-08 02:57:47.067546 | orchestrator | PLAY [Group hosts based on state bootstrap] ************************************
2026-02-08 02:57:47.067558 | orchestrator |
2026-02-08 02:57:47.067568 | orchestrator | TASK [Group hosts based on state bootstrap] ************************************
2026-02-08 02:57:47.067577 | orchestrator | Sunday 08 February 2026 02:57:35 +0000 (0:00:00.159) 0:00:00.159 *******
2026-02-08 02:57:47.067586 | orchestrator | ok: [testbed-manager]
2026-02-08 02:57:47.067596 | orchestrator | ok: [testbed-node-3]
2026-02-08 02:57:47.067605 | orchestrator | ok: [testbed-node-4]
2026-02-08 02:57:47.067614 | orchestrator | ok: [testbed-node-5]
2026-02-08 02:57:47.067622 | orchestrator | ok: [testbed-node-0]
2026-02-08 02:57:47.067631 | orchestrator | ok: [testbed-node-1]
2026-02-08 02:57:47.067640 | orchestrator | ok: [testbed-node-2]
2026-02-08 02:57:47.067648 | orchestrator |
2026-02-08 02:57:47.067657 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2026-02-08 02:57:47.067666 | orchestrator |
2026-02-08 02:57:47.067675 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-02-08 02:57:47.067684 | orchestrator | Sunday 08 February 2026 02:57:36 +0000 (0:00:00.262) 0:00:00.421 *******
2026-02-08 02:57:47.067692 | orchestrator | ok: [testbed-node-1]
2026-02-08 02:57:47.067701 | orchestrator | ok: [testbed-node-2]
2026-02-08 02:57:47.067709 | orchestrator | ok: [testbed-node-0]
2026-02-08 02:57:47.067718 | orchestrator | ok: [testbed-manager]
2026-02-08 02:57:47.067727 | orchestrator | ok: [testbed-node-3]
2026-02-08 02:57:47.067735 | orchestrator | ok: [testbed-node-4]
2026-02-08 02:57:47.067744 | orchestrator | ok: [testbed-node-5]
2026-02-08 02:57:47.067752 | orchestrator |
2026-02-08 02:57:47.067761 | orchestrator | PLAY [Gather facts for all hosts (if using --limit)] ***************************
2026-02-08 02:57:47.067769 | orchestrator |
2026-02-08 02:57:47.067778 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-02-08 02:57:47.067787 | orchestrator | Sunday 08 February 2026 02:57:39 +0000 (0:00:03.413) 0:00:03.835 *******
2026-02-08 02:57:47.067797 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)
2026-02-08 02:57:47.067806 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)
2026-02-08 02:57:47.067814 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)
2026-02-08 02:57:47.067823 | orchestrator | skipping: [testbed-node-3] => (item=testbed-manager)
2026-02-08 02:57:47.067831 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)
2026-02-08 02:57:47.067840 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-02-08 02:57:47.067849 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)
2026-02-08 02:57:47.067857 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-02-08 02:57:47.067866 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)
2026-02-08 02:57:47.067896 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-02-08 02:57:47.067905 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)
2026-02-08 02:57:47.067914 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2026-02-08 02:57:47.067922 | orchestrator | skipping: [testbed-manager]
2026-02-08 02:57:47.067931 | orchestrator | skipping: [testbed-node-4] => (item=testbed-manager)
2026-02-08 02:57:47.067939 | orchestrator | skipping: [testbed-node-5] => (item=testbed-manager)
2026-02-08 02:57:47.067948 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2026-02-08 02:57:47.067956 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2026-02-08 02:57:47.067965 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)
2026-02-08 02:57:47.067973 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)
2026-02-08 02:57:47.067982 | orchestrator | skipping: [testbed-node-3]
2026-02-08 02:57:47.067990 | orchestrator | skipping: [testbed-node-0] => (item=testbed-manager)
2026-02-08 02:57:47.068001 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)
2026-02-08 02:57:47.068011 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)
2026-02-08 02:57:47.068022 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2026-02-08 02:57:47.068033 | orchestrator | skipping: [testbed-node-2] => (item=testbed-manager)
2026-02-08 02:57:47.068043 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)
2026-02-08 02:57:47.068053 | orchestrator | skipping: [testbed-node-1] => (item=testbed-manager)
2026-02-08 02:57:47.068063 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2026-02-08 02:57:47.068073 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)
2026-02-08 02:57:47.068084 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)
2026-02-08 02:57:47.068095 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2026-02-08 02:57:47.068105 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2026-02-08 02:57:47.068114 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)
2026-02-08 02:57:47.068123 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-02-08 02:57:47.068131 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)
2026-02-08 02:57:47.068140 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2026-02-08 02:57:47.068148 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2026-02-08 02:57:47.068157 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-02-08 02:57:47.068165 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)
2026-02-08 02:57:47.068199 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2026-02-08 02:57:47.068208 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)
2026-02-08 02:57:47.068217 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-02-08 02:57:47.068226 | orchestrator | skipping: [testbed-node-4]
2026-02-08 02:57:47.068234 | orchestrator | skipping: [testbed-node-0]
2026-02-08 02:57:47.068243 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)
2026-02-08 02:57:47.068252 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2026-02-08 02:57:47.068275 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)
2026-02-08 02:57:47.068284 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)
2026-02-08 02:57:47.068293 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2026-02-08 02:57:47.068302 | orchestrator | skipping: [testbed-node-5]
2026-02-08 02:57:47.068310 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)
2026-02-08 02:57:47.068318 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)
2026-02-08 02:57:47.068327 | orchestrator | skipping: [testbed-node-2]
2026-02-08 02:57:47.068336 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)
2026-02-08 02:57:47.068352 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)
2026-02-08 02:57:47.068360 | orchestrator | skipping: [testbed-node-1]
2026-02-08 02:57:47.068369 | orchestrator |
2026-02-08 02:57:47.068377 | orchestrator | PLAY [Apply bootstrap roles part 1] ********************************************
2026-02-08 02:57:47.068386 | orchestrator |
2026-02-08 02:57:47.068395 | orchestrator | TASK [osism.commons.hostname : Set hostname] ***********************************
2026-02-08 02:57:47.068419 | orchestrator | Sunday 08 February 2026 02:57:40 +0000 (0:00:00.491) 0:00:04.327 *******
2026-02-08 02:57:47.068428 | orchestrator | ok: [testbed-node-3]
2026-02-08 02:57:47.068436 | orchestrator | ok: [testbed-node-1]
2026-02-08 02:57:47.068445 | orchestrator | ok: [testbed-node-4]
2026-02-08 02:57:47.068453 | orchestrator | ok: [testbed-manager]
2026-02-08 02:57:47.068462 | orchestrator | ok: [testbed-node-0]
2026-02-08 02:57:47.068470 | orchestrator | ok: [testbed-node-5]
2026-02-08 02:57:47.068479 | orchestrator | ok: [testbed-node-2]
2026-02-08 02:57:47.068487 | orchestrator |
2026-02-08 02:57:47.068496 | orchestrator | TASK [osism.commons.hostname : Copy /etc/hostname] *****************************
2026-02-08 02:57:47.068505 | orchestrator | Sunday 08 February 2026 02:57:41 +0000 (0:00:01.187) 0:00:05.514 *******
2026-02-08 02:57:47.068513 | orchestrator | ok: [testbed-node-0]
2026-02-08 02:57:47.068521 | orchestrator | ok: [testbed-node-4]
2026-02-08 02:57:47.068530 | orchestrator | ok: [testbed-node-5]
2026-02-08 02:57:47.068538 | orchestrator | ok: [testbed-manager]
2026-02-08 02:57:47.068547 | orchestrator | ok: [testbed-node-2]
2026-02-08 02:57:47.068555 | orchestrator | ok: [testbed-node-1]
2026-02-08 02:57:47.068563 | orchestrator | ok: [testbed-node-3]
2026-02-08 02:57:47.068572 | orchestrator |
2026-02-08 02:57:47.068581 | orchestrator | TASK [osism.commons.hosts : Include type specific tasks] ***********************
2026-02-08 02:57:47.068589 | orchestrator | Sunday 08 February 2026 02:57:42 +0000 (0:00:00.276) 0:00:06.704 *******
2026-02-08 02:57:47.068599 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/hosts/tasks/type-template.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-08 02:57:47.068610 | orchestrator |
2026-02-08 02:57:47.068619 | orchestrator | TASK [osism.commons.hosts : Copy /etc/hosts file] ******************************
2026-02-08 02:57:47.068627 | orchestrator | Sunday 08 February 2026 02:57:42 +0000 (0:00:00.276) 0:00:06.980 *******
2026-02-08 02:57:47.068636 | orchestrator | changed: [testbed-node-3]
2026-02-08 02:57:47.068644 | orchestrator | changed: [testbed-manager]
2026-02-08 02:57:47.068653 | orchestrator | changed: [testbed-node-4]
2026-02-08 02:57:47.068661 | orchestrator | changed: [testbed-node-1]
2026-02-08 02:57:47.068670 | orchestrator | changed: [testbed-node-5]
2026-02-08 02:57:47.068678 | orchestrator | changed: [testbed-node-0]
2026-02-08 02:57:47.068686 | orchestrator | changed: [testbed-node-2]
2026-02-08 02:57:47.068695 | orchestrator |
2026-02-08 02:57:47.068703 | orchestrator | TASK [osism.commons.proxy : Include distribution specific tasks] ***************
2026-02-08 02:57:47.068712 | orchestrator | Sunday 08 February 2026 02:57:44 +0000 (0:00:02.041) 0:00:09.022 *******
2026-02-08 02:57:47.068720 | orchestrator | skipping: [testbed-manager]
2026-02-08 02:57:47.068730 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/proxy/tasks/Debian-family.yml for testbed-node-3, testbed-node-5, testbed-node-4, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-08 02:57:47.068741 | orchestrator |
2026-02-08 02:57:47.068749 | orchestrator | TASK [osism.commons.proxy : Configure proxy parameters for apt] ****************
2026-02-08 02:57:47.068758 | orchestrator | Sunday 08 February 2026 02:57:44 +0000 (0:00:00.272) 0:00:09.294 *******
2026-02-08 02:57:47.068769 | orchestrator | changed: [testbed-node-4]
2026-02-08 02:57:47.068783 | orchestrator | changed: [testbed-node-3]
2026-02-08 02:57:47.068797 | orchestrator | changed: [testbed-node-0]
2026-02-08 02:57:47.068812 | orchestrator | changed: [testbed-node-1]
2026-02-08 02:57:47.068827 | orchestrator | changed: [testbed-node-2]
2026-02-08 02:57:47.068841 | orchestrator | changed: [testbed-node-5]
2026-02-08 02:57:47.068864 | orchestrator |
2026-02-08 02:57:47.068884 | orchestrator | TASK [osism.commons.proxy : Set system wide settings in environment file] ******
2026-02-08 02:57:47.068900 | orchestrator | Sunday 08 February 2026 02:57:45 +0000 (0:00:00.933) 0:00:10.228 *******
2026-02-08 02:57:47.068914 | orchestrator | skipping: [testbed-manager]
2026-02-08 02:57:47.068927 | orchestrator | changed: [testbed-node-4]
2026-02-08 02:57:47.068941 | orchestrator | changed: [testbed-node-1]
2026-02-08 02:57:47.068956 | orchestrator | changed: [testbed-node-0]
2026-02-08 02:57:47.068972 | orchestrator | changed: [testbed-node-3]
2026-02-08 02:57:47.068990 | orchestrator | changed: [testbed-node-2]
2026-02-08 02:57:47.069009 | orchestrator | changed: [testbed-node-5]
2026-02-08 02:57:47.069027 | orchestrator |
2026-02-08 02:57:47.069046 | orchestrator | TASK [osism.commons.proxy : Remove system wide settings in environment file] ***
2026-02-08 02:57:47.069064 | orchestrator | Sunday 08 February 2026 02:57:46 +0000 (0:00:00.543) 0:00:10.771 *******
2026-02-08 02:57:47.069076 | orchestrator | skipping: [testbed-node-3]
2026-02-08 02:57:47.069086 | orchestrator | skipping: [testbed-node-4]
2026-02-08 02:57:47.069097 | orchestrator | skipping: [testbed-node-5]
2026-02-08 02:57:47.069107 | orchestrator | skipping: [testbed-node-0]
2026-02-08 02:57:47.069118 | orchestrator | skipping: [testbed-node-1]
2026-02-08 02:57:47.069128 | orchestrator | skipping: [testbed-node-2]
2026-02-08 02:57:47.069139 | orchestrator | ok: [testbed-manager]
2026-02-08 02:57:47.069149 | orchestrator |
2026-02-08 02:57:47.069160 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] ***
2026-02-08 02:57:47.069233 | orchestrator | Sunday 08 February 2026 02:57:46 +0000 (0:00:00.249) 0:00:11.217 *******
2026-02-08 02:57:47.069248 | orchestrator | skipping: [testbed-manager]
2026-02-08 02:57:47.069259 | orchestrator | skipping: [testbed-node-3]
2026-02-08 02:57:47.069281 | orchestrator | skipping: [testbed-node-4]
2026-02-08 02:57:59.221363 | orchestrator | skipping: [testbed-node-5]
2026-02-08 02:57:59.221482 | orchestrator | skipping: [testbed-node-0]
2026-02-08 02:57:59.221509 | orchestrator | skipping: [testbed-node-1]
2026-02-08 02:57:59.221529 | orchestrator | skipping: [testbed-node-2]
2026-02-08 02:57:59.221548 | orchestrator |
2026-02-08 02:57:59.221568 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] *********************
2026-02-08 02:57:59.221590 | orchestrator | Sunday 08 February 2026 02:57:47 +0000 (0:00:00.249) 0:00:11.467 *******
2026-02-08 02:57:59.221612 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-08 02:57:59.221654 | orchestrator |
2026-02-08 02:57:59.221674 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] ***
2026-02-08 02:57:59.221693 | orchestrator | Sunday 08 February 2026 02:57:47 +0000 (0:00:00.311) 0:00:11.778 *******
2026-02-08 02:57:59.221713 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-08 02:57:59.221734 | orchestrator |
2026-02-08 02:57:59.221754 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf] ***
2026-02-08 02:57:59.221774 | orchestrator | Sunday 08 February 2026 02:57:47 +0000 (0:00:00.324) 0:00:12.102 *******
2026-02-08 02:57:59.221794 | orchestrator | ok: [testbed-node-0]
2026-02-08 02:57:59.221814 | orchestrator | ok: [testbed-node-5]
2026-02-08 02:57:59.221833 | orchestrator | ok: [testbed-node-2]
2026-02-08 02:57:59.221853 | orchestrator | ok: [testbed-node-1]
2026-02-08 02:57:59.221872 | orchestrator | ok: [testbed-node-4]
2026-02-08 02:57:59.221892 | orchestrator | ok: [testbed-node-3]
2026-02-08 02:57:59.221910 | orchestrator | ok: [testbed-manager]
2026-02-08 02:57:59.221927 | orchestrator |
2026-02-08 02:57:59.221946 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] *************
2026-02-08 02:57:59.221965 | orchestrator | Sunday 08 February 2026 02:57:49 +0000 (0:00:01.296) 0:00:13.399 *******
2026-02-08 02:57:59.222085 | orchestrator | skipping: [testbed-manager]
2026-02-08 02:57:59.222155 | orchestrator | skipping: [testbed-node-3]
2026-02-08 02:57:59.222200 | orchestrator | skipping: [testbed-node-4]
2026-02-08 02:57:59.222220 | orchestrator | skipping: [testbed-node-5]
2026-02-08 02:57:59.222240 | orchestrator | skipping: [testbed-node-0]
2026-02-08 02:57:59.222258 | orchestrator | skipping: [testbed-node-1]
2026-02-08 02:57:59.222275 | orchestrator | skipping: [testbed-node-2]
2026-02-08 02:57:59.222295 | orchestrator |
2026-02-08 02:57:59.222314 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] *****
2026-02-08 02:57:59.222332 | orchestrator | Sunday 08 February 2026 02:57:49 +0000 (0:00:00.346) 0:00:13.746 *******
2026-02-08 02:57:59.222351 | orchestrator | ok: [testbed-manager]
2026-02-08 02:57:59.222371 | orchestrator | ok: [testbed-node-3]
2026-02-08 02:57:59.222390 | orchestrator | ok: [testbed-node-4]
2026-02-08 02:57:59.222409 | orchestrator | ok: [testbed-node-5]
2026-02-08 02:57:59.222426 | orchestrator | ok: [testbed-node-1]
2026-02-08 02:57:59.222445 | orchestrator | ok: [testbed-node-2]
2026-02-08 02:57:59.222464 | orchestrator | ok: [testbed-node-0]
2026-02-08 02:57:59.222481 | orchestrator |
2026-02-08 02:57:59.222501 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] *******
2026-02-08 02:57:59.222519 | orchestrator | Sunday 08 February 2026 02:57:49 +0000 (0:00:00.507) 0:00:14.254 *******
2026-02-08 02:57:59.222570 | orchestrator | skipping: [testbed-manager]
2026-02-08 02:57:59.222587 | orchestrator | skipping: [testbed-node-3]
2026-02-08 02:57:59.222602 | orchestrator | skipping: [testbed-node-4]
2026-02-08 02:57:59.222617 | orchestrator | skipping: [testbed-node-5]
2026-02-08 02:57:59.222633 | orchestrator | skipping: [testbed-node-0]
2026-02-08 02:57:59.222651 | orchestrator | skipping: [testbed-node-1]
2026-02-08 02:57:59.222670 | orchestrator | skipping: [testbed-node-2]
2026-02-08 02:57:59.222689 | orchestrator |
2026-02-08 02:57:59.222708 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] ***
2026-02-08 02:57:59.222728 | orchestrator | Sunday 08 February 2026 02:57:50 +0000 (0:00:00.256) 0:00:14.510 *******
2026-02-08 02:57:59.222747 | orchestrator | changed: [testbed-node-3]
2026-02-08 02:57:59.222765 | orchestrator | ok: [testbed-manager]
2026-02-08 02:57:59.222781 | orchestrator | changed: [testbed-node-4]
2026-02-08 02:57:59.222798 | orchestrator | changed: [testbed-node-5]
2026-02-08 02:57:59.222815 | orchestrator | changed: [testbed-node-0]
2026-02-08 02:57:59.222833 | orchestrator | changed: [testbed-node-1]
2026-02-08 02:57:59.222866 | orchestrator | changed: [testbed-node-2]
2026-02-08 02:57:59.222883 | orchestrator |
2026-02-08 02:57:59.222901 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] *********************
2026-02-08 02:57:59.222919 | orchestrator | Sunday 08 February 2026 02:57:50 +0000 (0:00:00.517) 0:00:15.028 *******
2026-02-08 02:57:59.222936 | orchestrator | changed: [testbed-node-3]
2026-02-08 02:57:59.222952 | orchestrator | ok: [testbed-manager]
2026-02-08 02:57:59.222969 | orchestrator | changed: [testbed-node-4]
2026-02-08 02:57:59.222985 | orchestrator | changed: [testbed-node-5]
2026-02-08 02:57:59.223001 | orchestrator | changed: [testbed-node-0]
2026-02-08 02:57:59.223019 | orchestrator | changed: [testbed-node-1]
2026-02-08 02:57:59.223036 | orchestrator | changed: [testbed-node-2]
2026-02-08 02:57:59.223053 | orchestrator |
2026-02-08 02:57:59.223070 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ********
2026-02-08 02:57:59.223089 | orchestrator | Sunday 08 February 2026 02:57:51 +0000 (0:00:01.078) 0:00:16.107 *******
2026-02-08 02:57:59.223107 | orchestrator | ok: [testbed-node-1]
2026-02-08 02:57:59.223126 | orchestrator | ok: [testbed-node-0]
2026-02-08 02:57:59.223145 | orchestrator | ok: [testbed-node-3]
2026-02-08 02:57:59.223163 | orchestrator | ok: [testbed-node-4]
2026-02-08 02:57:59.223212 | orchestrator | ok: [testbed-node-5]
2026-02-08 02:57:59.223231 | orchestrator | ok: [testbed-node-2]
2026-02-08 02:57:59.223249 | orchestrator | ok: [testbed-manager]
2026-02-08 02:57:59.223266 | orchestrator |
2026-02-08 02:57:59.223285 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] ***
2026-02-08 02:57:59.223324 | orchestrator | Sunday 08 February 2026 02:57:52 +0000 (0:00:01.020) 0:00:17.128 *******
2026-02-08 02:57:59.223374 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-08 02:57:59.223397 | orchestrator |
2026-02-08 02:57:59.223416 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] *************
2026-02-08 02:57:59.223434 | orchestrator | Sunday 08 February 2026 02:57:53 +0000 (0:00:00.351) 0:00:17.480 *******
2026-02-08 02:57:59.223451 | orchestrator | skipping: [testbed-manager]
2026-02-08 02:57:59.223469 | orchestrator | changed: [testbed-node-1]
2026-02-08 02:57:59.223488 | orchestrator | changed: [testbed-node-3]
2026-02-08 02:57:59.223507 | orchestrator | changed: [testbed-node-0]
2026-02-08 02:57:59.223525 | orchestrator | changed: [testbed-node-2]
2026-02-08 02:57:59.223544 | orchestrator | changed: [testbed-node-4]
2026-02-08 02:57:59.223562 | orchestrator | changed: [testbed-node-5]
2026-02-08 02:57:59.223580 | orchestrator |
2026-02-08 02:57:59.223598 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] ***
2026-02-08 02:57:59.223616 | orchestrator | Sunday 08 February 2026 02:57:54 +0000 (0:00:01.252) 0:00:18.732 *******
2026-02-08 02:57:59.223636 | orchestrator | ok: [testbed-manager]
2026-02-08 02:57:59.223654 | orchestrator | ok: [testbed-node-3]
2026-02-08 02:57:59.223672 | orchestrator | ok: [testbed-node-4]
2026-02-08 02:57:59.223689 | orchestrator | ok: [testbed-node-5]
2026-02-08 02:57:59.223700 | orchestrator | ok: [testbed-node-0]
2026-02-08 02:57:59.223711 | orchestrator | ok: [testbed-node-1]
2026-02-08 02:57:59.223722 | orchestrator | ok: [testbed-node-2]
2026-02-08 02:57:59.223733 | orchestrator |
2026-02-08 02:57:59.223744 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] ***
2026-02-08 02:57:59.223754 | orchestrator | Sunday 08 February 2026 02:57:54 +0000 (0:00:00.257) 0:00:18.990 *******
2026-02-08 02:57:59.223765 | orchestrator | ok: [testbed-manager]
2026-02-08 02:57:59.223776 | orchestrator | ok: [testbed-node-3]
2026-02-08 02:57:59.223786 | orchestrator | ok: [testbed-node-4]
2026-02-08 02:57:59.223797 | orchestrator | ok: [testbed-node-5]
2026-02-08 02:57:59.223808 | orchestrator | ok: [testbed-node-0]
2026-02-08 02:57:59.223818 | orchestrator | ok: [testbed-node-1]
2026-02-08 02:57:59.223829 | orchestrator | ok: [testbed-node-2]
2026-02-08 02:57:59.223839 | orchestrator |
2026-02-08 02:57:59.223850 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ******************
2026-02-08 02:57:59.223861 | orchestrator | Sunday 08 February 2026 02:57:54 +0000 (0:00:00.295) 0:00:19.286 *******
2026-02-08 02:57:59.223872 | orchestrator | ok: [testbed-manager]
2026-02-08 02:57:59.223883 | orchestrator | ok: [testbed-node-3]
2026-02-08 02:57:59.223893 | orchestrator | ok: [testbed-node-4]
2026-02-08 02:57:59.223904 | orchestrator | ok: [testbed-node-5]
2026-02-08 02:57:59.223914 | orchestrator | ok: [testbed-node-0]
2026-02-08 02:57:59.223925 | orchestrator | ok: [testbed-node-1]
2026-02-08 02:57:59.223935 | orchestrator | ok: [testbed-node-2]
2026-02-08 02:57:59.223946 | orchestrator |
2026-02-08 02:57:59.223957 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] ***
2026-02-08 02:57:59.223967 | orchestrator | Sunday 08 February 2026 02:57:55 +0000 (0:00:00.240) 0:00:19.527 *******
2026-02-08 02:57:59.223979 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-08 02:57:59.223992 | orchestrator |
2026-02-08 02:57:59.224003 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] *****
2026-02-08 02:57:59.224013 | orchestrator | Sunday 08 February 2026 02:57:55 +0000 (0:00:00.310) 0:00:19.837 *******
2026-02-08 02:57:59.224024 | orchestrator | ok: [testbed-manager]
2026-02-08 02:57:59.224035 | orchestrator | ok: [testbed-node-3]
2026-02-08 02:57:59.224058 | orchestrator | ok: [testbed-node-4]
2026-02-08 02:57:59.224069 | orchestrator | ok: [testbed-node-5]
2026-02-08 02:57:59.224080 | orchestrator | ok: [testbed-node-0]
2026-02-08 02:57:59.224090 | orchestrator | ok: [testbed-node-1]
2026-02-08 02:57:59.224101 | orchestrator | ok: [testbed-node-2]
2026-02-08 02:57:59.224112 | orchestrator |
2026-02-08 02:57:59.224123 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] *************
2026-02-08 02:57:59.224133 | orchestrator | Sunday 08 February 2026 02:57:56 +0000 (0:00:00.626) 0:00:20.464 *******
2026-02-08 02:57:59.224144 | orchestrator | skipping: [testbed-manager]
2026-02-08 02:57:59.224155 | orchestrator | skipping: [testbed-node-3]
2026-02-08 02:57:59.224166 | orchestrator | skipping: [testbed-node-4]
2026-02-08 02:57:59.224197 | orchestrator | skipping: [testbed-node-5]
2026-02-08 02:57:59.224208 | orchestrator | skipping: [testbed-node-0]
2026-02-08 02:57:59.224217 | orchestrator | skipping: [testbed-node-1]
2026-02-08 02:57:59.224226 | orchestrator | skipping: [testbed-node-2]
2026-02-08 02:57:59.224236 | orchestrator |
2026-02-08 02:57:59.224246 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] ***************
2026-02-08 02:57:59.224256 | orchestrator | Sunday 08 February 2026 02:57:56 +0000 (0:00:00.261) 0:00:20.725 *******
2026-02-08 02:57:59.224265 | orchestrator | ok: [testbed-node-3]
2026-02-08 02:57:59.224275 | orchestrator | ok: [testbed-manager]
2026-02-08 02:57:59.224284 | orchestrator | ok: [testbed-node-4]
2026-02-08 02:57:59.224294 | orchestrator | ok: [testbed-node-5]
2026-02-08 02:57:59.224303 | orchestrator | changed: [testbed-node-1]
2026-02-08 02:57:59.224313 | orchestrator | changed: [testbed-node-0]
2026-02-08 02:57:59.224322 | orchestrator | changed: [testbed-node-2]
2026-02-08 02:57:59.224331 | orchestrator |
2026-02-08 02:57:59.224341 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] *********************
2026-02-08 02:57:59.224351 | orchestrator | Sunday 08 February 2026 02:57:57 +0000 (0:00:01.077) 0:00:21.803 *******
2026-02-08 02:57:59.224360 | orchestrator | ok: [testbed-node-3]
2026-02-08 02:57:59.224370 | orchestrator | ok: [testbed-node-4]
2026-02-08 02:57:59.224379 | orchestrator | ok: [testbed-node-5]
2026-02-08 02:57:59.224389 | orchestrator | ok: [testbed-manager]
2026-02-08 02:57:59.224398 | orchestrator | ok: [testbed-node-0]
2026-02-08 02:57:59.224408 | orchestrator | ok: [testbed-node-1]
2026-02-08 02:57:59.224417 | orchestrator | ok: [testbed-node-2]
2026-02-08 02:57:59.224426 | orchestrator |
2026-02-08 02:57:59.224436 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] *********************
2026-02-08 02:57:59.224446 | orchestrator | Sunday 08 February 2026 02:57:58 +0000 (0:00:00.577) 0:00:22.380 *******
2026-02-08 02:57:59.224456 | orchestrator | ok: [testbed-manager]
2026-02-08 02:57:59.224465 | orchestrator | ok: [testbed-node-3]
2026-02-08 02:57:59.224475 | orchestrator | ok: [testbed-node-4]
2026-02-08 02:57:59.224484 | orchestrator | ok: [testbed-node-5]
2026-02-08 02:57:59.224504 | orchestrator | changed: [testbed-node-0]
2026-02-08 02:58:39.284176 | orchestrator | changed: [testbed-node-1]
2026-02-08 02:58:39.284336 | orchestrator | changed: [testbed-node-2]
2026-02-08 02:58:39.284349 | orchestrator |
2026-02-08 02:58:39.284359 | orchestrator | TASK [osism.commons.repository : Update package cache] *************************
2026-02-08 02:58:39.284385 | orchestrator | Sunday 08 February 2026 02:57:59 +0000 (0:00:01.144) 0:00:23.524 *******
2026-02-08 02:58:39.284393 | orchestrator | ok: [testbed-node-3]
2026-02-08 02:58:39.284402 | orchestrator | ok: [testbed-node-4]
2026-02-08 02:58:39.284436 | orchestrator | ok: [testbed-node-5]
2026-02-08 02:58:39.284449 | orchestrator | changed: [testbed-manager]
2026-02-08 02:58:39.284463 | orchestrator | changed: [testbed-node-1]
2026-02-08 02:58:39.284475 | orchestrator | changed: [testbed-node-0]
2026-02-08 02:58:39.284488 | orchestrator | changed: [testbed-node-2]
2026-02-08 02:58:39.284500 | orchestrator |
2026-02-08 02:58:39.284513 | orchestrator | TASK [osism.services.rsyslog : Gather variables for each operating system] *****
2026-02-08 02:58:39.284526 | orchestrator | Sunday 08 February 2026 02:58:14 +0000 (0:00:15.180) 0:00:38.705 *******
2026-02-08 02:58:39.284538 | orchestrator | ok: [testbed-manager]
2026-02-08 02:58:39.284604 | orchestrator | ok: [testbed-node-3]
2026-02-08 02:58:39.284613 | orchestrator | ok: [testbed-node-4]
2026-02-08 02:58:39.284620 | orchestrator | ok: [testbed-node-5]
2026-02-08 02:58:39.284641 | orchestrator | ok: [testbed-node-0]
2026-02-08 02:58:39.284648 | orchestrator | ok: [testbed-node-1]
2026-02-08 02:58:39.284655 | orchestrator | ok: [testbed-node-2]
2026-02-08 02:58:39.284662 | orchestrator |
2026-02-08 02:58:39.284670 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_user variable to default value] *****
2026-02-08 02:58:39.284677 | orchestrator | Sunday 08 February 2026 02:58:14 +0000 (0:00:00.259) 0:00:38.964 *******
2026-02-08 02:58:39.284684 | orchestrator | ok: [testbed-manager]
2026-02-08 02:58:39.284691 | orchestrator | ok: [testbed-node-3]
2026-02-08 02:58:39.284699 | orchestrator | ok: [testbed-node-4]
2026-02-08 02:58:39.284706 | orchestrator | ok: [testbed-node-5]
2026-02-08 02:58:39.284713 | orchestrator | ok: [testbed-node-0]
2026-02-08 02:58:39.284720 | orchestrator | ok: [testbed-node-1]
2026-02-08 02:58:39.284727 | orchestrator | ok: [testbed-node-2]
2026-02-08 02:58:39.284736 | orchestrator |
2026-02-08 02:58:39.284744 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_workdir variable to default value] ***
2026-02-08 02:58:39.284752 | orchestrator | Sunday 08 February 2026 02:58:14 +0000 (0:00:00.253) 0:00:39.217 *******
2026-02-08 02:58:39.284761 | orchestrator | ok: [testbed-manager]
2026-02-08 02:58:39.284769 | orchestrator | ok: [testbed-node-3]
2026-02-08 02:58:39.284777 | orchestrator | ok: [testbed-node-4]
2026-02-08 02:58:39.284785 | orchestrator | ok: [testbed-node-5]
2026-02-08 02:58:39.284794 | orchestrator | ok: [testbed-node-0]
2026-02-08 02:58:39.284803 | orchestrator | ok: [testbed-node-1]
2026-02-08 02:58:39.284812 | orchestrator | ok: [testbed-node-2]
2026-02-08 02:58:39.284820 | orchestrator |
2026-02-08 02:58:39.284829 | orchestrator | TASK [osism.services.rsyslog : Include distribution specific install tasks] ****
2026-02-08 02:58:39.284837 | orchestrator | Sunday 08 February 2026 02:58:15 +0000 (0:00:00.263) 0:00:39.481 *******
2026-02-08
02:58:39.284848 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-08 02:58:39.284859 | orchestrator | 2026-02-08 02:58:39.284867 | orchestrator | TASK [osism.services.rsyslog : Install rsyslog package] ************************ 2026-02-08 02:58:39.284876 | orchestrator | Sunday 08 February 2026 02:58:15 +0000 (0:00:00.339) 0:00:39.820 ******* 2026-02-08 02:58:39.284884 | orchestrator | ok: [testbed-node-3] 2026-02-08 02:58:39.284893 | orchestrator | ok: [testbed-node-4] 2026-02-08 02:58:39.284901 | orchestrator | ok: [testbed-node-0] 2026-02-08 02:58:39.284909 | orchestrator | ok: [testbed-node-2] 2026-02-08 02:58:39.284917 | orchestrator | ok: [testbed-node-1] 2026-02-08 02:58:39.284925 | orchestrator | ok: [testbed-node-5] 2026-02-08 02:58:39.284932 | orchestrator | ok: [testbed-manager] 2026-02-08 02:58:39.284939 | orchestrator | 2026-02-08 02:58:39.284947 | orchestrator | TASK [osism.services.rsyslog : Copy rsyslog.conf configuration file] *********** 2026-02-08 02:58:39.284954 | orchestrator | Sunday 08 February 2026 02:58:16 +0000 (0:00:01.436) 0:00:41.256 ******* 2026-02-08 02:58:39.284961 | orchestrator | changed: [testbed-node-3] 2026-02-08 02:58:39.284968 | orchestrator | changed: [testbed-node-4] 2026-02-08 02:58:39.284975 | orchestrator | changed: [testbed-manager] 2026-02-08 02:58:39.284982 | orchestrator | changed: [testbed-node-0] 2026-02-08 02:58:39.285014 | orchestrator | changed: [testbed-node-5] 2026-02-08 02:58:39.285022 | orchestrator | changed: [testbed-node-1] 2026-02-08 02:58:39.285029 | orchestrator | changed: [testbed-node-2] 2026-02-08 02:58:39.285050 | orchestrator | 2026-02-08 02:58:39.285068 | orchestrator | TASK [osism.services.rsyslog : Manage rsyslog service] ************************* 2026-02-08 02:58:39.285082 | 
orchestrator | Sunday 08 February 2026 02:58:17 +0000 (0:00:01.022) 0:00:42.278 ******* 2026-02-08 02:58:39.285089 | orchestrator | ok: [testbed-node-3] 2026-02-08 02:58:39.285096 | orchestrator | ok: [testbed-node-4] 2026-02-08 02:58:39.285103 | orchestrator | ok: [testbed-manager] 2026-02-08 02:58:39.285117 | orchestrator | ok: [testbed-node-5] 2026-02-08 02:58:39.285124 | orchestrator | ok: [testbed-node-0] 2026-02-08 02:58:39.285131 | orchestrator | ok: [testbed-node-1] 2026-02-08 02:58:39.285138 | orchestrator | ok: [testbed-node-2] 2026-02-08 02:58:39.285145 | orchestrator | 2026-02-08 02:58:39.285152 | orchestrator | TASK [osism.services.rsyslog : Include fluentd tasks] ************************** 2026-02-08 02:58:39.285160 | orchestrator | Sunday 08 February 2026 02:58:18 +0000 (0:00:00.755) 0:00:43.034 ******* 2026-02-08 02:58:39.285167 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/fluentd.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-08 02:58:39.285177 | orchestrator | 2026-02-08 02:58:39.285184 | orchestrator | TASK [osism.services.rsyslog : Forward syslog message to local fluentd daemon] *** 2026-02-08 02:58:39.285224 | orchestrator | Sunday 08 February 2026 02:58:19 +0000 (0:00:00.308) 0:00:43.343 ******* 2026-02-08 02:58:39.285233 | orchestrator | changed: [testbed-manager] 2026-02-08 02:58:39.285240 | orchestrator | changed: [testbed-node-3] 2026-02-08 02:58:39.285248 | orchestrator | changed: [testbed-node-4] 2026-02-08 02:58:39.285255 | orchestrator | changed: [testbed-node-5] 2026-02-08 02:58:39.285262 | orchestrator | changed: [testbed-node-1] 2026-02-08 02:58:39.285269 | orchestrator | changed: [testbed-node-0] 2026-02-08 02:58:39.285276 | orchestrator | changed: [testbed-node-2] 2026-02-08 02:58:39.285283 | orchestrator | 2026-02-08 02:58:39.285306 | orchestrator | TASK [osism.services.rsyslog : 
Include additional log server tasks] ************ 2026-02-08 02:58:39.285313 | orchestrator | Sunday 08 February 2026 02:58:20 +0000 (0:00:00.994) 0:00:44.338 ******* 2026-02-08 02:58:39.285320 | orchestrator | skipping: [testbed-manager] 2026-02-08 02:58:39.285328 | orchestrator | skipping: [testbed-node-3] 2026-02-08 02:58:39.285335 | orchestrator | skipping: [testbed-node-4] 2026-02-08 02:58:39.285342 | orchestrator | skipping: [testbed-node-5] 2026-02-08 02:58:39.285349 | orchestrator | skipping: [testbed-node-0] 2026-02-08 02:58:39.285356 | orchestrator | skipping: [testbed-node-1] 2026-02-08 02:58:39.285363 | orchestrator | skipping: [testbed-node-2] 2026-02-08 02:58:39.285370 | orchestrator | 2026-02-08 02:58:39.285388 | orchestrator | TASK [osism.services.rsyslog : Include logrotate tasks] ************************ 2026-02-08 02:58:39.285410 | orchestrator | Sunday 08 February 2026 02:58:20 +0000 (0:00:00.256) 0:00:44.594 ******* 2026-02-08 02:58:39.285418 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/logrotate.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-08 02:58:39.285425 | orchestrator | 2026-02-08 02:58:39.285433 | orchestrator | TASK [osism.services.rsyslog : Ensure logrotate package is installed] ********** 2026-02-08 02:58:39.285440 | orchestrator | Sunday 08 February 2026 02:58:20 +0000 (0:00:00.315) 0:00:44.910 ******* 2026-02-08 02:58:39.285447 | orchestrator | ok: [testbed-manager] 2026-02-08 02:58:39.285454 | orchestrator | ok: [testbed-node-4] 2026-02-08 02:58:39.285461 | orchestrator | ok: [testbed-node-3] 2026-02-08 02:58:39.285468 | orchestrator | ok: [testbed-node-5] 2026-02-08 02:58:39.285476 | orchestrator | ok: [testbed-node-0] 2026-02-08 02:58:39.285483 | orchestrator | ok: [testbed-node-2] 2026-02-08 02:58:39.285490 | orchestrator | ok: [testbed-node-1] 2026-02-08 02:58:39.285497 | 
orchestrator | 2026-02-08 02:58:39.285504 | orchestrator | TASK [osism.services.rsyslog : Configure logrotate for rsyslog] **************** 2026-02-08 02:58:39.285533 | orchestrator | Sunday 08 February 2026 02:58:22 +0000 (0:00:01.617) 0:00:46.527 ******* 2026-02-08 02:58:39.285558 | orchestrator | changed: [testbed-manager] 2026-02-08 02:58:39.285566 | orchestrator | changed: [testbed-node-3] 2026-02-08 02:58:39.285573 | orchestrator | changed: [testbed-node-4] 2026-02-08 02:58:39.285601 | orchestrator | changed: [testbed-node-0] 2026-02-08 02:58:39.285608 | orchestrator | changed: [testbed-node-5] 2026-02-08 02:58:39.285616 | orchestrator | changed: [testbed-node-1] 2026-02-08 02:58:39.285623 | orchestrator | changed: [testbed-node-2] 2026-02-08 02:58:39.285636 | orchestrator | 2026-02-08 02:58:39.285644 | orchestrator | TASK [osism.commons.systohc : Install util-linux-extra package] **************** 2026-02-08 02:58:39.285651 | orchestrator | Sunday 08 February 2026 02:58:23 +0000 (0:00:01.105) 0:00:47.633 ******* 2026-02-08 02:58:39.285658 | orchestrator | changed: [testbed-node-5] 2026-02-08 02:58:39.285666 | orchestrator | changed: [testbed-node-3] 2026-02-08 02:58:39.285673 | orchestrator | changed: [testbed-node-4] 2026-02-08 02:58:39.285680 | orchestrator | changed: [testbed-node-0] 2026-02-08 02:58:39.285687 | orchestrator | changed: [testbed-node-2] 2026-02-08 02:58:39.285694 | orchestrator | changed: [testbed-node-1] 2026-02-08 02:58:39.285701 | orchestrator | changed: [testbed-manager] 2026-02-08 02:58:39.285708 | orchestrator | 2026-02-08 02:58:39.285715 | orchestrator | TASK [osism.commons.systohc : Sync hardware clock] ***************************** 2026-02-08 02:58:39.285723 | orchestrator | Sunday 08 February 2026 02:58:36 +0000 (0:00:12.743) 0:01:00.376 ******* 2026-02-08 02:58:39.285730 | orchestrator | ok: [testbed-node-2] 2026-02-08 02:58:39.285737 | orchestrator | ok: [testbed-node-4] 2026-02-08 02:58:39.285744 | orchestrator | ok: 
[testbed-manager] 2026-02-08 02:58:39.285751 | orchestrator | ok: [testbed-node-5] 2026-02-08 02:58:39.285758 | orchestrator | ok: [testbed-node-3] 2026-02-08 02:58:39.285765 | orchestrator | ok: [testbed-node-1] 2026-02-08 02:58:39.285773 | orchestrator | ok: [testbed-node-0] 2026-02-08 02:58:39.285780 | orchestrator | 2026-02-08 02:58:39.285787 | orchestrator | TASK [osism.commons.configfs : Start sys-kernel-config mount] ****************** 2026-02-08 02:58:39.285794 | orchestrator | Sunday 08 February 2026 02:58:37 +0000 (0:00:01.597) 0:01:01.974 ******* 2026-02-08 02:58:39.285802 | orchestrator | ok: [testbed-node-3] 2026-02-08 02:58:39.285809 | orchestrator | ok: [testbed-node-4] 2026-02-08 02:58:39.285816 | orchestrator | ok: [testbed-manager] 2026-02-08 02:58:39.285823 | orchestrator | ok: [testbed-node-5] 2026-02-08 02:58:39.285830 | orchestrator | ok: [testbed-node-1] 2026-02-08 02:58:39.285837 | orchestrator | ok: [testbed-node-0] 2026-02-08 02:58:39.285844 | orchestrator | ok: [testbed-node-2] 2026-02-08 02:58:39.285851 | orchestrator | 2026-02-08 02:58:39.285858 | orchestrator | TASK [osism.commons.packages : Gather variables for each operating system] ***** 2026-02-08 02:58:39.285865 | orchestrator | Sunday 08 February 2026 02:58:38 +0000 (0:00:00.851) 0:01:02.825 ******* 2026-02-08 02:58:39.285877 | orchestrator | ok: [testbed-manager] 2026-02-08 02:58:39.285885 | orchestrator | ok: [testbed-node-3] 2026-02-08 02:58:39.285892 | orchestrator | ok: [testbed-node-4] 2026-02-08 02:58:39.285899 | orchestrator | ok: [testbed-node-5] 2026-02-08 02:58:39.285906 | orchestrator | ok: [testbed-node-0] 2026-02-08 02:58:39.285913 | orchestrator | ok: [testbed-node-1] 2026-02-08 02:58:39.285920 | orchestrator | ok: [testbed-node-2] 2026-02-08 02:58:39.285927 | orchestrator | 2026-02-08 02:58:39.285934 | orchestrator | TASK [osism.commons.packages : Set required_packages_distribution variable to default value] *** 2026-02-08 02:58:39.285941 | orchestrator | Sunday 
08 February 2026 02:58:38 +0000 (0:00:00.216) 0:01:03.041 ******* 2026-02-08 02:58:39.285948 | orchestrator | ok: [testbed-manager] 2026-02-08 02:58:39.285955 | orchestrator | ok: [testbed-node-3] 2026-02-08 02:58:39.285962 | orchestrator | ok: [testbed-node-4] 2026-02-08 02:58:39.285969 | orchestrator | ok: [testbed-node-5] 2026-02-08 02:58:39.285977 | orchestrator | ok: [testbed-node-0] 2026-02-08 02:58:39.285984 | orchestrator | ok: [testbed-node-1] 2026-02-08 02:58:39.285991 | orchestrator | ok: [testbed-node-2] 2026-02-08 02:58:39.285998 | orchestrator | 2026-02-08 02:58:39.286005 | orchestrator | TASK [osism.commons.packages : Include distribution specific package tasks] **** 2026-02-08 02:58:39.286012 | orchestrator | Sunday 08 February 2026 02:58:38 +0000 (0:00:00.238) 0:01:03.279 ******* 2026-02-08 02:58:39.286114 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/packages/tasks/package-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-08 02:58:39.286128 | orchestrator | 2026-02-08 02:58:39.286149 | orchestrator | TASK [osism.commons.packages : Install needrestart package] ******************** 2026-02-08 03:01:08.578617 | orchestrator | Sunday 08 February 2026 02:58:39 +0000 (0:00:00.307) 0:01:03.587 ******* 2026-02-08 03:01:08.578764 | orchestrator | ok: [testbed-manager] 2026-02-08 03:01:08.578797 | orchestrator | ok: [testbed-node-3] 2026-02-08 03:01:08.578816 | orchestrator | ok: [testbed-node-5] 2026-02-08 03:01:08.578835 | orchestrator | ok: [testbed-node-4] 2026-02-08 03:01:08.578853 | orchestrator | ok: [testbed-node-0] 2026-02-08 03:01:08.578872 | orchestrator | ok: [testbed-node-1] 2026-02-08 03:01:08.578889 | orchestrator | ok: [testbed-node-2] 2026-02-08 03:01:08.578908 | orchestrator | 2026-02-08 03:01:08.578928 | orchestrator | TASK [osism.commons.packages : Set needrestart mode] 
*************************** 2026-02-08 03:01:08.578947 | orchestrator | Sunday 08 February 2026 02:58:40 +0000 (0:00:01.514) 0:01:05.101 ******* 2026-02-08 03:01:08.578967 | orchestrator | changed: [testbed-manager] 2026-02-08 03:01:08.578986 | orchestrator | changed: [testbed-node-2] 2026-02-08 03:01:08.579004 | orchestrator | changed: [testbed-node-0] 2026-02-08 03:01:08.579022 | orchestrator | changed: [testbed-node-1] 2026-02-08 03:01:08.579040 | orchestrator | changed: [testbed-node-5] 2026-02-08 03:01:08.579058 | orchestrator | changed: [testbed-node-4] 2026-02-08 03:01:08.579074 | orchestrator | changed: [testbed-node-3] 2026-02-08 03:01:08.579111 | orchestrator | 2026-02-08 03:01:08.579131 | orchestrator | TASK [osism.commons.packages : Set apt_cache_valid_time variable to default value] *** 2026-02-08 03:01:08.579153 | orchestrator | Sunday 08 February 2026 02:58:41 +0000 (0:00:00.543) 0:01:05.645 ******* 2026-02-08 03:01:08.579170 | orchestrator | ok: [testbed-manager] 2026-02-08 03:01:08.579189 | orchestrator | ok: [testbed-node-3] 2026-02-08 03:01:08.579238 | orchestrator | ok: [testbed-node-4] 2026-02-08 03:01:08.579260 | orchestrator | ok: [testbed-node-5] 2026-02-08 03:01:08.579277 | orchestrator | ok: [testbed-node-0] 2026-02-08 03:01:08.579294 | orchestrator | ok: [testbed-node-1] 2026-02-08 03:01:08.579313 | orchestrator | ok: [testbed-node-2] 2026-02-08 03:01:08.579331 | orchestrator | 2026-02-08 03:01:08.579352 | orchestrator | TASK [osism.commons.packages : Update package cache] *************************** 2026-02-08 03:01:08.579372 | orchestrator | Sunday 08 February 2026 02:58:41 +0000 (0:00:00.239) 0:01:05.884 ******* 2026-02-08 03:01:08.579392 | orchestrator | ok: [testbed-node-3] 2026-02-08 03:01:08.579412 | orchestrator | ok: [testbed-manager] 2026-02-08 03:01:08.579431 | orchestrator | ok: [testbed-node-4] 2026-02-08 03:01:08.579448 | orchestrator | ok: [testbed-node-5] 2026-02-08 03:01:08.579465 | orchestrator | ok: [testbed-node-1] 
2026-02-08 03:01:08.579483 | orchestrator | ok: [testbed-node-0]
2026-02-08 03:01:08.579501 | orchestrator | ok: [testbed-node-2]
2026-02-08 03:01:08.579519 | orchestrator |
2026-02-08 03:01:08.579535 | orchestrator | TASK [osism.commons.packages : Download upgrade packages] **********************
2026-02-08 03:01:08.579553 | orchestrator | Sunday 08 February 2026 02:58:42 +0000 (0:00:01.087) 0:01:06.972 *******
2026-02-08 03:01:08.579571 | orchestrator | changed: [testbed-manager]
2026-02-08 03:01:08.579589 | orchestrator | changed: [testbed-node-3]
2026-02-08 03:01:08.579608 | orchestrator | changed: [testbed-node-0]
2026-02-08 03:01:08.579626 | orchestrator | changed: [testbed-node-1]
2026-02-08 03:01:08.579641 | orchestrator | changed: [testbed-node-4]
2026-02-08 03:01:08.579657 | orchestrator | changed: [testbed-node-5]
2026-02-08 03:01:08.579674 | orchestrator | changed: [testbed-node-2]
2026-02-08 03:01:08.579691 | orchestrator |
2026-02-08 03:01:08.579713 | orchestrator | TASK [osism.commons.packages : Upgrade packages] *******************************
2026-02-08 03:01:08.579731 | orchestrator | Sunday 08 February 2026 02:58:44 +0000 (0:00:01.597) 0:01:08.570 *******
2026-02-08 03:01:08.579748 | orchestrator | ok: [testbed-manager]
2026-02-08 03:01:08.579765 | orchestrator | ok: [testbed-node-3]
2026-02-08 03:01:08.579781 | orchestrator | ok: [testbed-node-5]
2026-02-08 03:01:08.579800 | orchestrator | ok: [testbed-node-0]
2026-02-08 03:01:08.579816 | orchestrator | ok: [testbed-node-1]
2026-02-08 03:01:08.579832 | orchestrator | ok: [testbed-node-4]
2026-02-08 03:01:08.579849 | orchestrator | ok: [testbed-node-2]
2026-02-08 03:01:08.579865 | orchestrator |
2026-02-08 03:01:08.579882 | orchestrator | TASK [osism.commons.packages : Download required packages] *********************
2026-02-08 03:01:08.579937 | orchestrator | Sunday 08 February 2026 02:58:46 +0000 (0:00:02.436) 0:01:11.007 *******
2026-02-08 03:01:08.579957 | orchestrator | ok: [testbed-manager]
2026-02-08 03:01:08.579973 | orchestrator | ok: [testbed-node-2]
2026-02-08 03:01:08.579990 | orchestrator | ok: [testbed-node-3]
2026-02-08 03:01:08.580008 | orchestrator | ok: [testbed-node-0]
2026-02-08 03:01:08.580026 | orchestrator | ok: [testbed-node-1]
2026-02-08 03:01:08.580045 | orchestrator | ok: [testbed-node-5]
2026-02-08 03:01:08.580064 | orchestrator | ok: [testbed-node-4]
2026-02-08 03:01:08.580083 | orchestrator |
2026-02-08 03:01:08.580101 | orchestrator | TASK [osism.commons.packages : Install required packages] **********************
2026-02-08 03:01:08.580120 | orchestrator | Sunday 08 February 2026 02:59:31 +0000 (0:00:45.189) 0:01:56.197 *******
2026-02-08 03:01:08.580137 | orchestrator | changed: [testbed-manager]
2026-02-08 03:01:08.580155 | orchestrator | changed: [testbed-node-3]
2026-02-08 03:01:08.580173 | orchestrator | changed: [testbed-node-4]
2026-02-08 03:01:08.580191 | orchestrator | changed: [testbed-node-1]
2026-02-08 03:01:08.580235 | orchestrator | changed: [testbed-node-5]
2026-02-08 03:01:08.580254 | orchestrator | changed: [testbed-node-2]
2026-02-08 03:01:08.580272 | orchestrator | changed: [testbed-node-0]
2026-02-08 03:01:08.580292 | orchestrator |
2026-02-08 03:01:08.580311 | orchestrator | TASK [osism.commons.packages : Remove useless packages from the cache] *********
2026-02-08 03:01:08.580328 | orchestrator | Sunday 08 February 2026 03:00:52 +0000 (0:01:20.571) 0:03:16.768 *******
2026-02-08 03:01:08.580344 | orchestrator | ok: [testbed-manager]
2026-02-08 03:01:08.580362 | orchestrator | ok: [testbed-node-4]
2026-02-08 03:01:08.580378 | orchestrator | ok: [testbed-node-3]
2026-02-08 03:01:08.580395 | orchestrator | ok: [testbed-node-0]
2026-02-08 03:01:08.580413 | orchestrator | ok: [testbed-node-5]
2026-02-08 03:01:08.580430 | orchestrator | ok: [testbed-node-1]
2026-02-08 03:01:08.580449 | orchestrator | ok: [testbed-node-2]
2026-02-08 03:01:08.580466 | orchestrator |
2026-02-08 03:01:08.580483 | orchestrator | TASK [osism.commons.packages : Remove dependencies that are no longer required] ***
2026-02-08 03:01:08.580499 | orchestrator | Sunday 08 February 2026 03:00:54 +0000 (0:00:01.586) 0:03:18.355 *******
2026-02-08 03:01:08.580517 | orchestrator | ok: [testbed-node-3]
2026-02-08 03:01:08.580535 | orchestrator | ok: [testbed-node-4]
2026-02-08 03:01:08.580551 | orchestrator | ok: [testbed-node-5]
2026-02-08 03:01:08.580566 | orchestrator | ok: [testbed-node-0]
2026-02-08 03:01:08.580583 | orchestrator | ok: [testbed-node-1]
2026-02-08 03:01:08.580601 | orchestrator | ok: [testbed-node-2]
2026-02-08 03:01:08.580619 | orchestrator | changed: [testbed-manager]
2026-02-08 03:01:08.580637 | orchestrator |
2026-02-08 03:01:08.580654 | orchestrator | TASK [osism.commons.sysctl : Include sysctl tasks] *****************************
2026-02-08 03:01:08.580671 | orchestrator | Sunday 08 February 2026 03:01:07 +0000 (0:00:13.280) 0:03:31.635 *******
2026-02-08 03:01:08.580743 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'elasticsearch', 'value': [{'name': 'vm.max_map_count', 'value': 262144}]})
2026-02-08 03:01:08.580781 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'rabbitmq', 'value': [{'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}, {'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}, {'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}, {'name': 'net.core.wmem_max', 'value': 16777216}, {'name': 'net.core.rmem_max', 'value': 16777216}, {'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}, {'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}, {'name': 'net.core.somaxconn', 'value': 4096}, {'name': 'net.ipv4.tcp_syncookies', 'value': 0}, {'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}]})
2026-02-08 03:01:08.580843 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'generic', 'value': [{'name': 'vm.swappiness', 'value': 1}]})
2026-02-08 03:01:08.580866 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'compute', 'value': [{'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}]})
2026-02-08 03:01:08.580885 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'network', 'value': [{'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}]})
2026-02-08 03:01:08.580904 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'k3s_node', 'value': [{'name': 'fs.inotify.max_user_instances', 'value': 1024}]})
2026-02-08 03:01:08.580925 | orchestrator |
2026-02-08 03:01:08.580945 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on elasticsearch] ***********
2026-02-08 03:01:08.580982 | orchestrator | Sunday 08 February 2026 03:01:07 +0000 (0:00:00.436) 0:03:32.072 *******
2026-02-08 03:01:08.581004 | orchestrator | skipping: [testbed-manager] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-02-08 03:01:08.581022 | orchestrator | skipping: [testbed-manager]
2026-02-08 03:01:08.581040 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-02-08 03:01:08.581059 | orchestrator | skipping: [testbed-node-3]
2026-02-08 03:01:08.581079 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-02-08 03:01:08.581105 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-02-08 03:01:08.581125 | orchestrator | skipping: [testbed-node-4]
2026-02-08 03:01:08.581144 | orchestrator | skipping: [testbed-node-5]
2026-02-08 03:01:08.581163 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-02-08 03:01:08.581182 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-02-08 03:01:08.581202 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-02-08 03:01:08.581252 | orchestrator |
2026-02-08 03:01:08.581271 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on rabbitmq] ****************
2026-02-08 03:01:08.581289 | orchestrator | Sunday 08 February 2026 03:01:08 +0000 (0:00:00.705) 0:03:32.778 *******
2026-02-08 03:01:08.581308 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-02-08 03:01:08.581329 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-02-08 03:01:08.581348 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-02-08 03:01:08.581365 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-02-08 03:01:08.581383 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-02-08 03:01:08.581417 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-02-08 03:01:15.068437 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-02-08 03:01:15.068532 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-02-08 03:01:15.068563 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-02-08 03:01:15.068572 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-02-08 03:01:15.068580 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-02-08 03:01:15.068588 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-02-08 03:01:15.068608 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-02-08 03:01:15.068624 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-02-08 03:01:15.068633 | orchestrator | skipping: [testbed-manager]
2026-02-08 03:01:15.068642 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-02-08 03:01:15.068649 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-02-08 03:01:15.068656 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-02-08 03:01:15.068664 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-02-08 03:01:15.068671 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-02-08 03:01:15.068678 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-02-08 03:01:15.068686 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-02-08 03:01:15.068693 | orchestrator | skipping: [testbed-node-3]
2026-02-08 03:01:15.068700 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-02-08 03:01:15.068708 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-02-08 03:01:15.068715 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-02-08 03:01:15.068722 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-02-08 03:01:15.068729 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-02-08 03:01:15.068737 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-02-08 03:01:15.068744 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-02-08 03:01:15.068751 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-02-08 03:01:15.068758 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-02-08 03:01:15.068765 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-02-08 03:01:15.068773 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-02-08 03:01:15.068780 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-02-08 03:01:15.068787 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-02-08 03:01:15.068807 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-02-08 03:01:15.068815 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-02-08 03:01:15.068822 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-02-08 03:01:15.068829 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-02-08 03:01:15.068836 | orchestrator | skipping: [testbed-node-4]
2026-02-08 03:01:15.068844 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-02-08 03:01:15.068857 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-02-08 03:01:15.068865 | orchestrator | skipping: [testbed-node-5]
2026-02-08 03:01:15.068876 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-02-08 03:01:15.068888 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-02-08 03:01:15.068899 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-02-08 03:01:15.068910 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-02-08 03:01:15.068922 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-02-08 03:01:15.068951 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-02-08 03:01:15.068964 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-02-08 03:01:15.068975 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-02-08 03:01:15.068988 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-02-08 03:01:15.069000 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-02-08 03:01:15.069011 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-02-08 03:01:15.069023 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-02-08 03:01:15.069035 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-02-08 03:01:15.069047 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-02-08 03:01:15.069060 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-02-08 03:01:15.069073 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-02-08 03:01:15.069086 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-02-08 03:01:15.069100 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-02-08 03:01:15.069112 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-02-08 03:01:15.069125 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-02-08 03:01:15.069137 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-02-08 03:01:15.069149 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-02-08 03:01:15.069161 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-02-08 03:01:15.069173 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-02-08 03:01:15.069184 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-02-08 03:01:15.069196 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-02-08 03:01:15.069232 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-02-08 03:01:15.069245 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-02-08 03:01:15.069257 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-02-08 03:01:15.069270 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-02-08 03:01:15.069292 | orchestrator |
2026-02-08 03:01:15.069301 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on generic] *****************
2026-02-08 03:01:15.069308 | orchestrator | Sunday 08 February 2026 03:01:12 +0000 (0:00:04.530) 0:03:37.309 *******
2026-02-08 03:01:15.069316 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 1})
2026-02-08 03:01:15.069323 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 1})
2026-02-08 03:01:15.069330 | orchestrator | changed: [testbed-manager] => (item={'name': 'vm.swappiness', 'value': 1})
2026-02-08 03:01:15.069337 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 1})
2026-02-08 03:01:15.069352 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.swappiness', 'value': 1})
2026-02-08 03:01:15.069359 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.swappiness', 'value': 1})
2026-02-08 03:01:15.069366 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.swappiness', 'value': 1})
2026-02-08 03:01:15.069373 | orchestrator |
2026-02-08 03:01:15.069381 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on compute] *****************
2026-02-08 03:01:15.069388 | orchestrator | Sunday 08 February 2026 03:01:13 +0000 (0:00:00.570) 0:03:37.879 *******
2026-02-08 03:01:15.069395 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-02-08 03:01:15.069402 | orchestrator | skipping: [testbed-manager]
2026-02-08 03:01:15.069409 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-02-08 03:01:15.069416 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-02-08 03:01:15.069424 | orchestrator | skipping: [testbed-node-0]
2026-02-08 03:01:15.069431 | orchestrator | skipping: [testbed-node-1]
2026-02-08 03:01:15.069438 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-02-08 03:01:15.069445 | orchestrator | skipping: [testbed-node-2]
2026-02-08 03:01:15.069453 | orchestrator | changed: [testbed-node-3] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-02-08 03:01:15.069460 | orchestrator | changed: [testbed-node-5] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-02-08 03:01:15.069476 | orchestrator | changed: [testbed-node-4] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-02-08 03:01:28.588630 | orchestrator |
2026-02-08 03:01:28.588727 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on network] *****************
2026-02-08 03:01:28.588740 | orchestrator | Sunday 08 February 2026 03:01:15 +0000 (0:00:01.494) 0:03:39.374 *******
2026-02-08 03:01:28.588749 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-02-08
03:01:28.588759 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2026-02-08 03:01:28.588767 | orchestrator | skipping: [testbed-manager] 2026-02-08 03:01:28.588777 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2026-02-08 03:01:28.588798 | orchestrator | skipping: [testbed-node-3] 2026-02-08 03:01:28.588815 | orchestrator | skipping: [testbed-node-4] 2026-02-08 03:01:28.588823 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2026-02-08 03:01:28.588831 | orchestrator | skipping: [testbed-node-5] 2026-02-08 03:01:28.588840 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2026-02-08 03:01:28.588848 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2026-02-08 03:01:28.588856 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2026-02-08 03:01:28.588864 | orchestrator | 2026-02-08 03:01:28.588872 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on k3s_node] **************** 2026-02-08 03:01:28.588901 | orchestrator | Sunday 08 February 2026 03:01:15 +0000 (0:00:00.597) 0:03:39.971 ******* 2026-02-08 03:01:28.588910 | orchestrator | skipping: [testbed-manager] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2026-02-08 03:01:28.588918 | orchestrator | skipping: [testbed-manager] 2026-02-08 03:01:28.588926 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2026-02-08 03:01:28.588934 | orchestrator | skipping: [testbed-node-0] 2026-02-08 03:01:28.588942 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2026-02-08 03:01:28.588950 
| orchestrator | skipping: [testbed-node-1] 2026-02-08 03:01:28.588958 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2026-02-08 03:01:28.588966 | orchestrator | skipping: [testbed-node-2] 2026-02-08 03:01:28.588974 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024}) 2026-02-08 03:01:28.588982 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024}) 2026-02-08 03:01:28.588990 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024}) 2026-02-08 03:01:28.588998 | orchestrator | 2026-02-08 03:01:28.589007 | orchestrator | TASK [osism.commons.limits : Include limits tasks] ***************************** 2026-02-08 03:01:28.589014 | orchestrator | Sunday 08 February 2026 03:01:16 +0000 (0:00:00.602) 0:03:40.574 ******* 2026-02-08 03:01:28.589022 | orchestrator | skipping: [testbed-manager] 2026-02-08 03:01:28.589030 | orchestrator | skipping: [testbed-node-3] 2026-02-08 03:01:28.589038 | orchestrator | skipping: [testbed-node-4] 2026-02-08 03:01:28.589046 | orchestrator | skipping: [testbed-node-5] 2026-02-08 03:01:28.589054 | orchestrator | skipping: [testbed-node-0] 2026-02-08 03:01:28.589062 | orchestrator | skipping: [testbed-node-1] 2026-02-08 03:01:28.589070 | orchestrator | skipping: [testbed-node-2] 2026-02-08 03:01:28.589077 | orchestrator | 2026-02-08 03:01:28.589085 | orchestrator | TASK [osism.commons.services : Populate service facts] ************************* 2026-02-08 03:01:28.589093 | orchestrator | Sunday 08 February 2026 03:01:16 +0000 (0:00:00.308) 0:03:40.882 ******* 2026-02-08 03:01:28.589101 | orchestrator | ok: [testbed-node-3] 2026-02-08 03:01:28.589110 | orchestrator | ok: [testbed-node-1] 2026-02-08 03:01:28.589118 | orchestrator | ok: [testbed-node-4] 2026-02-08 03:01:28.589127 | orchestrator | ok: [testbed-node-5] 
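
For reference, the group-scoped sysctl tasks above (vm.swappiness=1 everywhere, net.netfilter.nf_conntrack_max=1048576 on compute/network nodes, fs.inotify.max_user_instances=1024 on k3s nodes) follow the usual loop-driven sysctl pattern. A minimal sketch of one such task is below; the variable layout and the `when` condition are assumptions, and only the parameter names and values are taken from the log output:

```yaml
# Sketch of a group-scoped sysctl task matching the parameters shown in
# the log above. The group condition is an assumption; the name/value
# pair comes from the "Set sysctl parameters on k3s_node" task output.
- name: Set sysctl parameters on k3s_node
  ansible.posix.sysctl:
    name: "{{ item.name }}"
    value: "{{ item.value }}"
    state: present
    sysctl_set: true
  loop:
    - { name: fs.inotify.max_user_instances, value: 1024 }
  when: "'k3s_node' in group_names"  # assumed group membership check
```

This explains the per-host skipping seen in the output: hosts outside the targeted group skip every loop item and then the task as a whole.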
2026-02-08 03:01:28.589134 | orchestrator | ok: [testbed-node-0] 2026-02-08 03:01:28.589142 | orchestrator | ok: [testbed-node-2] 2026-02-08 03:01:28.589150 | orchestrator | ok: [testbed-manager] 2026-02-08 03:01:28.589158 | orchestrator | 2026-02-08 03:01:28.589166 | orchestrator | TASK [osism.commons.services : Check services] ********************************* 2026-02-08 03:01:28.589174 | orchestrator | Sunday 08 February 2026 03:01:21 +0000 (0:00:05.374) 0:03:46.257 ******* 2026-02-08 03:01:28.589182 | orchestrator | skipping: [testbed-manager] => (item=nscd)  2026-02-08 03:01:28.589190 | orchestrator | skipping: [testbed-node-3] => (item=nscd)  2026-02-08 03:01:28.589198 | orchestrator | skipping: [testbed-manager] 2026-02-08 03:01:28.589239 | orchestrator | skipping: [testbed-node-4] => (item=nscd)  2026-02-08 03:01:28.589248 | orchestrator | skipping: [testbed-node-3] 2026-02-08 03:01:28.589256 | orchestrator | skipping: [testbed-node-5] => (item=nscd)  2026-02-08 03:01:28.589264 | orchestrator | skipping: [testbed-node-4] 2026-02-08 03:01:28.589272 | orchestrator | skipping: [testbed-node-0] => (item=nscd)  2026-02-08 03:01:28.589280 | orchestrator | skipping: [testbed-node-5] 2026-02-08 03:01:28.589288 | orchestrator | skipping: [testbed-node-0] 2026-02-08 03:01:28.589296 | orchestrator | skipping: [testbed-node-1] => (item=nscd)  2026-02-08 03:01:28.589304 | orchestrator | skipping: [testbed-node-1] 2026-02-08 03:01:28.589312 | orchestrator | skipping: [testbed-node-2] => (item=nscd)  2026-02-08 03:01:28.589320 | orchestrator | skipping: [testbed-node-2] 2026-02-08 03:01:28.589328 | orchestrator | 2026-02-08 03:01:28.589342 | orchestrator | TASK [osism.commons.services : Start/enable required services] ***************** 2026-02-08 03:01:28.589355 | orchestrator | Sunday 08 February 2026 03:01:22 +0000 (0:00:00.304) 0:03:46.562 ******* 2026-02-08 03:01:28.589452 | orchestrator | ok: [testbed-node-3] => (item=cron) 2026-02-08 03:01:28.589468 | orchestrator | 
ok: [testbed-node-4] => (item=cron) 2026-02-08 03:01:28.589482 | orchestrator | ok: [testbed-node-5] => (item=cron) 2026-02-08 03:01:28.589514 | orchestrator | ok: [testbed-node-0] => (item=cron) 2026-02-08 03:01:28.589528 | orchestrator | ok: [testbed-node-1] => (item=cron) 2026-02-08 03:01:28.589540 | orchestrator | ok: [testbed-node-2] => (item=cron) 2026-02-08 03:01:28.589549 | orchestrator | ok: [testbed-manager] => (item=cron) 2026-02-08 03:01:28.589556 | orchestrator | 2026-02-08 03:01:28.589565 | orchestrator | TASK [osism.commons.motd : Include distribution specific configure tasks] ****** 2026-02-08 03:01:28.589573 | orchestrator | Sunday 08 February 2026 03:01:24 +0000 (0:00:01.899) 0:03:48.461 ******* 2026-02-08 03:01:28.589582 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/motd/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-08 03:01:28.589593 | orchestrator | 2026-02-08 03:01:28.589601 | orchestrator | TASK [osism.commons.motd : Remove update-motd package] ************************* 2026-02-08 03:01:28.589609 | orchestrator | Sunday 08 February 2026 03:01:24 +0000 (0:00:00.365) 0:03:48.827 ******* 2026-02-08 03:01:28.589617 | orchestrator | ok: [testbed-manager] 2026-02-08 03:01:28.589625 | orchestrator | ok: [testbed-node-3] 2026-02-08 03:01:28.589633 | orchestrator | ok: [testbed-node-4] 2026-02-08 03:01:28.589640 | orchestrator | ok: [testbed-node-5] 2026-02-08 03:01:28.589648 | orchestrator | ok: [testbed-node-1] 2026-02-08 03:01:28.589656 | orchestrator | ok: [testbed-node-0] 2026-02-08 03:01:28.589664 | orchestrator | ok: [testbed-node-2] 2026-02-08 03:01:28.589672 | orchestrator | 2026-02-08 03:01:28.589680 | orchestrator | TASK [osism.commons.motd : Check if /etc/default/motd-news exists] ************* 2026-02-08 03:01:28.589688 | orchestrator | Sunday 08 February 2026 03:01:25 +0000 
(0:00:01.131) 0:03:49.958 ******* 2026-02-08 03:01:28.589695 | orchestrator | ok: [testbed-manager] 2026-02-08 03:01:28.589703 | orchestrator | ok: [testbed-node-3] 2026-02-08 03:01:28.589711 | orchestrator | ok: [testbed-node-4] 2026-02-08 03:01:28.589719 | orchestrator | ok: [testbed-node-5] 2026-02-08 03:01:28.589726 | orchestrator | ok: [testbed-node-0] 2026-02-08 03:01:28.589734 | orchestrator | ok: [testbed-node-1] 2026-02-08 03:01:28.589742 | orchestrator | ok: [testbed-node-2] 2026-02-08 03:01:28.589750 | orchestrator | 2026-02-08 03:01:28.589757 | orchestrator | TASK [osism.commons.motd : Disable the dynamic motd-news service] ************** 2026-02-08 03:01:28.589765 | orchestrator | Sunday 08 February 2026 03:01:26 +0000 (0:00:00.590) 0:03:50.549 ******* 2026-02-08 03:01:28.589773 | orchestrator | changed: [testbed-manager] 2026-02-08 03:01:28.589781 | orchestrator | changed: [testbed-node-3] 2026-02-08 03:01:28.589789 | orchestrator | changed: [testbed-node-4] 2026-02-08 03:01:28.589797 | orchestrator | changed: [testbed-node-5] 2026-02-08 03:01:28.589805 | orchestrator | changed: [testbed-node-0] 2026-02-08 03:01:28.589813 | orchestrator | changed: [testbed-node-1] 2026-02-08 03:01:28.589820 | orchestrator | changed: [testbed-node-2] 2026-02-08 03:01:28.589828 | orchestrator | 2026-02-08 03:01:28.589836 | orchestrator | TASK [osism.commons.motd : Get all configuration files in /etc/pam.d] ********** 2026-02-08 03:01:28.589844 | orchestrator | Sunday 08 February 2026 03:01:27 +0000 (0:00:00.773) 0:03:51.322 ******* 2026-02-08 03:01:28.589852 | orchestrator | ok: [testbed-node-0] 2026-02-08 03:01:28.589860 | orchestrator | ok: [testbed-node-4] 2026-02-08 03:01:28.589868 | orchestrator | ok: [testbed-node-3] 2026-02-08 03:01:28.589875 | orchestrator | ok: [testbed-node-5] 2026-02-08 03:01:28.589883 | orchestrator | ok: [testbed-manager] 2026-02-08 03:01:28.589891 | orchestrator | ok: [testbed-node-1] 2026-02-08 03:01:28.589899 | orchestrator | ok: 
[testbed-node-2] 2026-02-08 03:01:28.589906 | orchestrator | 2026-02-08 03:01:28.589914 | orchestrator | TASK [osism.commons.motd : Remove pam_motd.so rule] **************************** 2026-02-08 03:01:28.589930 | orchestrator | Sunday 08 February 2026 03:01:27 +0000 (0:00:00.649) 0:03:51.972 ******* 2026-02-08 03:01:28.589946 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1770518077.3124635, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-08 03:01:28.589957 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1770518086.5090709, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-08 03:01:28.589966 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1770518091.299993, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': 
False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-08 03:01:28.589992 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1770518096.7182438, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-08 03:01:33.343517 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1770518100.6701138, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-08 03:01:33.343636 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1770518089.5068717, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-08 03:01:33.343667 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 
'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1770518097.3673742, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-08 03:01:33.343724 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-08 03:01:33.343754 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-08 03:01:33.343766 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 
1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-08 03:01:33.343778 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-08 03:01:33.343818 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-08 03:01:33.343830 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': 
False, 'isgid': False}) 2026-02-08 03:01:33.343842 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-08 03:01:33.343863 | orchestrator | 2026-02-08 03:01:33.343877 | orchestrator | TASK [osism.commons.motd : Copy motd file] ************************************* 2026-02-08 03:01:33.343889 | orchestrator | Sunday 08 February 2026 03:01:28 +0000 (0:00:00.918) 0:03:52.891 ******* 2026-02-08 03:01:33.343901 | orchestrator | changed: [testbed-node-3] 2026-02-08 03:01:33.343912 | orchestrator | changed: [testbed-manager] 2026-02-08 03:01:33.343923 | orchestrator | changed: [testbed-node-4] 2026-02-08 03:01:33.343934 | orchestrator | changed: [testbed-node-5] 2026-02-08 03:01:33.343945 | orchestrator | changed: [testbed-node-0] 2026-02-08 03:01:33.343956 | orchestrator | changed: [testbed-node-1] 2026-02-08 03:01:33.343967 | orchestrator | changed: [testbed-node-2] 2026-02-08 03:01:33.343977 | orchestrator | 2026-02-08 03:01:33.343988 | orchestrator | TASK [osism.commons.motd : Copy issue file] ************************************ 2026-02-08 03:01:33.343999 | orchestrator | Sunday 08 February 2026 03:01:29 +0000 (0:00:01.058) 0:03:53.949 ******* 2026-02-08 03:01:33.344010 | orchestrator | changed: [testbed-manager] 2026-02-08 03:01:33.344020 | orchestrator | changed: [testbed-node-3] 2026-02-08 03:01:33.344031 | orchestrator | changed: [testbed-node-4] 2026-02-08 03:01:33.344042 | orchestrator | changed: [testbed-node-5] 2026-02-08 03:01:33.344055 | 
orchestrator | changed: [testbed-node-0] 2026-02-08 03:01:33.344073 | orchestrator | changed: [testbed-node-1] 2026-02-08 03:01:33.344092 | orchestrator | changed: [testbed-node-2] 2026-02-08 03:01:33.344108 | orchestrator | 2026-02-08 03:01:33.344133 | orchestrator | TASK [osism.commons.motd : Copy issue.net file] ******************************** 2026-02-08 03:01:33.344151 | orchestrator | Sunday 08 February 2026 03:01:30 +0000 (0:00:01.104) 0:03:55.054 ******* 2026-02-08 03:01:33.344170 | orchestrator | changed: [testbed-manager] 2026-02-08 03:01:33.344188 | orchestrator | changed: [testbed-node-3] 2026-02-08 03:01:33.344236 | orchestrator | changed: [testbed-node-4] 2026-02-08 03:01:33.344256 | orchestrator | changed: [testbed-node-5] 2026-02-08 03:01:33.344277 | orchestrator | changed: [testbed-node-0] 2026-02-08 03:01:33.344298 | orchestrator | changed: [testbed-node-1] 2026-02-08 03:01:33.344317 | orchestrator | changed: [testbed-node-2] 2026-02-08 03:01:33.344335 | orchestrator | 2026-02-08 03:01:33.344355 | orchestrator | TASK [osism.commons.motd : Configure SSH to print the motd] ******************** 2026-02-08 03:01:33.344377 | orchestrator | Sunday 08 February 2026 03:01:31 +0000 (0:00:01.140) 0:03:56.194 ******* 2026-02-08 03:01:33.344396 | orchestrator | skipping: [testbed-manager] 2026-02-08 03:01:33.344414 | orchestrator | skipping: [testbed-node-3] 2026-02-08 03:01:33.344426 | orchestrator | skipping: [testbed-node-4] 2026-02-08 03:01:33.344436 | orchestrator | skipping: [testbed-node-5] 2026-02-08 03:01:33.344447 | orchestrator | skipping: [testbed-node-0] 2026-02-08 03:01:33.344458 | orchestrator | skipping: [testbed-node-1] 2026-02-08 03:01:33.344468 | orchestrator | skipping: [testbed-node-2] 2026-02-08 03:01:33.344479 | orchestrator | 2026-02-08 03:01:33.344490 | orchestrator | TASK [osism.commons.motd : Configure SSH to not print the motd] **************** 2026-02-08 03:01:33.344500 | orchestrator | Sunday 08 February 2026 03:01:32 +0000 
(0:00:00.262) 0:03:56.457 ******* 2026-02-08 03:01:33.344511 | orchestrator | ok: [testbed-manager] 2026-02-08 03:01:33.344599 | orchestrator | ok: [testbed-node-3] 2026-02-08 03:01:33.344617 | orchestrator | ok: [testbed-node-4] 2026-02-08 03:01:33.344635 | orchestrator | ok: [testbed-node-0] 2026-02-08 03:01:33.344653 | orchestrator | ok: [testbed-node-5] 2026-02-08 03:01:33.344671 | orchestrator | ok: [testbed-node-1] 2026-02-08 03:01:33.344689 | orchestrator | ok: [testbed-node-2] 2026-02-08 03:01:33.344707 | orchestrator | 2026-02-08 03:01:33.344726 | orchestrator | TASK [osism.services.rng : Include distribution specific install tasks] ******** 2026-02-08 03:01:33.344744 | orchestrator | Sunday 08 February 2026 03:01:32 +0000 (0:00:00.777) 0:03:57.234 ******* 2026-02-08 03:01:33.344765 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rng/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-08 03:01:33.344803 | orchestrator | 2026-02-08 03:01:33.344822 | orchestrator | TASK [osism.services.rng : Install rng package] ******************************** 2026-02-08 03:01:33.344850 | orchestrator | Sunday 08 February 2026 03:01:33 +0000 (0:00:00.415) 0:03:57.649 ******* 2026-02-08 03:02:45.371014 | orchestrator | ok: [testbed-manager] 2026-02-08 03:02:45.371135 | orchestrator | changed: [testbed-node-3] 2026-02-08 03:02:45.371155 | orchestrator | changed: [testbed-node-5] 2026-02-08 03:02:45.371170 | orchestrator | changed: [testbed-node-4] 2026-02-08 03:02:45.371187 | orchestrator | changed: [testbed-node-1] 2026-02-08 03:02:45.371203 | orchestrator | changed: [testbed-node-2] 2026-02-08 03:02:45.371249 | orchestrator | changed: [testbed-node-0] 2026-02-08 03:02:45.371267 | orchestrator | 2026-02-08 03:02:45.371285 | orchestrator | TASK [osism.services.rng : Remove haveged package] ***************************** 
2026-02-08 03:02:45.371302 | orchestrator | Sunday 08 February 2026 03:01:40 +0000 (0:00:07.253) 0:04:04.903 ******* 2026-02-08 03:02:45.371319 | orchestrator | ok: [testbed-manager] 2026-02-08 03:02:45.371335 | orchestrator | ok: [testbed-node-3] 2026-02-08 03:02:45.371352 | orchestrator | ok: [testbed-node-4] 2026-02-08 03:02:45.371368 | orchestrator | ok: [testbed-node-5] 2026-02-08 03:02:45.371384 | orchestrator | ok: [testbed-node-0] 2026-02-08 03:02:45.371400 | orchestrator | ok: [testbed-node-1] 2026-02-08 03:02:45.371417 | orchestrator | ok: [testbed-node-2] 2026-02-08 03:02:45.371433 | orchestrator | 2026-02-08 03:02:45.371449 | orchestrator | TASK [osism.services.rng : Manage rng service] ********************************* 2026-02-08 03:02:45.371465 | orchestrator | Sunday 08 February 2026 03:01:41 +0000 (0:00:01.159) 0:04:06.062 ******* 2026-02-08 03:02:45.371482 | orchestrator | ok: [testbed-manager] 2026-02-08 03:02:45.371498 | orchestrator | ok: [testbed-node-4] 2026-02-08 03:02:45.371515 | orchestrator | ok: [testbed-node-3] 2026-02-08 03:02:45.371531 | orchestrator | ok: [testbed-node-5] 2026-02-08 03:02:45.371546 | orchestrator | ok: [testbed-node-0] 2026-02-08 03:02:45.371561 | orchestrator | ok: [testbed-node-1] 2026-02-08 03:02:45.371577 | orchestrator | ok: [testbed-node-2] 2026-02-08 03:02:45.371593 | orchestrator | 2026-02-08 03:02:45.371609 | orchestrator | TASK [osism.commons.cleanup : Gather variables for each operating system] ****** 2026-02-08 03:02:45.371625 | orchestrator | Sunday 08 February 2026 03:01:42 +0000 (0:00:01.081) 0:04:07.143 ******* 2026-02-08 03:02:45.371641 | orchestrator | ok: [testbed-manager] 2026-02-08 03:02:45.371656 | orchestrator | ok: [testbed-node-3] 2026-02-08 03:02:45.371671 | orchestrator | ok: [testbed-node-4] 2026-02-08 03:02:45.371686 | orchestrator | ok: [testbed-node-5] 2026-02-08 03:02:45.371702 | orchestrator | ok: [testbed-node-0] 2026-02-08 03:02:45.371719 | orchestrator | ok: [testbed-node-1] 
2026-02-08 03:02:45.371736 | orchestrator | ok: [testbed-node-2] 2026-02-08 03:02:45.371753 | orchestrator | 2026-02-08 03:02:45.371769 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_packages_distribution variable to default value] *** 2026-02-08 03:02:45.371786 | orchestrator | Sunday 08 February 2026 03:01:43 +0000 (0:00:00.296) 0:04:07.440 ******* 2026-02-08 03:02:45.371802 | orchestrator | ok: [testbed-manager] 2026-02-08 03:02:45.371817 | orchestrator | ok: [testbed-node-3] 2026-02-08 03:02:45.371832 | orchestrator | ok: [testbed-node-4] 2026-02-08 03:02:45.371846 | orchestrator | ok: [testbed-node-5] 2026-02-08 03:02:45.371860 | orchestrator | ok: [testbed-node-0] 2026-02-08 03:02:45.371875 | orchestrator | ok: [testbed-node-1] 2026-02-08 03:02:45.371890 | orchestrator | ok: [testbed-node-2] 2026-02-08 03:02:45.371905 | orchestrator | 2026-02-08 03:02:45.371920 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_services_distribution variable to default value] *** 2026-02-08 03:02:45.371935 | orchestrator | Sunday 08 February 2026 03:01:43 +0000 (0:00:00.325) 0:04:07.765 ******* 2026-02-08 03:02:45.371950 | orchestrator | ok: [testbed-manager] 2026-02-08 03:02:45.371966 | orchestrator | ok: [testbed-node-3] 2026-02-08 03:02:45.371981 | orchestrator | ok: [testbed-node-4] 2026-02-08 03:02:45.372019 | orchestrator | ok: [testbed-node-5] 2026-02-08 03:02:45.372034 | orchestrator | ok: [testbed-node-0] 2026-02-08 03:02:45.372049 | orchestrator | ok: [testbed-node-1] 2026-02-08 03:02:45.372064 | orchestrator | ok: [testbed-node-2] 2026-02-08 03:02:45.372081 | orchestrator | 2026-02-08 03:02:45.372096 | orchestrator | TASK [osism.commons.cleanup : Populate service facts] ************************** 2026-02-08 03:02:45.372195 | orchestrator | Sunday 08 February 2026 03:01:43 +0000 (0:00:00.297) 0:04:08.063 ******* 2026-02-08 03:02:45.372275 | orchestrator | ok: [testbed-manager] 2026-02-08 03:02:45.372293 | orchestrator | ok: [testbed-node-3] 
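
The "Populate service facts" steps above gather systemd unit state so that later cleanup tasks only act on services that actually exist on a host. A minimal sketch of that pattern follows; the `cleanup_services` variable name is an assumption, while `ModemManager.service` and the module calls match the log:

```yaml
# Minimal sketch: gather service facts, then stop/disable only services
# present on the host. cleanup_services is an assumed variable name;
# service_facts and ansible.builtin.service are standard Ansible modules.
- name: Populate service facts
  ansible.builtin.service_facts:

- name: Cleanup services
  ansible.builtin.service:
    name: "{{ item }}"
    state: stopped
    enabled: false
  loop: "{{ cleanup_services | default(['ModemManager.service']) }}"
  when: item in ansible_facts.services
```

When a listed service is not installed (or the cleanup is disabled for the group), each loop item is skipped, which matches the `skipping:` lines in the "Cleanup services" task output.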
2026-02-08 03:02:45.372308 | orchestrator | ok: [testbed-node-4] 2026-02-08 03:02:45.372324 | orchestrator | ok: [testbed-node-1] 2026-02-08 03:02:45.372339 | orchestrator | ok: [testbed-node-5] 2026-02-08 03:02:45.372355 | orchestrator | ok: [testbed-node-2] 2026-02-08 03:02:45.372370 | orchestrator | ok: [testbed-node-0] 2026-02-08 03:02:45.372385 | orchestrator | 2026-02-08 03:02:45.372400 | orchestrator | TASK [osism.commons.cleanup : Include distribution specific timer tasks] ******* 2026-02-08 03:02:45.372417 | orchestrator | Sunday 08 February 2026 03:01:49 +0000 (0:00:05.266) 0:04:13.330 ******* 2026-02-08 03:02:45.372435 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/timers-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-08 03:02:45.372453 | orchestrator | 2026-02-08 03:02:45.372470 | orchestrator | TASK [osism.commons.cleanup : Disable apt-daily timers] ************************ 2026-02-08 03:02:45.372485 | orchestrator | Sunday 08 February 2026 03:01:49 +0000 (0:00:00.388) 0:04:13.718 ******* 2026-02-08 03:02:45.372523 | orchestrator | skipping: [testbed-manager] => (item=apt-daily-upgrade)  2026-02-08 03:02:45.372539 | orchestrator | skipping: [testbed-manager] => (item=apt-daily)  2026-02-08 03:02:45.372554 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily-upgrade)  2026-02-08 03:02:45.372570 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily)  2026-02-08 03:02:45.372586 | orchestrator | skipping: [testbed-manager] 2026-02-08 03:02:45.372602 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily-upgrade)  2026-02-08 03:02:45.372617 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily)  2026-02-08 03:02:45.372633 | orchestrator | skipping: [testbed-node-3] 2026-02-08 03:02:45.372649 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily-upgrade)  
2026-02-08 03:02:45.372665 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily)
2026-02-08 03:02:45.372680 | orchestrator | skipping: [testbed-node-4]
2026-02-08 03:02:45.372695 | orchestrator | skipping: [testbed-node-5]
2026-02-08 03:02:45.372711 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily-upgrade)
2026-02-08 03:02:45.372742 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily)
2026-02-08 03:02:45.372758 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily-upgrade)
2026-02-08 03:02:45.372774 | orchestrator | skipping: [testbed-node-0]
2026-02-08 03:02:45.372813 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily)
2026-02-08 03:02:45.372828 | orchestrator | skipping: [testbed-node-1]
2026-02-08 03:02:45.372842 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily-upgrade)
2026-02-08 03:02:45.372857 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily)
2026-02-08 03:02:45.372870 | orchestrator | skipping: [testbed-node-2]
2026-02-08 03:02:45.372885 | orchestrator |
2026-02-08 03:02:45.372899 | orchestrator | TASK [osism.commons.cleanup : Include service tasks] ***************************
2026-02-08 03:02:45.372913 | orchestrator | Sunday 08 February 2026 03:01:49 +0000 (0:00:00.335) 0:04:14.054 *******
2026-02-08 03:02:45.372927 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/services-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-08 03:02:45.372941 | orchestrator |
2026-02-08 03:02:45.372955 | orchestrator | TASK [osism.commons.cleanup : Cleanup services] ********************************
2026-02-08 03:02:45.372981 | orchestrator | Sunday 08 February 2026 03:01:50 +0000 (0:00:00.404) 0:04:14.459 *******
2026-02-08 03:02:45.372994 | orchestrator | skipping: [testbed-manager] => (item=ModemManager.service)
2026-02-08 03:02:45.373008 | orchestrator | skipping: [testbed-manager]
2026-02-08 03:02:45.373020 | orchestrator | skipping: [testbed-node-3] => (item=ModemManager.service)
2026-02-08 03:02:45.373033 | orchestrator | skipping: [testbed-node-4] => (item=ModemManager.service)
2026-02-08 03:02:45.373047 | orchestrator | skipping: [testbed-node-3]
2026-02-08 03:02:45.373061 | orchestrator | skipping: [testbed-node-5] => (item=ModemManager.service)
2026-02-08 03:02:45.373076 | orchestrator | skipping: [testbed-node-4]
2026-02-08 03:02:45.373090 | orchestrator | skipping: [testbed-node-0] => (item=ModemManager.service)
2026-02-08 03:02:45.373135 | orchestrator | skipping: [testbed-node-5]
2026-02-08 03:02:45.373152 | orchestrator | skipping: [testbed-node-1] => (item=ModemManager.service)
2026-02-08 03:02:45.373166 | orchestrator | skipping: [testbed-node-0]
2026-02-08 03:02:45.373180 | orchestrator | skipping: [testbed-node-1]
2026-02-08 03:02:45.373194 | orchestrator | skipping: [testbed-node-2] => (item=ModemManager.service)
2026-02-08 03:02:45.373207 | orchestrator | skipping: [testbed-node-2]
2026-02-08 03:02:45.373272 | orchestrator |
2026-02-08 03:02:45.373285 | orchestrator | TASK [osism.commons.cleanup : Include packages tasks] **************************
2026-02-08 03:02:45.373297 | orchestrator | Sunday 08 February 2026 03:01:50 +0000 (0:00:00.331) 0:04:14.790 *******
2026-02-08 03:02:45.373312 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/packages-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-08 03:02:45.373325 | orchestrator |
2026-02-08 03:02:45.373340 | orchestrator | TASK [osism.commons.cleanup : Cleanup installed packages] **********************
2026-02-08 03:02:45.373356 | orchestrator | Sunday 08 February 2026 03:01:50 +0000 (0:00:00.407) 0:04:15.197 *******
2026-02-08 03:02:45.373368 | orchestrator | changed: [testbed-node-1]
2026-02-08 03:02:45.373382 | orchestrator | changed: [testbed-node-3]
2026-02-08 03:02:45.373395 | orchestrator | changed: [testbed-node-5]
2026-02-08 03:02:45.373410 | orchestrator | changed: [testbed-node-0]
2026-02-08 03:02:45.373435 | orchestrator | changed: [testbed-node-4]
2026-02-08 03:02:45.373448 | orchestrator | changed: [testbed-node-2]
2026-02-08 03:02:45.373461 | orchestrator | changed: [testbed-manager]
2026-02-08 03:02:45.373474 | orchestrator |
2026-02-08 03:02:45.373487 | orchestrator | TASK [osism.commons.cleanup : Remove cloudinit package] ************************
2026-02-08 03:02:45.373499 | orchestrator | Sunday 08 February 2026 03:02:23 +0000 (0:00:32.842) 0:04:48.040 *******
2026-02-08 03:02:45.373512 | orchestrator | changed: [testbed-manager]
2026-02-08 03:02:45.373527 | orchestrator | changed: [testbed-node-1]
2026-02-08 03:02:45.373542 | orchestrator | changed: [testbed-node-5]
2026-02-08 03:02:45.373556 | orchestrator | changed: [testbed-node-3]
2026-02-08 03:02:45.373570 | orchestrator | changed: [testbed-node-0]
2026-02-08 03:02:45.373584 | orchestrator | changed: [testbed-node-4]
2026-02-08 03:02:45.373598 | orchestrator | changed: [testbed-node-2]
2026-02-08 03:02:45.373612 | orchestrator |
2026-02-08 03:02:45.373625 | orchestrator | TASK [osism.commons.cleanup : Uninstall unattended-upgrades package] ***********
2026-02-08 03:02:45.373639 | orchestrator | Sunday 08 February 2026 03:02:30 +0000 (0:00:07.199) 0:04:55.239 *******
2026-02-08 03:02:45.373652 | orchestrator | changed: [testbed-node-3]
2026-02-08 03:02:45.373666 | orchestrator | changed: [testbed-node-1]
2026-02-08 03:02:45.373679 | orchestrator | changed: [testbed-node-5]
2026-02-08 03:02:45.373692 | orchestrator | changed: [testbed-node-0]
2026-02-08 03:02:45.373706 | orchestrator | changed: [testbed-manager]
2026-02-08 03:02:45.373719 | orchestrator | changed: [testbed-node-2]
2026-02-08 03:02:45.373732 | orchestrator | changed: [testbed-node-4]
2026-02-08 03:02:45.373746 | orchestrator |
2026-02-08 03:02:45.373759 | orchestrator | TASK [osism.commons.cleanup : Remove useless packages from the cache] **********
2026-02-08 03:02:45.373785 | orchestrator | Sunday 08 February 2026 03:02:38 +0000 (0:00:07.335) 0:05:02.574 *******
2026-02-08 03:02:45.373799 | orchestrator | ok: [testbed-node-3]
2026-02-08 03:02:45.373812 | orchestrator | ok: [testbed-manager]
2026-02-08 03:02:45.373824 | orchestrator | ok: [testbed-node-5]
2026-02-08 03:02:45.373837 | orchestrator | ok: [testbed-node-1]
2026-02-08 03:02:45.373851 | orchestrator | ok: [testbed-node-0]
2026-02-08 03:02:45.373864 | orchestrator | ok: [testbed-node-2]
2026-02-08 03:02:45.373877 | orchestrator | ok: [testbed-node-4]
2026-02-08 03:02:45.373890 | orchestrator |
2026-02-08 03:02:45.373904 | orchestrator | TASK [osism.commons.cleanup : Remove dependencies that are no longer required] ***
2026-02-08 03:02:45.373918 | orchestrator | Sunday 08 February 2026 03:02:39 +0000 (0:00:01.616) 0:05:04.191 *******
2026-02-08 03:02:45.373933 | orchestrator | changed: [testbed-node-3]
2026-02-08 03:02:45.373947 | orchestrator | changed: [testbed-node-5]
2026-02-08 03:02:45.373960 | orchestrator | changed: [testbed-node-0]
2026-02-08 03:02:45.373974 | orchestrator | changed: [testbed-node-1]
2026-02-08 03:02:45.373987 | orchestrator | changed: [testbed-node-2]
2026-02-08 03:02:45.374001 | orchestrator | changed: [testbed-node-4]
2026-02-08 03:02:45.374014 | orchestrator | changed: [testbed-manager]
2026-02-08 03:02:45.374089 | orchestrator |
2026-02-08 03:02:45.374122 | orchestrator | TASK [osism.commons.cleanup : Include cloudinit tasks] *************************
2026-02-08 03:02:56.130454 | orchestrator | Sunday 08 February 2026 03:02:45 +0000 (0:00:05.478) 0:05:09.669 *******
2026-02-08 03:02:56.130546 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/cloudinit.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-08 03:02:56.130556 | orchestrator |
2026-02-08 03:02:56.130562 | orchestrator | TASK [osism.commons.cleanup : Remove cloud-init configuration directory] *******
2026-02-08 03:02:56.130610 | orchestrator | Sunday 08 February 2026 03:02:45 +0000 (0:00:00.410) 0:05:10.079 *******
2026-02-08 03:02:56.130618 | orchestrator | changed: [testbed-manager]
2026-02-08 03:02:56.130624 | orchestrator | changed: [testbed-node-3]
2026-02-08 03:02:56.130629 | orchestrator | changed: [testbed-node-4]
2026-02-08 03:02:56.130634 | orchestrator | changed: [testbed-node-5]
2026-02-08 03:02:56.130640 | orchestrator | changed: [testbed-node-0]
2026-02-08 03:02:56.130645 | orchestrator | changed: [testbed-node-1]
2026-02-08 03:02:56.130650 | orchestrator | changed: [testbed-node-2]
2026-02-08 03:02:56.130654 | orchestrator |
2026-02-08 03:02:56.130659 | orchestrator | TASK [osism.commons.timezone : Install tzdata package] *************************
2026-02-08 03:02:56.130664 | orchestrator | Sunday 08 February 2026 03:02:46 +0000 (0:00:00.698) 0:05:10.778 *******
2026-02-08 03:02:56.130669 | orchestrator | ok: [testbed-node-3]
2026-02-08 03:02:56.130675 | orchestrator | ok: [testbed-manager]
2026-02-08 03:02:56.130679 | orchestrator | ok: [testbed-node-4]
2026-02-08 03:02:56.130684 | orchestrator | ok: [testbed-node-0]
2026-02-08 03:02:56.130688 | orchestrator | ok: [testbed-node-1]
2026-02-08 03:02:56.130693 | orchestrator | ok: [testbed-node-5]
2026-02-08 03:02:56.130697 | orchestrator | ok: [testbed-node-2]
2026-02-08 03:02:56.130702 | orchestrator |
2026-02-08 03:02:56.130707 | orchestrator | TASK [osism.commons.timezone : Set timezone to UTC] ****************************
2026-02-08 03:02:56.130711 | orchestrator | Sunday 08 February 2026 03:02:48 +0000 (0:00:01.614) 0:05:12.393 *******
2026-02-08 03:02:56.130716 | orchestrator | changed: [testbed-node-3]
2026-02-08 03:02:56.130720 | orchestrator | changed: [testbed-node-4]
2026-02-08 03:02:56.130725 | orchestrator | changed: [testbed-node-5]
2026-02-08 03:02:56.130729 | orchestrator | changed: [testbed-node-0]
2026-02-08 03:02:56.130734 | orchestrator | changed: [testbed-node-1]
2026-02-08 03:02:56.130739 | orchestrator | changed: [testbed-manager]
2026-02-08 03:02:56.130747 | orchestrator | changed: [testbed-node-2]
2026-02-08 03:02:56.130754 | orchestrator |
2026-02-08 03:02:56.130761 | orchestrator | TASK [osism.commons.timezone : Create /etc/adjtime file] ***********************
2026-02-08 03:02:56.130772 | orchestrator | Sunday 08 February 2026 03:02:48 +0000 (0:00:00.757) 0:05:13.150 *******
2026-02-08 03:02:56.130803 | orchestrator | skipping: [testbed-manager]
2026-02-08 03:02:56.130810 | orchestrator | skipping: [testbed-node-3]
2026-02-08 03:02:56.130817 | orchestrator | skipping: [testbed-node-4]
2026-02-08 03:02:56.130824 | orchestrator | skipping: [testbed-node-5]
2026-02-08 03:02:56.130831 | orchestrator | skipping: [testbed-node-0]
2026-02-08 03:02:56.130838 | orchestrator | skipping: [testbed-node-1]
2026-02-08 03:02:56.130845 | orchestrator | skipping: [testbed-node-2]
2026-02-08 03:02:56.130852 | orchestrator |
2026-02-08 03:02:56.130859 | orchestrator | TASK [osism.commons.timezone : Ensure UTC in /etc/adjtime] *********************
2026-02-08 03:02:56.130866 | orchestrator | Sunday 08 February 2026 03:02:49 +0000 (0:00:00.298) 0:05:13.449 *******
2026-02-08 03:02:56.130873 | orchestrator | skipping: [testbed-manager]
2026-02-08 03:02:56.130880 | orchestrator | skipping: [testbed-node-3]
2026-02-08 03:02:56.130887 | orchestrator | skipping: [testbed-node-4]
2026-02-08 03:02:56.130907 | orchestrator | skipping: [testbed-node-5]
2026-02-08 03:02:56.130915 | orchestrator | skipping: [testbed-node-0]
2026-02-08 03:02:56.130922 | orchestrator | skipping: [testbed-node-1]
2026-02-08 03:02:56.130929 | orchestrator | skipping: [testbed-node-2]
2026-02-08 03:02:56.130937 | orchestrator |
2026-02-08 03:02:56.130944 | orchestrator | TASK [osism.services.docker : Gather variables for each operating system] ******
2026-02-08 03:02:56.130951 | orchestrator | Sunday 08 February 2026 03:02:49 +0000 (0:00:00.365) 0:05:13.815 *******
2026-02-08 03:02:56.130958 | orchestrator | ok: [testbed-manager]
2026-02-08 03:02:56.130965 | orchestrator | ok: [testbed-node-3]
2026-02-08 03:02:56.130972 | orchestrator | ok: [testbed-node-4]
2026-02-08 03:02:56.130980 | orchestrator | ok: [testbed-node-5]
2026-02-08 03:02:56.130987 | orchestrator | ok: [testbed-node-0]
2026-02-08 03:02:56.130995 | orchestrator | ok: [testbed-node-1]
2026-02-08 03:02:56.131003 | orchestrator | ok: [testbed-node-2]
2026-02-08 03:02:56.131010 | orchestrator |
2026-02-08 03:02:56.131018 | orchestrator | TASK [osism.services.docker : Set docker_version variable to default value] ****
2026-02-08 03:02:56.131026 | orchestrator | Sunday 08 February 2026 03:02:49 +0000 (0:00:00.323) 0:05:14.138 *******
2026-02-08 03:02:56.131033 | orchestrator | skipping: [testbed-manager]
2026-02-08 03:02:56.131041 | orchestrator | skipping: [testbed-node-3]
2026-02-08 03:02:56.131048 | orchestrator | skipping: [testbed-node-4]
2026-02-08 03:02:56.131056 | orchestrator | skipping: [testbed-node-5]
2026-02-08 03:02:56.131063 | orchestrator | skipping: [testbed-node-0]
2026-02-08 03:02:56.131070 | orchestrator | skipping: [testbed-node-1]
2026-02-08 03:02:56.131079 | orchestrator | skipping: [testbed-node-2]
2026-02-08 03:02:56.131086 | orchestrator |
2026-02-08 03:02:56.131094 | orchestrator | TASK [osism.services.docker : Set docker_cli_version variable to default value] ***
2026-02-08 03:02:56.131104 | orchestrator | Sunday 08 February 2026 03:02:50 +0000 (0:00:00.268) 0:05:14.406 *******
2026-02-08 03:02:56.131112 | orchestrator | ok: [testbed-manager]
2026-02-08 03:02:56.131119 | orchestrator | ok: [testbed-node-3]
2026-02-08 03:02:56.131127 | orchestrator | ok: [testbed-node-4]
2026-02-08 03:02:56.131136 | orchestrator | ok: [testbed-node-5]
2026-02-08 03:02:56.131144 | orchestrator | ok: [testbed-node-0]
2026-02-08 03:02:56.131151 | orchestrator | ok: [testbed-node-1]
2026-02-08 03:02:56.131158 | orchestrator | ok: [testbed-node-2]
2026-02-08 03:02:56.131166 | orchestrator |
2026-02-08 03:02:56.131173 | orchestrator | TASK [osism.services.docker : Print used docker version] ***********************
2026-02-08 03:02:56.131181 | orchestrator | Sunday 08 February 2026 03:02:50 +0000 (0:00:00.288) 0:05:14.720 *******
2026-02-08 03:02:56.131189 | orchestrator | ok: [testbed-manager] =>
2026-02-08 03:02:56.131196 | orchestrator |  docker_version: 5:27.5.1
2026-02-08 03:02:56.131203 | orchestrator | ok: [testbed-node-3] =>
2026-02-08 03:02:56.131253 | orchestrator |  docker_version: 5:27.5.1
2026-02-08 03:02:56.131261 | orchestrator | ok: [testbed-node-4] =>
2026-02-08 03:02:56.131269 | orchestrator |  docker_version: 5:27.5.1
2026-02-08 03:02:56.131277 | orchestrator | ok: [testbed-node-5] =>
2026-02-08 03:02:56.131285 | orchestrator |  docker_version: 5:27.5.1
2026-02-08 03:02:56.131311 | orchestrator | ok: [testbed-node-0] =>
2026-02-08 03:02:56.131328 | orchestrator |  docker_version: 5:27.5.1
2026-02-08 03:02:56.131334 | orchestrator | ok: [testbed-node-1] =>
2026-02-08 03:02:56.131340 | orchestrator |  docker_version: 5:27.5.1
2026-02-08 03:02:56.131345 | orchestrator | ok: [testbed-node-2] =>
2026-02-08 03:02:56.131351 | orchestrator |  docker_version: 5:27.5.1
2026-02-08 03:02:56.131357 | orchestrator |
2026-02-08 03:02:56.131362 | orchestrator | TASK [osism.services.docker : Print used docker cli version] *******************
2026-02-08 03:02:56.131369 | orchestrator | Sunday 08 February 2026 03:02:50 +0000 (0:00:00.288) 0:05:15.008 *******
2026-02-08 03:02:56.131374 | orchestrator | ok: [testbed-manager] =>
2026-02-08 03:02:56.131380 | orchestrator |  docker_cli_version: 5:27.5.1
2026-02-08 03:02:56.131386 | orchestrator | ok: [testbed-node-3] =>
2026-02-08 03:02:56.131390 | orchestrator |  docker_cli_version: 5:27.5.1
2026-02-08 03:02:56.131395 | orchestrator | ok: [testbed-node-4] =>
2026-02-08 03:02:56.131399 | orchestrator |  docker_cli_version: 5:27.5.1
2026-02-08 03:02:56.131404 | orchestrator | ok: [testbed-node-5] =>
2026-02-08 03:02:56.131408 | orchestrator |  docker_cli_version: 5:27.5.1
2026-02-08 03:02:56.131413 | orchestrator | ok: [testbed-node-0] =>
2026-02-08 03:02:56.131417 | orchestrator |  docker_cli_version: 5:27.5.1
2026-02-08 03:02:56.131421 | orchestrator | ok: [testbed-node-1] =>
2026-02-08 03:02:56.131426 | orchestrator |  docker_cli_version: 5:27.5.1
2026-02-08 03:02:56.131431 | orchestrator | ok: [testbed-node-2] =>
2026-02-08 03:02:56.131435 | orchestrator |  docker_cli_version: 5:27.5.1
2026-02-08 03:02:56.131440 | orchestrator |
2026-02-08 03:02:56.131447 | orchestrator | TASK [osism.services.docker : Include block storage tasks] *********************
2026-02-08 03:02:56.131454 | orchestrator | Sunday 08 February 2026 03:02:51 +0000 (0:00:00.339) 0:05:15.348 *******
2026-02-08 03:02:56.131462 | orchestrator | skipping: [testbed-manager]
2026-02-08 03:02:56.131522 | orchestrator | skipping: [testbed-node-3]
2026-02-08 03:02:56.131528 | orchestrator | skipping: [testbed-node-4]
2026-02-08 03:02:56.131533 | orchestrator | skipping: [testbed-node-5]
2026-02-08 03:02:56.131537 | orchestrator | skipping: [testbed-node-0]
2026-02-08 03:02:56.131542 | orchestrator | skipping: [testbed-node-1]
2026-02-08 03:02:56.131546 | orchestrator | skipping: [testbed-node-2]
2026-02-08 03:02:56.131551 | orchestrator |
2026-02-08 03:02:56.131555 | orchestrator | TASK [osism.services.docker : Include zram storage tasks] **********************
2026-02-08 03:02:56.131560 | orchestrator | Sunday 08 February 2026 03:02:51 +0000 (0:00:00.288) 0:05:15.637 *******
2026-02-08 03:02:56.131564 | orchestrator | skipping: [testbed-manager]
2026-02-08 03:02:56.131569 | orchestrator | skipping: [testbed-node-3]
2026-02-08 03:02:56.131573 | orchestrator | skipping: [testbed-node-4]
2026-02-08 03:02:56.131578 | orchestrator | skipping: [testbed-node-5]
2026-02-08 03:02:56.131582 | orchestrator | skipping: [testbed-node-0]
2026-02-08 03:02:56.131587 | orchestrator | skipping: [testbed-node-1]
2026-02-08 03:02:56.131591 | orchestrator | skipping: [testbed-node-2]
2026-02-08 03:02:56.131596 | orchestrator |
2026-02-08 03:02:56.131601 | orchestrator | TASK [osism.services.docker : Include docker install tasks] ********************
2026-02-08 03:02:56.131605 | orchestrator | Sunday 08 February 2026 03:02:51 +0000 (0:00:00.281) 0:05:15.918 *******
2026-02-08 03:02:56.131611 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/install-docker-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-08 03:02:56.131618 | orchestrator |
2026-02-08 03:02:56.131629 | orchestrator | TASK [osism.services.docker : Remove old architecture-dependent repository] ****
2026-02-08 03:02:56.131634 | orchestrator | Sunday 08 February 2026 03:02:52 +0000 (0:00:00.411) 0:05:16.329 *******
2026-02-08 03:02:56.131639 | orchestrator | ok: [testbed-node-1]
2026-02-08 03:02:56.131643 | orchestrator | ok: [testbed-node-3]
2026-02-08 03:02:56.131648 | orchestrator | ok: [testbed-node-5]
2026-02-08 03:02:56.131652 | orchestrator | ok: [testbed-manager]
2026-02-08 03:02:56.131657 | orchestrator | ok: [testbed-node-0]
2026-02-08 03:02:56.131668 | orchestrator | ok: [testbed-node-4]
2026-02-08 03:02:56.131672 | orchestrator | ok: [testbed-node-2]
2026-02-08 03:02:56.131677 | orchestrator |
2026-02-08 03:02:56.131681 | orchestrator | TASK [osism.services.docker : Gather package facts] ****************************
2026-02-08 03:02:56.131686 | orchestrator | Sunday 08 February 2026 03:02:52 +0000 (0:00:00.960) 0:05:17.289 *******
2026-02-08 03:02:56.131690 | orchestrator | ok: [testbed-node-3]
2026-02-08 03:02:56.131695 | orchestrator | ok: [testbed-node-2]
2026-02-08 03:02:56.131700 | orchestrator | ok: [testbed-node-0]
2026-02-08 03:02:56.131704 | orchestrator | ok: [testbed-node-1]
2026-02-08 03:02:56.131708 | orchestrator | ok: [testbed-node-5]
2026-02-08 03:02:56.131713 | orchestrator | ok: [testbed-node-4]
2026-02-08 03:02:56.131718 | orchestrator | ok: [testbed-manager]
2026-02-08 03:02:56.131722 | orchestrator |
2026-02-08 03:02:56.131727 | orchestrator | TASK [osism.services.docker : Check whether packages are installed that should not be installed] ***
2026-02-08 03:02:56.131732 | orchestrator | Sunday 08 February 2026 03:02:55 +0000 (0:00:02.757) 0:05:20.047 *******
2026-02-08 03:02:56.131738 | orchestrator | skipping: [testbed-manager] => (item=containerd)
2026-02-08 03:02:56.131746 | orchestrator | skipping: [testbed-manager] => (item=docker.io)
2026-02-08 03:02:56.131753 | orchestrator | skipping: [testbed-manager] => (item=docker-engine)
2026-02-08 03:02:56.131760 | orchestrator | skipping: [testbed-node-3] => (item=containerd)
2026-02-08 03:02:56.131767 | orchestrator | skipping: [testbed-node-3] => (item=docker.io)
2026-02-08 03:02:56.131777 | orchestrator | skipping: [testbed-node-3] => (item=docker-engine)
2026-02-08 03:02:56.131786 | orchestrator | skipping: [testbed-manager]
2026-02-08 03:02:56.131794 | orchestrator | skipping: [testbed-node-4] => (item=containerd)
2026-02-08 03:02:56.131800 | orchestrator | skipping: [testbed-node-3]
2026-02-08 03:02:56.131807 | orchestrator | skipping: [testbed-node-4] => (item=docker.io)
2026-02-08 03:02:56.131814 | orchestrator | skipping: [testbed-node-4] => (item=docker-engine)
2026-02-08 03:02:56.131821 | orchestrator | skipping: [testbed-node-5] => (item=containerd)
2026-02-08 03:02:56.131827 | orchestrator | skipping: [testbed-node-5] => (item=docker.io)
2026-02-08 03:02:56.131835 | orchestrator | skipping: [testbed-node-5] => (item=docker-engine)
2026-02-08 03:02:56.131843 | orchestrator | skipping: [testbed-node-4]
2026-02-08 03:02:56.131848 | orchestrator | skipping: [testbed-node-0] => (item=containerd)
2026-02-08 03:02:56.131863 | orchestrator | skipping: [testbed-node-0] => (item=docker.io)
2026-02-08 03:03:54.110103 | orchestrator | skipping: [testbed-node-0] => (item=docker-engine)
2026-02-08 03:03:54.110280 | orchestrator | skipping: [testbed-node-5]
2026-02-08 03:03:54.110304 | orchestrator | skipping: [testbed-node-1] => (item=containerd)
2026-02-08 03:03:54.110320 | orchestrator | skipping: [testbed-node-1] => (item=docker.io)
2026-02-08 03:03:54.110334 | orchestrator | skipping: [testbed-node-1] => (item=docker-engine)
2026-02-08 03:03:54.110349 | orchestrator | skipping: [testbed-node-0]
2026-02-08 03:03:54.110363 | orchestrator | skipping: [testbed-node-1]
2026-02-08 03:03:54.110379 | orchestrator | skipping: [testbed-node-2] => (item=containerd)
2026-02-08 03:03:54.110410 | orchestrator | skipping: [testbed-node-2] => (item=docker.io)
2026-02-08 03:03:54.110439 | orchestrator | skipping: [testbed-node-2] => (item=docker-engine)
2026-02-08 03:03:54.110454 | orchestrator | skipping: [testbed-node-2]
2026-02-08 03:03:54.110470 | orchestrator |
2026-02-08 03:03:54.110488 | orchestrator | TASK [osism.services.docker : Install apt-transport-https package] *************
2026-02-08 03:03:54.110505 | orchestrator | Sunday 08 February 2026 03:02:56 +0000 (0:00:00.604) 0:05:20.652 *******
2026-02-08 03:03:54.110521 | orchestrator | ok: [testbed-manager]
2026-02-08 03:03:54.110537 | orchestrator | changed: [testbed-node-3]
2026-02-08 03:03:54.110554 | orchestrator | changed: [testbed-node-1]
2026-02-08 03:03:54.110569 | orchestrator | changed: [testbed-node-5]
2026-02-08 03:03:54.110586 | orchestrator | changed: [testbed-node-4]
2026-02-08 03:03:54.110602 | orchestrator | changed: [testbed-node-0]
2026-02-08 03:03:54.110649 | orchestrator | changed: [testbed-node-2]
2026-02-08 03:03:54.110666 | orchestrator |
2026-02-08 03:03:54.110683 | orchestrator | TASK [osism.services.docker : Add repository gpg key] **************************
2026-02-08 03:03:54.110700 | orchestrator | Sunday 08 February 2026 03:03:02 +0000 (0:00:06.238) 0:05:26.891 *******
2026-02-08 03:03:54.110717 | orchestrator | changed: [testbed-node-3]
2026-02-08 03:03:54.110731 | orchestrator | changed: [testbed-node-4]
2026-02-08 03:03:54.110746 | orchestrator | ok: [testbed-manager]
2026-02-08 03:03:54.110761 | orchestrator | changed: [testbed-node-5]
2026-02-08 03:03:54.110774 | orchestrator | changed: [testbed-node-0]
2026-02-08 03:03:54.110788 | orchestrator | changed: [testbed-node-1]
2026-02-08 03:03:54.110803 | orchestrator | changed: [testbed-node-2]
2026-02-08 03:03:54.110817 | orchestrator |
2026-02-08 03:03:54.110832 | orchestrator | TASK [osism.services.docker : Add repository] **********************************
2026-02-08 03:03:54.110848 | orchestrator | Sunday 08 February 2026 03:03:03 +0000 (0:00:01.068) 0:05:27.959 *******
2026-02-08 03:03:54.110862 | orchestrator | ok: [testbed-manager]
2026-02-08 03:03:54.110876 | orchestrator | changed: [testbed-node-3]
2026-02-08 03:03:54.110891 | orchestrator | changed: [testbed-node-5]
2026-02-08 03:03:54.110906 | orchestrator | changed: [testbed-node-4]
2026-02-08 03:03:54.110920 | orchestrator | changed: [testbed-node-1]
2026-02-08 03:03:54.110934 | orchestrator | changed: [testbed-node-0]
2026-02-08 03:03:54.110943 | orchestrator | changed: [testbed-node-2]
2026-02-08 03:03:54.110952 | orchestrator |
2026-02-08 03:03:54.110961 | orchestrator | TASK [osism.services.docker : Update package cache] ****************************
2026-02-08 03:03:54.110970 | orchestrator | Sunday 08 February 2026 03:03:11 +0000 (0:00:07.583) 0:05:35.543 *******
2026-02-08 03:03:54.110978 | orchestrator | changed: [testbed-manager]
2026-02-08 03:03:54.110987 | orchestrator | changed: [testbed-node-3]
2026-02-08 03:03:54.110995 | orchestrator | changed: [testbed-node-5]
2026-02-08 03:03:54.111004 | orchestrator | changed: [testbed-node-4]
2026-02-08 03:03:54.111012 | orchestrator | changed: [testbed-node-0]
2026-02-08 03:03:54.111021 | orchestrator | changed: [testbed-node-1]
2026-02-08 03:03:54.111046 | orchestrator | changed: [testbed-node-2]
2026-02-08 03:03:54.111055 | orchestrator |
2026-02-08 03:03:54.111064 | orchestrator | TASK [osism.services.docker : Pin docker package version] **********************
2026-02-08 03:03:54.111073 | orchestrator | Sunday 08 February 2026 03:03:14 +0000 (0:00:03.180) 0:05:38.723 *******
2026-02-08 03:03:54.111082 | orchestrator | ok: [testbed-manager]
2026-02-08 03:03:54.111090 | orchestrator | changed: [testbed-node-3]
2026-02-08 03:03:54.111099 | orchestrator | changed: [testbed-node-4]
2026-02-08 03:03:54.111107 | orchestrator | changed: [testbed-node-5]
2026-02-08 03:03:54.111116 | orchestrator | changed: [testbed-node-0]
2026-02-08 03:03:54.111124 | orchestrator | changed: [testbed-node-1]
2026-02-08 03:03:54.111133 | orchestrator | changed: [testbed-node-2]
2026-02-08 03:03:54.111142 | orchestrator |
2026-02-08 03:03:54.111150 | orchestrator | TASK [osism.services.docker : Pin docker-cli package version] ******************
2026-02-08 03:03:54.111159 | orchestrator | Sunday 08 February 2026 03:03:15 +0000 (0:00:01.461) 0:05:40.185 *******
2026-02-08 03:03:54.111167 | orchestrator | ok: [testbed-manager]
2026-02-08 03:03:54.111176 | orchestrator | changed: [testbed-node-3]
2026-02-08 03:03:54.111184 | orchestrator | changed: [testbed-node-4]
2026-02-08 03:03:54.111192 | orchestrator | changed: [testbed-node-5]
2026-02-08 03:03:54.111201 | orchestrator | changed: [testbed-node-0]
2026-02-08 03:03:54.111305 | orchestrator | changed: [testbed-node-1]
2026-02-08 03:03:54.111318 | orchestrator | changed: [testbed-node-2]
2026-02-08 03:03:54.111327 | orchestrator |
2026-02-08 03:03:54.111336 | orchestrator | TASK [osism.services.docker : Unlock containerd package] ***********************
2026-02-08 03:03:54.111345 | orchestrator | Sunday 08 February 2026 03:03:17 +0000 (0:00:01.497) 0:05:41.682 *******
2026-02-08 03:03:54.111353 | orchestrator | skipping: [testbed-node-3]
2026-02-08 03:03:54.111362 | orchestrator | skipping: [testbed-node-4]
2026-02-08 03:03:54.111370 | orchestrator | skipping: [testbed-node-5]
2026-02-08 03:03:54.111379 | orchestrator | skipping: [testbed-node-0]
2026-02-08 03:03:54.111400 | orchestrator | skipping: [testbed-node-1]
2026-02-08 03:03:54.111408 | orchestrator | skipping: [testbed-node-2]
2026-02-08 03:03:54.111417 | orchestrator | changed: [testbed-manager]
2026-02-08 03:03:54.111425 | orchestrator |
2026-02-08 03:03:54.111434 | orchestrator | TASK [osism.services.docker : Install containerd package] **********************
2026-02-08 03:03:54.111443 | orchestrator | Sunday 08 February 2026 03:03:18 +0000 (0:00:00.677) 0:05:42.360 *******
2026-02-08 03:03:54.111451 | orchestrator | ok: [testbed-manager]
2026-02-08 03:03:54.111460 | orchestrator | changed: [testbed-node-3]
2026-02-08 03:03:54.111468 | orchestrator | changed: [testbed-node-5]
2026-02-08 03:03:54.111477 | orchestrator | changed: [testbed-node-4]
2026-02-08 03:03:54.111485 | orchestrator | changed: [testbed-node-2]
2026-02-08 03:03:54.111493 | orchestrator | changed: [testbed-node-1]
2026-02-08 03:03:54.111502 | orchestrator | changed: [testbed-node-0]
2026-02-08 03:03:54.111510 | orchestrator |
2026-02-08 03:03:54.111519 | orchestrator | TASK [osism.services.docker : Lock containerd package] *************************
2026-02-08 03:03:54.111551 | orchestrator | Sunday 08 February 2026 03:03:26 +0000 (0:00:08.945) 0:05:51.306 *******
2026-02-08 03:03:54.111560 | orchestrator | changed: [testbed-manager]
2026-02-08 03:03:54.111568 | orchestrator | changed: [testbed-node-3]
2026-02-08 03:03:54.111577 | orchestrator | changed: [testbed-node-4]
2026-02-08 03:03:54.111585 | orchestrator | changed: [testbed-node-5]
2026-02-08 03:03:54.111594 | orchestrator | changed: [testbed-node-0]
2026-02-08 03:03:54.111602 | orchestrator | changed: [testbed-node-1]
2026-02-08 03:03:54.111611 | orchestrator | changed: [testbed-node-2]
2026-02-08 03:03:54.111619 | orchestrator |
2026-02-08 03:03:54.111628 | orchestrator | TASK [osism.services.docker : Install docker-cli package] **********************
2026-02-08 03:03:54.111637 | orchestrator | Sunday 08 February 2026 03:03:27 +0000 (0:00:00.914) 0:05:52.220 *******
2026-02-08 03:03:54.111645 | orchestrator | ok: [testbed-manager]
2026-02-08 03:03:54.111654 | orchestrator | changed: [testbed-node-3]
2026-02-08 03:03:54.111663 | orchestrator | changed: [testbed-node-1]
2026-02-08 03:03:54.111671 | orchestrator | changed: [testbed-node-5]
2026-02-08 03:03:54.111680 | orchestrator | changed: [testbed-node-0]
2026-02-08 03:03:54.111688 | orchestrator | changed: [testbed-node-2]
2026-02-08 03:03:54.111697 | orchestrator | changed: [testbed-node-4]
2026-02-08 03:03:54.111705 | orchestrator |
2026-02-08 03:03:54.111714 | orchestrator | TASK [osism.services.docker : Install docker package] **************************
2026-02-08 03:03:54.111722 | orchestrator | Sunday 08 February 2026 03:03:37 +0000 (0:00:09.139) 0:06:01.359 *******
2026-02-08 03:03:54.111731 | orchestrator | ok: [testbed-manager]
2026-02-08 03:03:54.111739 | orchestrator | changed: [testbed-node-3]
2026-02-08 03:03:54.111748 | orchestrator | changed: [testbed-node-1]
2026-02-08 03:03:54.111756 | orchestrator | changed: [testbed-node-5]
2026-02-08 03:03:54.111765 | orchestrator | changed: [testbed-node-2]
2026-02-08 03:03:54.111773 | orchestrator | changed: [testbed-node-0]
2026-02-08 03:03:54.111782 | orchestrator | changed: [testbed-node-4]
2026-02-08 03:03:54.111790 | orchestrator |
2026-02-08 03:03:54.111799 | orchestrator | TASK [osism.services.docker : Unblock installation of python docker packages] ***
2026-02-08 03:03:54.111807 | orchestrator | Sunday 08 February 2026 03:03:47 +0000 (0:00:10.467) 0:06:11.826 *******
2026-02-08 03:03:54.111816 | orchestrator | ok: [testbed-manager] => (item=python3-docker)
2026-02-08 03:03:54.111825 | orchestrator | ok: [testbed-node-3] => (item=python3-docker)
2026-02-08 03:03:54.111834 | orchestrator | ok: [testbed-node-4] => (item=python3-docker)
2026-02-08 03:03:54.111842 | orchestrator | ok: [testbed-node-5] => (item=python3-docker)
2026-02-08 03:03:54.111851 | orchestrator | ok: [testbed-node-0] => (item=python3-docker)
2026-02-08 03:03:54.111859 | orchestrator | ok: [testbed-manager] => (item=python-docker)
2026-02-08 03:03:54.111868 | orchestrator | ok: [testbed-node-3] => (item=python-docker)
2026-02-08 03:03:54.111876 | orchestrator | ok: [testbed-node-1] => (item=python3-docker)
2026-02-08 03:03:54.111884 | orchestrator | ok: [testbed-node-4] => (item=python-docker)
2026-02-08 03:03:54.111898 | orchestrator | ok: [testbed-node-2] => (item=python3-docker)
2026-02-08 03:03:54.111907 | orchestrator | ok: [testbed-node-5] => (item=python-docker)
2026-02-08 03:03:54.111915 | orchestrator | ok: [testbed-node-0] => (item=python-docker)
2026-02-08 03:03:54.111924 | orchestrator | ok: [testbed-node-1] => (item=python-docker)
2026-02-08 03:03:54.111932 | orchestrator | ok: [testbed-node-2] => (item=python-docker)
2026-02-08 03:03:54.111941 | orchestrator |
2026-02-08 03:03:54.111949 | orchestrator | TASK [osism.services.docker : Install python3 docker package] ******************
2026-02-08 03:03:54.111958 | orchestrator | Sunday 08 February 2026 03:03:48 +0000 (0:00:01.234) 0:06:13.061 *******
2026-02-08 03:03:54.111967 | orchestrator | skipping: [testbed-manager]
2026-02-08 03:03:54.111975 | orchestrator | skipping: [testbed-node-3]
2026-02-08 03:03:54.111984 | orchestrator | skipping: [testbed-node-4]
2026-02-08 03:03:54.111992 | orchestrator | skipping: [testbed-node-5]
2026-02-08 03:03:54.112038 | orchestrator | skipping: [testbed-node-0]
2026-02-08 03:03:54.112048 | orchestrator | skipping: [testbed-node-1]
2026-02-08 03:03:54.112056 | orchestrator | skipping: [testbed-node-2]
2026-02-08 03:03:54.112065 | orchestrator |
2026-02-08 03:03:54.112073 | orchestrator | TASK [osism.services.docker : Install python3 docker package from Debian Sid] ***
2026-02-08 03:03:54.112082 | orchestrator | Sunday 08 February 2026 03:03:49 +0000 (0:00:00.587) 0:06:13.649 *******
2026-02-08 03:03:54.112091 | orchestrator | ok: [testbed-manager]
2026-02-08 03:03:54.112099 | orchestrator | changed: [testbed-node-5]
2026-02-08 03:03:54.112108 | orchestrator | changed: [testbed-node-3]
2026-02-08 03:03:54.112116 | orchestrator | changed: [testbed-node-0]
2026-02-08 03:03:54.112125 | orchestrator | changed: [testbed-node-1]
2026-02-08 03:03:54.112133 | orchestrator | changed: [testbed-node-2]
2026-02-08 03:03:54.112142 | orchestrator | changed: [testbed-node-4]
2026-02-08 03:03:54.112150 | orchestrator |
2026-02-08 03:03:54.112159 | orchestrator | TASK [osism.services.docker : Remove python docker packages (install python bindings from pip)] ***
2026-02-08 03:03:54.112169 | orchestrator | Sunday 08 February 2026 03:03:53 +0000 (0:00:03.701) 0:06:17.350 *******
2026-02-08 03:03:54.112178 | orchestrator | skipping: [testbed-manager]
2026-02-08 03:03:54.112186 | orchestrator | skipping: [testbed-node-3]
2026-02-08 03:03:54.112195 | orchestrator | skipping: [testbed-node-4]
2026-02-08 03:03:54.112203 | orchestrator | skipping: [testbed-node-5]
2026-02-08 03:03:54.112234 | orchestrator | skipping: [testbed-node-0]
2026-02-08 03:03:54.112243 | orchestrator | skipping: [testbed-node-1]
2026-02-08 03:03:54.112252 | orchestrator | skipping: [testbed-node-2]
2026-02-08 03:03:54.112261 | orchestrator |
2026-02-08 03:03:54.112271 | orchestrator | TASK [osism.services.docker : Block installation of python docker packages (install python bindings from pip)] ***
2026-02-08 03:03:54.112280 | orchestrator | Sunday 08 February 2026 03:03:53 +0000 (0:00:00.548) 0:06:17.899 *******
2026-02-08 03:03:54.112289 | orchestrator | skipping: [testbed-manager] => (item=python3-docker)
2026-02-08 03:03:54.112297 | orchestrator | skipping: [testbed-manager] => (item=python-docker)
2026-02-08 03:03:54.112306 | orchestrator | skipping: [testbed-manager]
2026-02-08 03:03:54.112315 | orchestrator | skipping: [testbed-node-3] => (item=python3-docker)
2026-02-08 03:03:54.112324 | orchestrator | skipping: [testbed-node-3] => (item=python-docker)
2026-02-08 03:03:54.112332 | orchestrator | skipping: [testbed-node-3]
2026-02-08 03:03:54.112341 | orchestrator | skipping: [testbed-node-4] => (item=python3-docker)
2026-02-08 03:03:54.112349 | orchestrator | skipping: [testbed-node-4] => (item=python-docker)
2026-02-08 03:03:54.112358 | orchestrator | skipping: [testbed-node-4]
2026-02-08 03:03:54.112373 | orchestrator | skipping: [testbed-node-5] => (item=python3-docker)
2026-02-08 03:04:13.795965 | orchestrator | skipping: [testbed-node-5] => (item=python-docker)
2026-02-08 03:04:13.796056 | orchestrator | skipping: [testbed-node-5]
2026-02-08 03:04:13.796066 | orchestrator | skipping: [testbed-node-0] => (item=python3-docker)
2026-02-08 03:04:13.796073 | orchestrator | skipping: [testbed-node-0] => (item=python-docker)
2026-02-08 03:04:13.796080 | orchestrator | skipping: [testbed-node-0]
2026-02-08 03:04:13.796107 | orchestrator | skipping: [testbed-node-1] => (item=python3-docker)
2026-02-08 03:04:13.796114 | orchestrator | skipping: [testbed-node-1] => (item=python-docker)
2026-02-08 03:04:13.796120 | orchestrator | skipping: [testbed-node-1]
2026-02-08 03:04:13.796127 | orchestrator | skipping: [testbed-node-2] => (item=python3-docker)
2026-02-08 03:04:13.796142 | orchestrator | skipping: [testbed-node-2] => (item=python-docker)
2026-02-08 03:04:13.796148 | orchestrator | skipping: [testbed-node-2]
2026-02-08 03:04:13.796155 | orchestrator |
2026-02-08 03:04:13.796163 | orchestrator | TASK [osism.services.docker : Install python3-pip package (install
python bindings from pip)] *** 2026-02-08 03:04:13.796171 | orchestrator | Sunday 08 February 2026 03:03:54 +0000 (0:00:00.817) 0:06:18.717 ******* 2026-02-08 03:04:13.796177 | orchestrator | skipping: [testbed-manager] 2026-02-08 03:04:13.796183 | orchestrator | skipping: [testbed-node-3] 2026-02-08 03:04:13.796190 | orchestrator | skipping: [testbed-node-4] 2026-02-08 03:04:13.796196 | orchestrator | skipping: [testbed-node-5] 2026-02-08 03:04:13.796202 | orchestrator | skipping: [testbed-node-0] 2026-02-08 03:04:13.796271 | orchestrator | skipping: [testbed-node-1] 2026-02-08 03:04:13.796279 | orchestrator | skipping: [testbed-node-2] 2026-02-08 03:04:13.796286 | orchestrator | 2026-02-08 03:04:13.796292 | orchestrator | TASK [osism.services.docker : Install docker packages (install python bindings from pip)] *** 2026-02-08 03:04:13.796299 | orchestrator | Sunday 08 February 2026 03:03:54 +0000 (0:00:00.551) 0:06:19.268 ******* 2026-02-08 03:04:13.796306 | orchestrator | skipping: [testbed-manager] 2026-02-08 03:04:13.796312 | orchestrator | skipping: [testbed-node-3] 2026-02-08 03:04:13.796318 | orchestrator | skipping: [testbed-node-4] 2026-02-08 03:04:13.796324 | orchestrator | skipping: [testbed-node-5] 2026-02-08 03:04:13.796330 | orchestrator | skipping: [testbed-node-0] 2026-02-08 03:04:13.796337 | orchestrator | skipping: [testbed-node-1] 2026-02-08 03:04:13.796343 | orchestrator | skipping: [testbed-node-2] 2026-02-08 03:04:13.796349 | orchestrator | 2026-02-08 03:04:13.796355 | orchestrator | TASK [osism.services.docker : Install packages required by docker login] ******* 2026-02-08 03:04:13.796362 | orchestrator | Sunday 08 February 2026 03:03:55 +0000 (0:00:00.517) 0:06:19.786 ******* 2026-02-08 03:04:13.796368 | orchestrator | skipping: [testbed-manager] 2026-02-08 03:04:13.796374 | orchestrator | skipping: [testbed-node-3] 2026-02-08 03:04:13.796380 | orchestrator | skipping: [testbed-node-4] 2026-02-08 03:04:13.796387 | orchestrator | skipping: 
[testbed-node-5] 2026-02-08 03:04:13.796393 | orchestrator | skipping: [testbed-node-0] 2026-02-08 03:04:13.796399 | orchestrator | skipping: [testbed-node-1] 2026-02-08 03:04:13.796405 | orchestrator | skipping: [testbed-node-2] 2026-02-08 03:04:13.796412 | orchestrator | 2026-02-08 03:04:13.796418 | orchestrator | TASK [osism.services.docker : Ensure that some packages are not installed] ***** 2026-02-08 03:04:13.796424 | orchestrator | Sunday 08 February 2026 03:03:56 +0000 (0:00:00.547) 0:06:20.334 ******* 2026-02-08 03:04:13.796431 | orchestrator | ok: [testbed-manager] 2026-02-08 03:04:13.796437 | orchestrator | ok: [testbed-node-3] 2026-02-08 03:04:13.796444 | orchestrator | ok: [testbed-node-4] 2026-02-08 03:04:13.796450 | orchestrator | ok: [testbed-node-5] 2026-02-08 03:04:13.796456 | orchestrator | ok: [testbed-node-1] 2026-02-08 03:04:13.796463 | orchestrator | ok: [testbed-node-0] 2026-02-08 03:04:13.796469 | orchestrator | ok: [testbed-node-2] 2026-02-08 03:04:13.796475 | orchestrator | 2026-02-08 03:04:13.796482 | orchestrator | TASK [osism.services.docker : Include config tasks] **************************** 2026-02-08 03:04:13.796488 | orchestrator | Sunday 08 February 2026 03:03:57 +0000 (0:00:01.907) 0:06:22.241 ******* 2026-02-08 03:04:13.796496 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/config.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-08 03:04:13.796505 | orchestrator | 2026-02-08 03:04:13.796513 | orchestrator | TASK [osism.services.docker : Create plugins directory] ************************ 2026-02-08 03:04:13.796520 | orchestrator | Sunday 08 February 2026 03:03:58 +0000 (0:00:00.970) 0:06:23.212 ******* 2026-02-08 03:04:13.796538 | orchestrator | ok: [testbed-manager] 2026-02-08 03:04:13.796546 | orchestrator | changed: [testbed-node-3] 2026-02-08 03:04:13.796554 | orchestrator | changed: 
[testbed-node-4] 2026-02-08 03:04:13.796562 | orchestrator | changed: [testbed-node-5] 2026-02-08 03:04:13.796569 | orchestrator | changed: [testbed-node-0] 2026-02-08 03:04:13.796577 | orchestrator | changed: [testbed-node-1] 2026-02-08 03:04:13.796584 | orchestrator | changed: [testbed-node-2] 2026-02-08 03:04:13.796592 | orchestrator | 2026-02-08 03:04:13.796600 | orchestrator | TASK [osism.services.docker : Create systemd overlay directory] **************** 2026-02-08 03:04:13.796607 | orchestrator | Sunday 08 February 2026 03:03:59 +0000 (0:00:00.863) 0:06:24.076 ******* 2026-02-08 03:04:13.796615 | orchestrator | ok: [testbed-manager] 2026-02-08 03:04:13.796622 | orchestrator | changed: [testbed-node-3] 2026-02-08 03:04:13.796631 | orchestrator | changed: [testbed-node-4] 2026-02-08 03:04:13.796642 | orchestrator | changed: [testbed-node-5] 2026-02-08 03:04:13.796653 | orchestrator | changed: [testbed-node-0] 2026-02-08 03:04:13.796662 | orchestrator | changed: [testbed-node-1] 2026-02-08 03:04:13.796672 | orchestrator | changed: [testbed-node-2] 2026-02-08 03:04:13.796683 | orchestrator | 2026-02-08 03:04:13.796692 | orchestrator | TASK [osism.services.docker : Copy systemd overlay file] *********************** 2026-02-08 03:04:13.796704 | orchestrator | Sunday 08 February 2026 03:04:00 +0000 (0:00:00.876) 0:06:24.952 ******* 2026-02-08 03:04:13.796714 | orchestrator | ok: [testbed-manager] 2026-02-08 03:04:13.796725 | orchestrator | changed: [testbed-node-3] 2026-02-08 03:04:13.796735 | orchestrator | changed: [testbed-node-4] 2026-02-08 03:04:13.796746 | orchestrator | changed: [testbed-node-5] 2026-02-08 03:04:13.796752 | orchestrator | changed: [testbed-node-0] 2026-02-08 03:04:13.796759 | orchestrator | changed: [testbed-node-1] 2026-02-08 03:04:13.796765 | orchestrator | changed: [testbed-node-2] 2026-02-08 03:04:13.796771 | orchestrator | 2026-02-08 03:04:13.796777 | orchestrator | TASK [osism.services.docker : Reload systemd daemon if systemd overlay 
file is changed] *** 2026-02-08 03:04:13.796798 | orchestrator | Sunday 08 February 2026 03:04:02 +0000 (0:00:01.562) 0:06:26.515 ******* 2026-02-08 03:04:13.796808 | orchestrator | skipping: [testbed-manager] 2026-02-08 03:04:13.796824 | orchestrator | ok: [testbed-node-3] 2026-02-08 03:04:13.796836 | orchestrator | ok: [testbed-node-4] 2026-02-08 03:04:13.796846 | orchestrator | ok: [testbed-node-5] 2026-02-08 03:04:13.796856 | orchestrator | ok: [testbed-node-0] 2026-02-08 03:04:13.796865 | orchestrator | ok: [testbed-node-1] 2026-02-08 03:04:13.796874 | orchestrator | ok: [testbed-node-2] 2026-02-08 03:04:13.796884 | orchestrator | 2026-02-08 03:04:13.796893 | orchestrator | TASK [osism.services.docker : Copy limits configuration file] ****************** 2026-02-08 03:04:13.796917 | orchestrator | Sunday 08 February 2026 03:04:03 +0000 (0:00:01.403) 0:06:27.918 ******* 2026-02-08 03:04:13.796927 | orchestrator | ok: [testbed-manager] 2026-02-08 03:04:13.796946 | orchestrator | changed: [testbed-node-3] 2026-02-08 03:04:13.796956 | orchestrator | changed: [testbed-node-4] 2026-02-08 03:04:13.796967 | orchestrator | changed: [testbed-node-5] 2026-02-08 03:04:13.796977 | orchestrator | changed: [testbed-node-0] 2026-02-08 03:04:13.796988 | orchestrator | changed: [testbed-node-1] 2026-02-08 03:04:13.796996 | orchestrator | changed: [testbed-node-2] 2026-02-08 03:04:13.797003 | orchestrator | 2026-02-08 03:04:13.797009 | orchestrator | TASK [osism.services.docker : Copy daemon.json configuration file] ************* 2026-02-08 03:04:13.797015 | orchestrator | Sunday 08 February 2026 03:04:04 +0000 (0:00:01.331) 0:06:29.250 ******* 2026-02-08 03:04:13.797022 | orchestrator | changed: [testbed-manager] 2026-02-08 03:04:13.797028 | orchestrator | changed: [testbed-node-3] 2026-02-08 03:04:13.797034 | orchestrator | changed: [testbed-node-4] 2026-02-08 03:04:13.797040 | orchestrator | changed: [testbed-node-5] 2026-02-08 03:04:13.797046 | orchestrator | changed: 
[testbed-node-0] 2026-02-08 03:04:13.797052 | orchestrator | changed: [testbed-node-1] 2026-02-08 03:04:13.797058 | orchestrator | changed: [testbed-node-2] 2026-02-08 03:04:13.797064 | orchestrator | 2026-02-08 03:04:13.797078 | orchestrator | TASK [osism.services.docker : Include service tasks] *************************** 2026-02-08 03:04:13.797084 | orchestrator | Sunday 08 February 2026 03:04:06 +0000 (0:00:01.385) 0:06:30.636 ******* 2026-02-08 03:04:13.797091 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/service.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-08 03:04:13.797098 | orchestrator | 2026-02-08 03:04:13.797105 | orchestrator | TASK [osism.services.docker : Reload systemd daemon] *************************** 2026-02-08 03:04:13.797111 | orchestrator | Sunday 08 February 2026 03:04:07 +0000 (0:00:01.092) 0:06:31.728 ******* 2026-02-08 03:04:13.797117 | orchestrator | ok: [testbed-manager] 2026-02-08 03:04:13.797123 | orchestrator | ok: [testbed-node-3] 2026-02-08 03:04:13.797129 | orchestrator | ok: [testbed-node-4] 2026-02-08 03:04:13.797136 | orchestrator | ok: [testbed-node-5] 2026-02-08 03:04:13.797142 | orchestrator | ok: [testbed-node-1] 2026-02-08 03:04:13.797148 | orchestrator | ok: [testbed-node-0] 2026-02-08 03:04:13.797154 | orchestrator | ok: [testbed-node-2] 2026-02-08 03:04:13.797160 | orchestrator | 2026-02-08 03:04:13.797166 | orchestrator | TASK [osism.services.docker : Manage service] ********************************** 2026-02-08 03:04:13.797173 | orchestrator | Sunday 08 February 2026 03:04:08 +0000 (0:00:01.348) 0:06:33.077 ******* 2026-02-08 03:04:13.797179 | orchestrator | ok: [testbed-node-3] 2026-02-08 03:04:13.797185 | orchestrator | ok: [testbed-manager] 2026-02-08 03:04:13.797191 | orchestrator | ok: [testbed-node-4] 2026-02-08 03:04:13.797197 | orchestrator | ok: [testbed-node-5] 
2026-02-08 03:04:13.797203 | orchestrator | ok: [testbed-node-0] 2026-02-08 03:04:13.797243 | orchestrator | ok: [testbed-node-1] 2026-02-08 03:04:13.797250 | orchestrator | ok: [testbed-node-2] 2026-02-08 03:04:13.797257 | orchestrator | 2026-02-08 03:04:13.797263 | orchestrator | TASK [osism.services.docker : Manage docker socket service] ******************** 2026-02-08 03:04:13.797269 | orchestrator | Sunday 08 February 2026 03:04:09 +0000 (0:00:01.178) 0:06:34.256 ******* 2026-02-08 03:04:13.797276 | orchestrator | ok: [testbed-manager] 2026-02-08 03:04:13.797282 | orchestrator | ok: [testbed-node-3] 2026-02-08 03:04:13.797288 | orchestrator | ok: [testbed-node-4] 2026-02-08 03:04:13.797294 | orchestrator | ok: [testbed-node-5] 2026-02-08 03:04:13.797300 | orchestrator | ok: [testbed-node-0] 2026-02-08 03:04:13.797306 | orchestrator | ok: [testbed-node-1] 2026-02-08 03:04:13.797312 | orchestrator | ok: [testbed-node-2] 2026-02-08 03:04:13.797318 | orchestrator | 2026-02-08 03:04:13.797325 | orchestrator | TASK [osism.services.docker : Manage containerd service] *********************** 2026-02-08 03:04:13.797331 | orchestrator | Sunday 08 February 2026 03:04:11 +0000 (0:00:01.144) 0:06:35.401 ******* 2026-02-08 03:04:13.797337 | orchestrator | ok: [testbed-manager] 2026-02-08 03:04:13.797343 | orchestrator | ok: [testbed-node-3] 2026-02-08 03:04:13.797349 | orchestrator | ok: [testbed-node-4] 2026-02-08 03:04:13.797355 | orchestrator | ok: [testbed-node-5] 2026-02-08 03:04:13.797361 | orchestrator | ok: [testbed-node-0] 2026-02-08 03:04:13.797367 | orchestrator | ok: [testbed-node-1] 2026-02-08 03:04:13.797373 | orchestrator | ok: [testbed-node-2] 2026-02-08 03:04:13.797379 | orchestrator | 2026-02-08 03:04:13.797385 | orchestrator | TASK [osism.services.docker : Include bootstrap tasks] ************************* 2026-02-08 03:04:13.797392 | orchestrator | Sunday 08 February 2026 03:04:12 +0000 (0:00:01.393) 0:06:36.794 ******* 2026-02-08 03:04:13.797398 | 
orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/bootstrap.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-08 03:04:13.797404 | orchestrator | 2026-02-08 03:04:13.797410 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-02-08 03:04:13.797416 | orchestrator | Sunday 08 February 2026 03:04:13 +0000 (0:00:00.945) 0:06:37.740 ******* 2026-02-08 03:04:13.797422 | orchestrator | 2026-02-08 03:04:13.797429 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-02-08 03:04:13.797440 | orchestrator | Sunday 08 February 2026 03:04:13 +0000 (0:00:00.043) 0:06:37.784 ******* 2026-02-08 03:04:13.797446 | orchestrator | 2026-02-08 03:04:13.797452 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-02-08 03:04:13.797458 | orchestrator | Sunday 08 February 2026 03:04:13 +0000 (0:00:00.059) 0:06:37.843 ******* 2026-02-08 03:04:13.797464 | orchestrator | 2026-02-08 03:04:13.797471 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-02-08 03:04:13.797484 | orchestrator | Sunday 08 February 2026 03:04:13 +0000 (0:00:00.044) 0:06:37.888 ******* 2026-02-08 03:04:39.725103 | orchestrator | 2026-02-08 03:04:39.725437 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-02-08 03:04:39.725470 | orchestrator | Sunday 08 February 2026 03:04:13 +0000 (0:00:00.049) 0:06:37.937 ******* 2026-02-08 03:04:39.725481 | orchestrator | 2026-02-08 03:04:39.725492 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-02-08 03:04:39.725502 | orchestrator | Sunday 08 February 2026 03:04:13 +0000 (0:00:00.049) 0:06:37.987 ******* 2026-02-08 03:04:39.725511 | orchestrator | 
2026-02-08 03:04:39.725521 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-02-08 03:04:39.725531 | orchestrator | Sunday 08 February 2026 03:04:13 +0000 (0:00:00.045) 0:06:38.033 ******* 2026-02-08 03:04:39.725541 | orchestrator | 2026-02-08 03:04:39.725551 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2026-02-08 03:04:39.725566 | orchestrator | Sunday 08 February 2026 03:04:13 +0000 (0:00:00.056) 0:06:38.089 ******* 2026-02-08 03:04:39.725581 | orchestrator | ok: [testbed-node-0] 2026-02-08 03:04:39.725611 | orchestrator | ok: [testbed-node-1] 2026-02-08 03:04:39.725627 | orchestrator | ok: [testbed-node-2] 2026-02-08 03:04:39.725643 | orchestrator | 2026-02-08 03:04:39.725660 | orchestrator | RUNNING HANDLER [osism.services.rsyslog : Restart rsyslog service] ************* 2026-02-08 03:04:39.725676 | orchestrator | Sunday 08 February 2026 03:04:14 +0000 (0:00:01.141) 0:06:39.230 ******* 2026-02-08 03:04:39.725692 | orchestrator | changed: [testbed-manager] 2026-02-08 03:04:39.725707 | orchestrator | changed: [testbed-node-3] 2026-02-08 03:04:39.725722 | orchestrator | changed: [testbed-node-4] 2026-02-08 03:04:39.725739 | orchestrator | changed: [testbed-node-5] 2026-02-08 03:04:39.725755 | orchestrator | changed: [testbed-node-0] 2026-02-08 03:04:39.725772 | orchestrator | changed: [testbed-node-1] 2026-02-08 03:04:39.725789 | orchestrator | changed: [testbed-node-2] 2026-02-08 03:04:39.725802 | orchestrator | 2026-02-08 03:04:39.725813 | orchestrator | RUNNING HANDLER [osism.services.rsyslog : Restart logrotate service] *********** 2026-02-08 03:04:39.725825 | orchestrator | Sunday 08 February 2026 03:04:16 +0000 (0:00:01.551) 0:06:40.782 ******* 2026-02-08 03:04:39.725837 | orchestrator | changed: [testbed-node-3] 2026-02-08 03:04:39.725850 | orchestrator | changed: [testbed-manager] 2026-02-08 03:04:39.725862 | orchestrator | changed: [testbed-node-4] 
2026-02-08 03:04:39.725873 | orchestrator | changed: [testbed-node-5] 2026-02-08 03:04:39.725884 | orchestrator | changed: [testbed-node-0] 2026-02-08 03:04:39.725896 | orchestrator | changed: [testbed-node-1] 2026-02-08 03:04:39.725907 | orchestrator | changed: [testbed-node-2] 2026-02-08 03:04:39.725918 | orchestrator | 2026-02-08 03:04:39.725930 | orchestrator | RUNNING HANDLER [osism.services.docker : Restart docker service] *************** 2026-02-08 03:04:39.725941 | orchestrator | Sunday 08 February 2026 03:04:17 +0000 (0:00:01.176) 0:06:41.958 ******* 2026-02-08 03:04:39.725953 | orchestrator | skipping: [testbed-manager] 2026-02-08 03:04:39.725965 | orchestrator | changed: [testbed-node-3] 2026-02-08 03:04:39.725974 | orchestrator | changed: [testbed-node-4] 2026-02-08 03:04:39.725984 | orchestrator | changed: [testbed-node-5] 2026-02-08 03:04:39.725994 | orchestrator | changed: [testbed-node-0] 2026-02-08 03:04:39.726003 | orchestrator | changed: [testbed-node-2] 2026-02-08 03:04:39.726013 | orchestrator | changed: [testbed-node-1] 2026-02-08 03:04:39.726081 | orchestrator | 2026-02-08 03:04:39.726092 | orchestrator | RUNNING HANDLER [osism.services.docker : Wait after docker service restart] **** 2026-02-08 03:04:39.726102 | orchestrator | Sunday 08 February 2026 03:04:20 +0000 (0:00:02.469) 0:06:44.428 ******* 2026-02-08 03:04:39.726152 | orchestrator | skipping: [testbed-node-3] 2026-02-08 03:04:39.726162 | orchestrator | 2026-02-08 03:04:39.726173 | orchestrator | TASK [osism.services.docker : Add user to docker group] ************************ 2026-02-08 03:04:39.726183 | orchestrator | Sunday 08 February 2026 03:04:20 +0000 (0:00:00.132) 0:06:44.561 ******* 2026-02-08 03:04:39.726192 | orchestrator | ok: [testbed-manager] 2026-02-08 03:04:39.726202 | orchestrator | changed: [testbed-node-3] 2026-02-08 03:04:39.726243 | orchestrator | changed: [testbed-node-4] 2026-02-08 03:04:39.726260 | orchestrator | changed: [testbed-node-5] 2026-02-08 
03:04:39.726273 | orchestrator | changed: [testbed-node-0] 2026-02-08 03:04:39.726283 | orchestrator | changed: [testbed-node-1] 2026-02-08 03:04:39.726292 | orchestrator | changed: [testbed-node-2] 2026-02-08 03:04:39.726302 | orchestrator | 2026-02-08 03:04:39.726311 | orchestrator | TASK [osism.services.docker : Log into private registry and force re-authorization] *** 2026-02-08 03:04:39.726323 | orchestrator | Sunday 08 February 2026 03:04:21 +0000 (0:00:01.047) 0:06:45.609 ******* 2026-02-08 03:04:39.726332 | orchestrator | skipping: [testbed-manager] 2026-02-08 03:04:39.726342 | orchestrator | skipping: [testbed-node-3] 2026-02-08 03:04:39.726351 | orchestrator | skipping: [testbed-node-4] 2026-02-08 03:04:39.726368 | orchestrator | skipping: [testbed-node-5] 2026-02-08 03:04:39.726383 | orchestrator | skipping: [testbed-node-0] 2026-02-08 03:04:39.726397 | orchestrator | skipping: [testbed-node-1] 2026-02-08 03:04:39.726412 | orchestrator | skipping: [testbed-node-2] 2026-02-08 03:04:39.726428 | orchestrator | 2026-02-08 03:04:39.726445 | orchestrator | TASK [osism.services.docker : Include facts tasks] ***************************** 2026-02-08 03:04:39.726462 | orchestrator | Sunday 08 February 2026 03:04:21 +0000 (0:00:00.593) 0:06:46.202 ******* 2026-02-08 03:04:39.726480 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/facts.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-08 03:04:39.726497 | orchestrator | 2026-02-08 03:04:39.726507 | orchestrator | TASK [osism.services.docker : Create facts directory] ************************** 2026-02-08 03:04:39.726517 | orchestrator | Sunday 08 February 2026 03:04:23 +0000 (0:00:01.178) 0:06:47.380 ******* 2026-02-08 03:04:39.726526 | orchestrator | ok: [testbed-manager] 2026-02-08 03:04:39.726536 | orchestrator | ok: [testbed-node-3] 2026-02-08 03:04:39.726545 | orchestrator 
| ok: [testbed-node-4] 2026-02-08 03:04:39.726555 | orchestrator | ok: [testbed-node-5] 2026-02-08 03:04:39.726564 | orchestrator | ok: [testbed-node-0] 2026-02-08 03:04:39.726574 | orchestrator | ok: [testbed-node-1] 2026-02-08 03:04:39.726584 | orchestrator | ok: [testbed-node-2] 2026-02-08 03:04:39.726593 | orchestrator | 2026-02-08 03:04:39.726603 | orchestrator | TASK [osism.services.docker : Copy docker fact files] ************************** 2026-02-08 03:04:39.726613 | orchestrator | Sunday 08 February 2026 03:04:23 +0000 (0:00:00.859) 0:06:48.240 ******* 2026-02-08 03:04:39.726622 | orchestrator | ok: [testbed-manager] => (item=docker_containers) 2026-02-08 03:04:39.726655 | orchestrator | changed: [testbed-node-3] => (item=docker_containers) 2026-02-08 03:04:39.726666 | orchestrator | changed: [testbed-node-4] => (item=docker_containers) 2026-02-08 03:04:39.726676 | orchestrator | changed: [testbed-node-5] => (item=docker_containers) 2026-02-08 03:04:39.726685 | orchestrator | changed: [testbed-node-0] => (item=docker_containers) 2026-02-08 03:04:39.726695 | orchestrator | changed: [testbed-node-1] => (item=docker_containers) 2026-02-08 03:04:39.726704 | orchestrator | changed: [testbed-node-2] => (item=docker_containers) 2026-02-08 03:04:39.726714 | orchestrator | ok: [testbed-manager] => (item=docker_images) 2026-02-08 03:04:39.726724 | orchestrator | changed: [testbed-node-3] => (item=docker_images) 2026-02-08 03:04:39.726733 | orchestrator | changed: [testbed-node-4] => (item=docker_images) 2026-02-08 03:04:39.726742 | orchestrator | changed: [testbed-node-5] => (item=docker_images) 2026-02-08 03:04:39.726752 | orchestrator | changed: [testbed-node-0] => (item=docker_images) 2026-02-08 03:04:39.726771 | orchestrator | changed: [testbed-node-1] => (item=docker_images) 2026-02-08 03:04:39.726781 | orchestrator | changed: [testbed-node-2] => (item=docker_images) 2026-02-08 03:04:39.726790 | orchestrator | 2026-02-08 03:04:39.726822 | orchestrator | TASK 
[osism.commons.docker_compose : This install type is not supported] ******* 2026-02-08 03:04:39.726833 | orchestrator | Sunday 08 February 2026 03:04:26 +0000 (0:00:02.415) 0:06:50.656 ******* 2026-02-08 03:04:39.726843 | orchestrator | skipping: [testbed-manager] 2026-02-08 03:04:39.726852 | orchestrator | skipping: [testbed-node-3] 2026-02-08 03:04:39.726862 | orchestrator | skipping: [testbed-node-4] 2026-02-08 03:04:39.726871 | orchestrator | skipping: [testbed-node-5] 2026-02-08 03:04:39.726881 | orchestrator | skipping: [testbed-node-0] 2026-02-08 03:04:39.726890 | orchestrator | skipping: [testbed-node-1] 2026-02-08 03:04:39.726900 | orchestrator | skipping: [testbed-node-2] 2026-02-08 03:04:39.726909 | orchestrator | 2026-02-08 03:04:39.726919 | orchestrator | TASK [osism.commons.docker_compose : Include distribution specific install tasks] *** 2026-02-08 03:04:39.726932 | orchestrator | Sunday 08 February 2026 03:04:27 +0000 (0:00:00.869) 0:06:51.525 ******* 2026-02-08 03:04:39.726955 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/docker_compose/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-08 03:04:39.726983 | orchestrator | 2026-02-08 03:04:39.727000 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose apt preferences file] *** 2026-02-08 03:04:39.727015 | orchestrator | Sunday 08 February 2026 03:04:28 +0000 (0:00:00.907) 0:06:52.432 ******* 2026-02-08 03:04:39.727031 | orchestrator | ok: [testbed-manager] 2026-02-08 03:04:39.727046 | orchestrator | ok: [testbed-node-3] 2026-02-08 03:04:39.727060 | orchestrator | ok: [testbed-node-4] 2026-02-08 03:04:39.727076 | orchestrator | ok: [testbed-node-5] 2026-02-08 03:04:39.727091 | orchestrator | ok: [testbed-node-0] 2026-02-08 03:04:39.727107 | orchestrator | ok: [testbed-node-1] 2026-02-08 03:04:39.727124 | orchestrator | ok: 
[testbed-node-2] 2026-02-08 03:04:39.727141 | orchestrator | 2026-02-08 03:04:39.727158 | orchestrator | TASK [osism.commons.docker_compose : Get checksum of docker-compose file] ****** 2026-02-08 03:04:39.727176 | orchestrator | Sunday 08 February 2026 03:04:29 +0000 (0:00:00.906) 0:06:53.339 ******* 2026-02-08 03:04:39.727203 | orchestrator | ok: [testbed-manager] 2026-02-08 03:04:39.727289 | orchestrator | ok: [testbed-node-3] 2026-02-08 03:04:39.727306 | orchestrator | ok: [testbed-node-4] 2026-02-08 03:04:39.727323 | orchestrator | ok: [testbed-node-5] 2026-02-08 03:04:39.727338 | orchestrator | ok: [testbed-node-0] 2026-02-08 03:04:39.727353 | orchestrator | ok: [testbed-node-1] 2026-02-08 03:04:39.727366 | orchestrator | ok: [testbed-node-2] 2026-02-08 03:04:39.727380 | orchestrator | 2026-02-08 03:04:39.727394 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose binary] ************* 2026-02-08 03:04:39.727407 | orchestrator | Sunday 08 February 2026 03:04:30 +0000 (0:00:01.130) 0:06:54.469 ******* 2026-02-08 03:04:39.727421 | orchestrator | skipping: [testbed-manager] 2026-02-08 03:04:39.727434 | orchestrator | skipping: [testbed-node-3] 2026-02-08 03:04:39.727447 | orchestrator | skipping: [testbed-node-4] 2026-02-08 03:04:39.727460 | orchestrator | skipping: [testbed-node-5] 2026-02-08 03:04:39.727473 | orchestrator | skipping: [testbed-node-0] 2026-02-08 03:04:39.727486 | orchestrator | skipping: [testbed-node-1] 2026-02-08 03:04:39.727500 | orchestrator | skipping: [testbed-node-2] 2026-02-08 03:04:39.727514 | orchestrator | 2026-02-08 03:04:39.727526 | orchestrator | TASK [osism.commons.docker_compose : Uninstall docker-compose package] ********* 2026-02-08 03:04:39.727539 | orchestrator | Sunday 08 February 2026 03:04:30 +0000 (0:00:00.550) 0:06:55.020 ******* 2026-02-08 03:04:39.727553 | orchestrator | ok: [testbed-manager] 2026-02-08 03:04:39.727567 | orchestrator | ok: [testbed-node-3] 2026-02-08 03:04:39.727580 | 
orchestrator | ok: [testbed-node-4]
2026-02-08 03:04:39.727592 | orchestrator | ok: [testbed-node-5]
2026-02-08 03:04:39.727600 | orchestrator | ok: [testbed-node-0]
2026-02-08 03:04:39.727617 | orchestrator | ok: [testbed-node-1]
2026-02-08 03:04:39.727625 | orchestrator | ok: [testbed-node-2]
2026-02-08 03:04:39.727633 | orchestrator |
2026-02-08 03:04:39.727641 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose script] ***************
2026-02-08 03:04:39.727649 | orchestrator | Sunday 08 February 2026 03:04:32 +0000 (0:00:01.444) 0:06:56.464 *******
2026-02-08 03:04:39.727657 | orchestrator | skipping: [testbed-manager]
2026-02-08 03:04:39.727665 | orchestrator | skipping: [testbed-node-3]
2026-02-08 03:04:39.727673 | orchestrator | skipping: [testbed-node-4]
2026-02-08 03:04:39.727680 | orchestrator | skipping: [testbed-node-5]
2026-02-08 03:04:39.727688 | orchestrator | skipping: [testbed-node-0]
2026-02-08 03:04:39.727696 | orchestrator | skipping: [testbed-node-1]
2026-02-08 03:04:39.727703 | orchestrator | skipping: [testbed-node-2]
2026-02-08 03:04:39.727711 | orchestrator |
2026-02-08 03:04:39.727719 | orchestrator | TASK [osism.commons.docker_compose : Install docker-compose-plugin package] ****
2026-02-08 03:04:39.727727 | orchestrator | Sunday 08 February 2026 03:04:32 +0000 (0:00:00.534) 0:06:56.999 *******
2026-02-08 03:04:39.727735 | orchestrator | ok: [testbed-manager]
2026-02-08 03:04:39.727743 | orchestrator | changed: [testbed-node-3]
2026-02-08 03:04:39.727751 | orchestrator | changed: [testbed-node-4]
2026-02-08 03:04:39.727759 | orchestrator | changed: [testbed-node-0]
2026-02-08 03:04:39.727767 | orchestrator | changed: [testbed-node-5]
2026-02-08 03:04:39.727774 | orchestrator | changed: [testbed-node-1]
2026-02-08 03:04:39.727793 | orchestrator | changed: [testbed-node-2]
2026-02-08 03:05:11.597965 | orchestrator |
2026-02-08 03:05:11.598149 | orchestrator | TASK [osism.commons.docker_compose : Copy osism.target systemd file] ***********
2026-02-08 03:05:11.598169 | orchestrator | Sunday 08 February 2026 03:04:39 +0000 (0:00:07.024) 0:07:04.023 *******
2026-02-08 03:05:11.598181 | orchestrator | ok: [testbed-manager]
2026-02-08 03:05:11.598194 | orchestrator | changed: [testbed-node-3]
2026-02-08 03:05:11.598206 | orchestrator | changed: [testbed-node-4]
2026-02-08 03:05:11.598265 | orchestrator | changed: [testbed-node-5]
2026-02-08 03:05:11.598276 | orchestrator | changed: [testbed-node-1]
2026-02-08 03:05:11.598287 | orchestrator | changed: [testbed-node-0]
2026-02-08 03:05:11.598298 | orchestrator | changed: [testbed-node-2]
2026-02-08 03:05:11.598309 | orchestrator |
2026-02-08 03:05:11.598320 | orchestrator | TASK [osism.commons.docker_compose : Enable osism.target] **********************
2026-02-08 03:05:11.598332 | orchestrator | Sunday 08 February 2026 03:04:41 +0000 (0:00:01.583) 0:07:05.607 *******
2026-02-08 03:05:11.598343 | orchestrator | ok: [testbed-manager]
2026-02-08 03:05:11.598354 | orchestrator | changed: [testbed-node-3]
2026-02-08 03:05:11.598365 | orchestrator | changed: [testbed-node-4]
2026-02-08 03:05:11.598375 | orchestrator | changed: [testbed-node-5]
2026-02-08 03:05:11.598386 | orchestrator | changed: [testbed-node-0]
2026-02-08 03:05:11.598396 | orchestrator | changed: [testbed-node-1]
2026-02-08 03:05:11.598407 | orchestrator | changed: [testbed-node-2]
2026-02-08 03:05:11.598418 | orchestrator |
2026-02-08 03:05:11.598429 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose systemd unit file] ****
2026-02-08 03:05:11.598440 | orchestrator | Sunday 08 February 2026 03:04:42 +0000 (0:00:01.668) 0:07:07.276 *******
2026-02-08 03:05:11.598451 | orchestrator | ok: [testbed-manager]
2026-02-08 03:05:11.598461 | orchestrator | changed: [testbed-node-3]
2026-02-08 03:05:11.598472 | orchestrator | changed: [testbed-node-4]
2026-02-08 03:05:11.598483 | orchestrator | changed: [testbed-node-5]
2026-02-08 03:05:11.598494 | orchestrator | changed: [testbed-node-0]
2026-02-08 03:05:11.598508 | orchestrator | changed: [testbed-node-1]
2026-02-08 03:05:11.598521 | orchestrator | changed: [testbed-node-2]
2026-02-08 03:05:11.598534 | orchestrator |
2026-02-08 03:05:11.598547 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] *********************
2026-02-08 03:05:11.598560 | orchestrator | Sunday 08 February 2026 03:04:44 +0000 (0:00:01.771) 0:07:09.047 *******
2026-02-08 03:05:11.598573 | orchestrator | ok: [testbed-manager]
2026-02-08 03:05:11.598586 | orchestrator | ok: [testbed-node-3]
2026-02-08 03:05:11.598599 | orchestrator | ok: [testbed-node-4]
2026-02-08 03:05:11.598637 | orchestrator | ok: [testbed-node-5]
2026-02-08 03:05:11.598650 | orchestrator | ok: [testbed-node-0]
2026-02-08 03:05:11.598663 | orchestrator | ok: [testbed-node-1]
2026-02-08 03:05:11.598677 | orchestrator | ok: [testbed-node-2]
2026-02-08 03:05:11.598690 | orchestrator |
2026-02-08 03:05:11.598703 | orchestrator | TASK [osism.commons.facts : Copy fact files] ***********************************
2026-02-08 03:05:11.598717 | orchestrator | Sunday 08 February 2026 03:04:45 +0000 (0:00:00.856) 0:07:09.904 *******
2026-02-08 03:05:11.598729 | orchestrator | skipping: [testbed-manager]
2026-02-08 03:05:11.598743 | orchestrator | skipping: [testbed-node-3]
2026-02-08 03:05:11.598756 | orchestrator | skipping: [testbed-node-4]
2026-02-08 03:05:11.598768 | orchestrator | skipping: [testbed-node-5]
2026-02-08 03:05:11.598781 | orchestrator | skipping: [testbed-node-0]
2026-02-08 03:05:11.598794 | orchestrator | skipping: [testbed-node-1]
2026-02-08 03:05:11.598806 | orchestrator | skipping: [testbed-node-2]
2026-02-08 03:05:11.598818 | orchestrator |
2026-02-08 03:05:11.598832 | orchestrator | TASK [osism.services.chrony : Check minimum and maximum number of servers] *****
2026-02-08 03:05:11.598845 | orchestrator | Sunday 08 February 2026 03:04:46 +0000 (0:00:01.065) 0:07:10.970 *******
2026-02-08 03:05:11.598856 | orchestrator | skipping: [testbed-manager]
2026-02-08 03:05:11.598867 | orchestrator | skipping: [testbed-node-3]
2026-02-08 03:05:11.598877 | orchestrator | skipping: [testbed-node-4]
2026-02-08 03:05:11.598888 | orchestrator | skipping: [testbed-node-5]
2026-02-08 03:05:11.598899 | orchestrator | skipping: [testbed-node-0]
2026-02-08 03:05:11.598909 | orchestrator | skipping: [testbed-node-1]
2026-02-08 03:05:11.598920 | orchestrator | skipping: [testbed-node-2]
2026-02-08 03:05:11.598930 | orchestrator |
2026-02-08 03:05:11.598941 | orchestrator | TASK [osism.services.chrony : Gather variables for each operating system] ******
2026-02-08 03:05:11.598952 | orchestrator | Sunday 08 February 2026 03:04:47 +0000 (0:00:00.532) 0:07:11.503 *******
2026-02-08 03:05:11.598963 | orchestrator | ok: [testbed-manager]
2026-02-08 03:05:11.598973 | orchestrator | ok: [testbed-node-3]
2026-02-08 03:05:11.598984 | orchestrator | ok: [testbed-node-4]
2026-02-08 03:05:11.598995 | orchestrator | ok: [testbed-node-5]
2026-02-08 03:05:11.599005 | orchestrator | ok: [testbed-node-0]
2026-02-08 03:05:11.599016 | orchestrator | ok: [testbed-node-1]
2026-02-08 03:05:11.599026 | orchestrator | ok: [testbed-node-2]
2026-02-08 03:05:11.599037 | orchestrator |
2026-02-08 03:05:11.599048 | orchestrator | TASK [osism.services.chrony : Set chrony_conf_file variable to default value] ***
2026-02-08 03:05:11.599062 | orchestrator | Sunday 08 February 2026 03:04:47 +0000 (0:00:00.536) 0:07:12.039 *******
2026-02-08 03:05:11.599080 | orchestrator | ok: [testbed-manager]
2026-02-08 03:05:11.599097 | orchestrator | ok: [testbed-node-3]
2026-02-08 03:05:11.599115 | orchestrator | ok: [testbed-node-4]
2026-02-08 03:05:11.599133 | orchestrator | ok: [testbed-node-5]
2026-02-08 03:05:11.599151 | orchestrator | ok: [testbed-node-0]
2026-02-08 03:05:11.599188 | orchestrator | ok: [testbed-node-1]
2026-02-08 03:05:11.599232 | orchestrator | ok: [testbed-node-2]
2026-02-08 03:05:11.599252 | orchestrator |
2026-02-08 03:05:11.599272 | orchestrator | TASK [osism.services.chrony : Set chrony_key_file variable to default value] ***
2026-02-08 03:05:11.599291 | orchestrator | Sunday 08 February 2026 03:04:48 +0000 (0:00:00.552) 0:07:12.592 *******
2026-02-08 03:05:11.599310 | orchestrator | ok: [testbed-manager]
2026-02-08 03:05:11.599392 | orchestrator | ok: [testbed-node-3]
2026-02-08 03:05:11.599406 | orchestrator | ok: [testbed-node-4]
2026-02-08 03:05:11.599417 | orchestrator | ok: [testbed-node-5]
2026-02-08 03:05:11.599428 | orchestrator | ok: [testbed-node-0]
2026-02-08 03:05:11.599439 | orchestrator | ok: [testbed-node-1]
2026-02-08 03:05:11.599449 | orchestrator | ok: [testbed-node-2]
2026-02-08 03:05:11.599460 | orchestrator |
2026-02-08 03:05:11.599471 | orchestrator | TASK [osism.services.chrony : Populate service facts] **************************
2026-02-08 03:05:11.599482 | orchestrator | Sunday 08 February 2026 03:04:49 +0000 (0:00:00.770) 0:07:13.362 *******
2026-02-08 03:05:11.599493 | orchestrator | ok: [testbed-manager]
2026-02-08 03:05:11.599504 | orchestrator | ok: [testbed-node-3]
2026-02-08 03:05:11.599527 | orchestrator | ok: [testbed-node-4]
2026-02-08 03:05:11.599538 | orchestrator | ok: [testbed-node-1]
2026-02-08 03:05:11.599548 | orchestrator | ok: [testbed-node-5]
2026-02-08 03:05:11.599559 | orchestrator | ok: [testbed-node-0]
2026-02-08 03:05:11.599569 | orchestrator | ok: [testbed-node-2]
2026-02-08 03:05:11.599580 | orchestrator |
2026-02-08 03:05:11.599614 | orchestrator | TASK [osism.services.chrony : Manage timesyncd service] ************************
2026-02-08 03:05:11.599644 | orchestrator | Sunday 08 February 2026 03:04:54 +0000 (0:00:05.408) 0:07:18.771 *******
2026-02-08 03:05:11.599655 | orchestrator | skipping: [testbed-manager]
2026-02-08 03:05:11.599666 | orchestrator | skipping: [testbed-node-3]
2026-02-08 03:05:11.599677 | orchestrator | skipping: [testbed-node-4]
2026-02-08 03:05:11.599688 | orchestrator | skipping: [testbed-node-5]
2026-02-08 03:05:11.599699 | orchestrator | skipping: [testbed-node-0]
2026-02-08 03:05:11.599709 | orchestrator | skipping: [testbed-node-1]
2026-02-08 03:05:11.599720 | orchestrator | skipping: [testbed-node-2]
2026-02-08 03:05:11.599731 | orchestrator |
2026-02-08 03:05:11.599742 | orchestrator | TASK [osism.services.chrony : Include distribution specific install tasks] *****
2026-02-08 03:05:11.599754 | orchestrator | Sunday 08 February 2026 03:04:55 +0000 (0:00:00.623) 0:07:19.395 *******
2026-02-08 03:05:11.599767 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-08 03:05:11.599781 | orchestrator |
2026-02-08 03:05:11.599793 | orchestrator | TASK [osism.services.chrony : Install package] *********************************
2026-02-08 03:05:11.599804 | orchestrator | Sunday 08 February 2026 03:04:56 +0000 (0:00:01.081) 0:07:20.477 *******
2026-02-08 03:05:11.599815 | orchestrator | ok: [testbed-node-3]
2026-02-08 03:05:11.599825 | orchestrator | ok: [testbed-manager]
2026-02-08 03:05:11.599836 | orchestrator | ok: [testbed-node-4]
2026-02-08 03:05:11.599847 | orchestrator | ok: [testbed-node-5]
2026-02-08 03:05:11.599857 | orchestrator | ok: [testbed-node-0]
2026-02-08 03:05:11.599868 | orchestrator | ok: [testbed-node-1]
2026-02-08 03:05:11.599879 | orchestrator | ok: [testbed-node-2]
2026-02-08 03:05:11.599890 | orchestrator |
2026-02-08 03:05:11.599901 | orchestrator | TASK [osism.services.chrony : Manage chrony service] ***************************
2026-02-08 03:05:11.599912 | orchestrator | Sunday 08 February 2026 03:04:57 +0000 (0:00:01.819) 0:07:22.296 *******
2026-02-08 03:05:11.599922 | orchestrator | ok: [testbed-manager]
2026-02-08 03:05:11.599933 | orchestrator | ok: [testbed-node-3]
2026-02-08 03:05:11.599944 | orchestrator | ok: [testbed-node-4]
2026-02-08 03:05:11.599955 | orchestrator | ok: [testbed-node-0]
2026-02-08 03:05:11.599966 | orchestrator | ok: [testbed-node-5]
2026-02-08 03:05:11.599976 | orchestrator | ok: [testbed-node-1]
2026-02-08 03:05:11.599987 | orchestrator | ok: [testbed-node-2]
2026-02-08 03:05:11.599998 | orchestrator |
2026-02-08 03:05:11.600008 | orchestrator | TASK [osism.services.chrony : Check if configuration file exists] **************
2026-02-08 03:05:11.600019 | orchestrator | Sunday 08 February 2026 03:04:59 +0000 (0:00:01.163) 0:07:23.460 *******
2026-02-08 03:05:11.600030 | orchestrator | ok: [testbed-manager]
2026-02-08 03:05:11.600041 | orchestrator | ok: [testbed-node-3]
2026-02-08 03:05:11.600052 | orchestrator | ok: [testbed-node-4]
2026-02-08 03:05:11.600062 | orchestrator | ok: [testbed-node-5]
2026-02-08 03:05:11.600073 | orchestrator | ok: [testbed-node-0]
2026-02-08 03:05:11.600084 | orchestrator | ok: [testbed-node-1]
2026-02-08 03:05:11.600095 | orchestrator | ok: [testbed-node-2]
2026-02-08 03:05:11.600105 | orchestrator |
2026-02-08 03:05:11.600116 | orchestrator | TASK [osism.services.chrony : Copy configuration file] *************************
2026-02-08 03:05:11.600127 | orchestrator | Sunday 08 February 2026 03:05:00 +0000 (0:00:00.874) 0:07:24.335 *******
2026-02-08 03:05:11.600146 | orchestrator | changed: [testbed-manager] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-02-08 03:05:11.600159 | orchestrator | changed: [testbed-node-3] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-02-08 03:05:11.600179 | orchestrator | changed: [testbed-node-4] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-02-08 03:05:11.600190 | orchestrator | changed: [testbed-node-5] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-02-08 03:05:11.600201 | orchestrator | changed: [testbed-node-0] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-02-08 03:05:11.600238 | orchestrator | changed: [testbed-node-1] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-02-08 03:05:11.600250 | orchestrator | changed: [testbed-node-2] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-02-08 03:05:11.600261 | orchestrator |
2026-02-08 03:05:11.600272 | orchestrator | TASK [osism.services.lldpd : Include distribution specific install tasks] ******
2026-02-08 03:05:11.600283 | orchestrator | Sunday 08 February 2026 03:05:01 +0000 (0:00:01.905) 0:07:26.240 *******
2026-02-08 03:05:11.600294 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/lldpd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-08 03:05:11.600305 | orchestrator |
2026-02-08 03:05:11.600316 | orchestrator | TASK [osism.services.lldpd : Install lldpd package] ****************************
2026-02-08 03:05:11.600327 | orchestrator | Sunday 08 February 2026 03:05:02 +0000 (0:00:00.841) 0:07:27.081 *******
2026-02-08 03:05:11.600338 | orchestrator | changed: [testbed-node-4]
2026-02-08 03:05:11.600349 | orchestrator | changed: [testbed-node-3]
2026-02-08 03:05:11.600360 | orchestrator | changed: [testbed-node-1]
2026-02-08 03:05:11.600371 | orchestrator | changed: [testbed-node-0]
2026-02-08 03:05:11.600382 | orchestrator | changed: [testbed-node-5]
2026-02-08 03:05:11.600393 | orchestrator | changed: [testbed-manager]
2026-02-08 03:05:11.600404 | orchestrator | changed: [testbed-node-2]
2026-02-08 03:05:11.600414 | orchestrator |
2026-02-08 03:05:11.600433 | orchestrator | TASK [osism.services.lldpd : Manage lldpd service] *****************************
2026-02-08 03:05:42.749786 | orchestrator | Sunday 08 February 2026 03:05:11 +0000 (0:00:08.815) 0:07:35.896 *******
2026-02-08 03:05:42.749903 | orchestrator | ok: [testbed-manager]
2026-02-08 03:05:42.749921 | orchestrator | ok: [testbed-node-3]
2026-02-08 03:05:42.749933 | orchestrator | ok: [testbed-node-4]
2026-02-08 03:05:42.749944 | orchestrator | ok: [testbed-node-0]
2026-02-08 03:05:42.749955 | orchestrator | ok: [testbed-node-1]
2026-02-08 03:05:42.749967 | orchestrator | ok: [testbed-node-5]
2026-02-08 03:05:42.749978 | orchestrator | ok: [testbed-node-2]
2026-02-08 03:05:42.749989 | orchestrator |
2026-02-08 03:05:42.750001 | orchestrator | RUNNING HANDLER [osism.commons.docker_compose : Reload systemd daemon] *********
2026-02-08 03:05:42.750066 | orchestrator | Sunday 08 February 2026 03:05:13 +0000 (0:00:02.025) 0:07:37.921 *******
2026-02-08 03:05:42.750080 | orchestrator | ok: [testbed-node-3]
2026-02-08 03:05:42.750091 | orchestrator | ok: [testbed-node-4]
2026-02-08 03:05:42.750102 | orchestrator | ok: [testbed-node-5]
2026-02-08 03:05:42.750113 | orchestrator | ok: [testbed-node-0]
2026-02-08 03:05:42.750124 | orchestrator | ok: [testbed-node-1]
2026-02-08 03:05:42.750134 | orchestrator | ok: [testbed-node-2]
2026-02-08 03:05:42.750145 | orchestrator |
2026-02-08 03:05:42.750156 | orchestrator | RUNNING HANDLER [osism.services.chrony : Restart chrony service] ***************
2026-02-08 03:05:42.750167 | orchestrator | Sunday 08 February 2026 03:05:14 +0000 (0:00:01.260) 0:07:39.181 *******
2026-02-08 03:05:42.750178 | orchestrator | changed: [testbed-manager]
2026-02-08 03:05:42.750190 | orchestrator | changed: [testbed-node-3]
2026-02-08 03:05:42.750201 | orchestrator | changed: [testbed-node-4]
2026-02-08 03:05:42.750266 | orchestrator | changed: [testbed-node-5]
2026-02-08 03:05:42.750293 | orchestrator | changed: [testbed-node-0]
2026-02-08 03:05:42.750336 | orchestrator | changed: [testbed-node-1]
2026-02-08 03:05:42.750351 | orchestrator | changed: [testbed-node-2]
2026-02-08 03:05:42.750364 | orchestrator |
2026-02-08 03:05:42.750378 | orchestrator | PLAY [Apply bootstrap role part 2] *********************************************
2026-02-08 03:05:42.750391 | orchestrator |
2026-02-08 03:05:42.750404 | orchestrator | TASK [Include hardening role] **************************************************
2026-02-08 03:05:42.750419 | orchestrator | Sunday 08 February 2026 03:05:16 +0000 (0:00:01.233) 0:07:40.415 *******
2026-02-08 03:05:42.750432 | orchestrator | skipping: [testbed-manager]
2026-02-08 03:05:42.750445 | orchestrator | skipping: [testbed-node-3]
2026-02-08 03:05:42.750458 | orchestrator | skipping: [testbed-node-4]
2026-02-08 03:05:42.750470 | orchestrator | skipping: [testbed-node-5]
2026-02-08 03:05:42.750483 | orchestrator | skipping: [testbed-node-0]
2026-02-08 03:05:42.750495 | orchestrator | skipping: [testbed-node-1]
2026-02-08 03:05:42.750508 | orchestrator | skipping: [testbed-node-2]
2026-02-08 03:05:42.750522 | orchestrator |
2026-02-08 03:05:42.750535 | orchestrator | PLAY [Apply bootstrap roles part 3] ********************************************
2026-02-08 03:05:42.750548 | orchestrator |
2026-02-08 03:05:42.750562 | orchestrator | TASK [osism.services.journald : Copy configuration file] ***********************
2026-02-08 03:05:42.750575 | orchestrator | Sunday 08 February 2026 03:05:16 +0000 (0:00:00.761) 0:07:41.177 *******
2026-02-08 03:05:42.750589 | orchestrator | changed: [testbed-manager]
2026-02-08 03:05:42.750603 | orchestrator | changed: [testbed-node-3]
2026-02-08 03:05:42.750616 | orchestrator | changed: [testbed-node-4]
2026-02-08 03:05:42.750630 | orchestrator | changed: [testbed-node-5]
2026-02-08 03:05:42.750641 | orchestrator | changed: [testbed-node-0]
2026-02-08 03:05:42.750652 | orchestrator | changed: [testbed-node-1]
2026-02-08 03:05:42.750663 | orchestrator | changed: [testbed-node-2]
2026-02-08 03:05:42.750674 | orchestrator |
2026-02-08 03:05:42.750685 | orchestrator | TASK [osism.services.journald : Manage journald service] ***********************
2026-02-08 03:05:42.750712 | orchestrator | Sunday 08 February 2026 03:05:18 +0000 (0:00:01.379) 0:07:42.556 *******
2026-02-08 03:05:42.750724 | orchestrator | ok: [testbed-manager]
2026-02-08 03:05:42.750735 | orchestrator | ok: [testbed-node-3]
2026-02-08 03:05:42.750746 | orchestrator | ok: [testbed-node-4]
2026-02-08 03:05:42.750756 | orchestrator | ok: [testbed-node-5]
2026-02-08 03:05:42.750767 | orchestrator | ok: [testbed-node-0]
2026-02-08 03:05:42.750778 | orchestrator | ok: [testbed-node-1]
2026-02-08 03:05:42.750789 | orchestrator | ok: [testbed-node-2]
2026-02-08 03:05:42.750800 | orchestrator |
2026-02-08 03:05:42.750811 | orchestrator | TASK [Include auditd role] *****************************************************
2026-02-08 03:05:42.750822 | orchestrator | Sunday 08 February 2026 03:05:19 +0000 (0:00:01.443) 0:07:43.999 *******
2026-02-08 03:05:42.750833 | orchestrator | skipping: [testbed-manager]
2026-02-08 03:05:42.750844 | orchestrator | skipping: [testbed-node-3]
2026-02-08 03:05:42.750855 | orchestrator | skipping: [testbed-node-4]
2026-02-08 03:05:42.750866 | orchestrator | skipping: [testbed-node-5]
2026-02-08 03:05:42.750877 | orchestrator | skipping: [testbed-node-0]
2026-02-08 03:05:42.750888 | orchestrator | skipping: [testbed-node-1]
2026-02-08 03:05:42.750898 | orchestrator | skipping: [testbed-node-2]
2026-02-08 03:05:42.750909 | orchestrator |
2026-02-08 03:05:42.750920 | orchestrator | TASK [Include smartd role] *****************************************************
2026-02-08 03:05:42.750932 | orchestrator | Sunday 08 February 2026 03:05:20 +0000 (0:00:00.547) 0:07:44.547 *******
2026-02-08 03:05:42.750944 | orchestrator | included: osism.services.smartd for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-08 03:05:42.750957 | orchestrator |
2026-02-08 03:05:42.750968 | orchestrator | TASK [osism.services.smartd : Include distribution specific install tasks] *****
2026-02-08 03:05:42.750979 | orchestrator | Sunday 08 February 2026 03:05:21 +0000 (0:00:01.105) 0:07:45.652 *******
2026-02-08 03:05:42.750992 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/smartd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-08 03:05:42.751022 | orchestrator |
2026-02-08 03:05:42.751033 | orchestrator | TASK [osism.services.smartd : Install smartmontools package] *******************
2026-02-08 03:05:42.751044 | orchestrator | Sunday 08 February 2026 03:05:22 +0000 (0:00:00.848) 0:07:46.500 *******
2026-02-08 03:05:42.751055 | orchestrator | changed: [testbed-node-4]
2026-02-08 03:05:42.751066 | orchestrator | changed: [testbed-node-3]
2026-02-08 03:05:42.751077 | orchestrator | changed: [testbed-node-1]
2026-02-08 03:05:42.751088 | orchestrator | changed: [testbed-node-5]
2026-02-08 03:05:42.751099 | orchestrator | changed: [testbed-node-0]
2026-02-08 03:05:42.751109 | orchestrator | changed: [testbed-node-2]
2026-02-08 03:05:42.751120 | orchestrator | changed: [testbed-manager]
2026-02-08 03:05:42.751131 | orchestrator |
2026-02-08 03:05:42.751161 | orchestrator | TASK [osism.services.smartd : Create /var/log/smartd directory] ****************
2026-02-08 03:05:42.751173 | orchestrator | Sunday 08 February 2026 03:05:30 +0000 (0:00:08.186) 0:07:54.687 *******
2026-02-08 03:05:42.751184 | orchestrator | changed: [testbed-manager]
2026-02-08 03:05:42.751195 | orchestrator | changed: [testbed-node-3]
2026-02-08 03:05:42.751205 | orchestrator | changed: [testbed-node-4]
2026-02-08 03:05:42.751250 | orchestrator | changed: [testbed-node-5]
2026-02-08 03:05:42.751262 | orchestrator | changed: [testbed-node-0]
2026-02-08 03:05:42.751273 | orchestrator | changed: [testbed-node-1]
2026-02-08 03:05:42.751283 | orchestrator | changed: [testbed-node-2]
2026-02-08 03:05:42.751294 | orchestrator |
2026-02-08 03:05:42.751305 | orchestrator | TASK [osism.services.smartd : Copy smartmontools configuration file] ***********
2026-02-08 03:05:42.751329 | orchestrator | Sunday 08 February 2026 03:05:31 +0000 (0:00:01.043) 0:07:55.731 *******
2026-02-08 03:05:42.751340 | orchestrator | changed: [testbed-manager]
2026-02-08 03:05:42.751351 | orchestrator | changed: [testbed-node-3]
2026-02-08 03:05:42.751362 | orchestrator | changed: [testbed-node-4]
2026-02-08 03:05:42.751373 | orchestrator | changed: [testbed-node-5]
2026-02-08 03:05:42.751383 | orchestrator | changed: [testbed-node-0]
2026-02-08 03:05:42.751394 | orchestrator | changed: [testbed-node-1]
2026-02-08 03:05:42.751405 | orchestrator | changed: [testbed-node-2]
2026-02-08 03:05:42.751416 | orchestrator |
2026-02-08 03:05:42.751426 | orchestrator | TASK [osism.services.smartd : Manage smartd service] ***************************
2026-02-08 03:05:42.751438 | orchestrator | Sunday 08 February 2026 03:05:32 +0000 (0:00:01.375) 0:07:57.106 *******
2026-02-08 03:05:42.751448 | orchestrator | changed: [testbed-node-3]
2026-02-08 03:05:42.751459 | orchestrator | changed: [testbed-node-4]
2026-02-08 03:05:42.751470 | orchestrator | changed: [testbed-node-5]
2026-02-08 03:05:42.751481 | orchestrator | changed: [testbed-node-0]
2026-02-08 03:05:42.751491 | orchestrator | changed: [testbed-node-1]
2026-02-08 03:05:42.751502 | orchestrator | changed: [testbed-node-2]
2026-02-08 03:05:42.751513 | orchestrator | changed: [testbed-manager]
2026-02-08 03:05:42.751524 | orchestrator |
2026-02-08 03:05:42.751535 | orchestrator | RUNNING HANDLER [osism.services.journald : Restart journald service] ***********
2026-02-08 03:05:42.751546 | orchestrator | Sunday 08 February 2026 03:05:35 +0000 (0:00:02.559) 0:07:59.665 *******
2026-02-08 03:05:42.751556 | orchestrator | changed: [testbed-node-3]
2026-02-08 03:05:42.751567 | orchestrator | changed: [testbed-manager]
2026-02-08 03:05:42.751578 | orchestrator | changed: [testbed-node-4]
2026-02-08 03:05:42.751589 | orchestrator | changed: [testbed-node-5]
2026-02-08 03:05:42.751599 | orchestrator | changed: [testbed-node-0]
2026-02-08 03:05:42.751611 | orchestrator | changed: [testbed-node-1]
2026-02-08 03:05:42.751621 | orchestrator | changed: [testbed-node-2]
2026-02-08 03:05:42.751632 | orchestrator |
2026-02-08 03:05:42.751643 | orchestrator | RUNNING HANDLER [osism.services.smartd : Restart smartd service] ***************
2026-02-08 03:05:42.751654 | orchestrator | Sunday 08 February 2026 03:05:36 +0000 (0:00:01.229) 0:08:00.895 *******
2026-02-08 03:05:42.751665 | orchestrator | changed: [testbed-manager]
2026-02-08 03:05:42.751676 | orchestrator | changed: [testbed-node-3]
2026-02-08 03:05:42.751695 | orchestrator | changed: [testbed-node-4]
2026-02-08 03:05:42.751706 | orchestrator | changed: [testbed-node-5]
2026-02-08 03:05:42.751717 | orchestrator | changed: [testbed-node-0]
2026-02-08 03:05:42.751727 | orchestrator | changed: [testbed-node-1]
2026-02-08 03:05:42.751738 | orchestrator | changed: [testbed-node-2]
2026-02-08 03:05:42.751749 | orchestrator |
2026-02-08 03:05:42.751760 | orchestrator | PLAY [Set state bootstrap] *****************************************************
2026-02-08 03:05:42.751771 | orchestrator |
2026-02-08 03:05:42.751789 | orchestrator | TASK [Set osism.bootstrap.status fact] *****************************************
2026-02-08 03:05:42.751800 | orchestrator | Sunday 08 February 2026 03:05:37 +0000 (0:00:01.109) 0:08:02.004 *******
2026-02-08 03:05:42.751812 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-08 03:05:42.751823 | orchestrator |
2026-02-08 03:05:42.751834 | orchestrator | TASK [osism.commons.state : Create custom facts directory] *********************
2026-02-08 03:05:42.751845 | orchestrator | Sunday 08 February 2026 03:05:38 +0000 (0:00:00.865) 0:08:02.869 *******
2026-02-08 03:05:42.751855 | orchestrator | ok: [testbed-manager]
2026-02-08 03:05:42.751867 | orchestrator | ok: [testbed-node-3]
2026-02-08 03:05:42.751878 | orchestrator | ok: [testbed-node-4]
2026-02-08 03:05:42.751888 | orchestrator | ok: [testbed-node-5]
2026-02-08 03:05:42.751899 | orchestrator | ok: [testbed-node-0]
2026-02-08 03:05:42.751910 | orchestrator | ok: [testbed-node-1]
2026-02-08 03:05:42.751921 | orchestrator | ok: [testbed-node-2]
2026-02-08 03:05:42.751932 | orchestrator |
2026-02-08 03:05:42.751942 | orchestrator | TASK [osism.commons.state : Write state into file] *****************************
2026-02-08 03:05:42.751953 | orchestrator | Sunday 08 February 2026 03:05:39 +0000 (0:00:01.057) 0:08:03.926 *******
2026-02-08 03:05:42.751964 | orchestrator | changed: [testbed-manager]
2026-02-08 03:05:42.751975 | orchestrator | changed: [testbed-node-4]
2026-02-08 03:05:42.751986 | orchestrator | changed: [testbed-node-3]
2026-02-08 03:05:42.751997 | orchestrator | changed: [testbed-node-5]
2026-02-08 03:05:42.752008 | orchestrator | changed: [testbed-node-0]
2026-02-08 03:05:42.752019 | orchestrator | changed: [testbed-node-1]
2026-02-08 03:05:42.752030 | orchestrator | changed: [testbed-node-2]
2026-02-08 03:05:42.752041 | orchestrator |
2026-02-08 03:05:42.752052 | orchestrator | TASK [Set osism.bootstrap.timestamp fact] **************************************
2026-02-08 03:05:42.752063 | orchestrator | Sunday 08 February 2026 03:05:40 +0000 (0:00:01.189) 0:08:05.116 *******
2026-02-08 03:05:42.752074 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-08 03:05:42.752085 | orchestrator |
2026-02-08 03:05:42.752096 | orchestrator | TASK [osism.commons.state : Create custom facts directory] *********************
2026-02-08 03:05:42.752114 | orchestrator | Sunday 08 February 2026 03:05:41 +0000 (0:00:01.081) 0:08:06.198 *******
2026-02-08 03:05:42.752132 | orchestrator | ok: [testbed-manager]
2026-02-08 03:05:42.752150 | orchestrator | ok: [testbed-node-3]
2026-02-08 03:05:42.752168 | orchestrator | ok: [testbed-node-4]
2026-02-08 03:05:42.752186 | orchestrator | ok: [testbed-node-5]
2026-02-08 03:05:42.752200 | orchestrator | ok: [testbed-node-0]
2026-02-08 03:05:42.752256 | orchestrator | ok: [testbed-node-1]
2026-02-08 03:05:42.752269 | orchestrator | ok: [testbed-node-2]
2026-02-08 03:05:42.752280 | orchestrator |
2026-02-08 03:05:42.752309 | orchestrator | TASK [osism.commons.state : Write state into file] *****************************
2026-02-08 03:05:44.443633 | orchestrator | Sunday 08 February 2026 03:05:42 +0000 (0:00:00.853) 0:08:07.052 *******
2026-02-08 03:05:44.443762 | orchestrator | changed: [testbed-manager]
2026-02-08 03:05:44.443776 | orchestrator | changed: [testbed-node-3]
2026-02-08 03:05:44.443785 | orchestrator | changed: [testbed-node-4]
2026-02-08 03:05:44.444883 | orchestrator | changed: [testbed-node-5]
2026-02-08 03:05:44.444936 | orchestrator | changed: [testbed-node-0]
2026-02-08 03:05:44.444951 | orchestrator | changed: [testbed-node-1]
2026-02-08 03:05:44.444963 | orchestrator | changed: [testbed-node-2]
2026-02-08 03:05:44.445003 | orchestrator |
2026-02-08 03:05:44.445018 | orchestrator | PLAY RECAP *********************************************************************
2026-02-08 03:05:44.445033 | orchestrator | testbed-manager : ok=168  changed=40  unreachable=0 failed=0 skipped=42  rescued=0 ignored=0
2026-02-08 03:05:44.445044 | orchestrator | testbed-node-0 : ok=177  changed=69  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2026-02-08 03:05:44.445055 | orchestrator | testbed-node-1 : ok=177  changed=69  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2026-02-08 03:05:44.445066 | orchestrator | testbed-node-2 : ok=177  changed=69  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2026-02-08 03:05:44.445076 | orchestrator | testbed-node-3 : ok=175  changed=65  unreachable=0 failed=0 skipped=38  rescued=0 ignored=0
2026-02-08 03:05:44.445087 | orchestrator | testbed-node-4 : ok=175  changed=65  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0
2026-02-08 03:05:44.445097 | orchestrator | testbed-node-5 : ok=175  changed=65  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0
2026-02-08 03:05:44.445108 | orchestrator |
2026-02-08 03:05:44.445114 | orchestrator |
2026-02-08 03:05:44.445121 | orchestrator | TASKS RECAP ********************************************************************
2026-02-08 03:05:44.445127 | orchestrator | Sunday 08 February 2026 03:05:43 +0000 (0:00:01.150) 0:08:08.202 *******
2026-02-08 03:05:44.445133 | orchestrator | ===============================================================================
2026-02-08 03:05:44.445140 | orchestrator | osism.commons.packages : Install required packages --------------------- 80.57s
2026-02-08 03:05:44.445146 | orchestrator | osism.commons.packages : Download required packages -------------------- 45.19s
2026-02-08 03:05:44.445152 | orchestrator | osism.commons.cleanup : Cleanup installed packages --------------------- 32.84s
2026-02-08 03:05:44.445158 | orchestrator | osism.commons.repository : Update package cache ------------------------ 15.18s
2026-02-08 03:05:44.445165 | orchestrator | osism.commons.packages : Remove dependencies that are no longer required -- 13.28s
2026-02-08 03:05:44.445186 | orchestrator | osism.commons.systohc : Install util-linux-extra package --------------- 12.74s
2026-02-08 03:05:44.445192 | orchestrator | osism.services.docker : Install docker package ------------------------- 10.47s
2026-02-08 03:05:44.445199 | orchestrator | osism.services.docker : Install docker-cli package ---------------------- 9.14s
2026-02-08 03:05:44.445205 | orchestrator | osism.services.docker : Install containerd package ---------------------- 8.95s
2026-02-08 03:05:44.445245 | orchestrator | osism.services.lldpd : Install lldpd package ---------------------------- 8.82s
2026-02-08 03:05:44.445252 | orchestrator | osism.services.smartd : Install smartmontools package ------------------- 8.19s
2026-02-08 03:05:44.445258 | orchestrator | osism.services.docker : Add repository ---------------------------------- 7.58s
2026-02-08 03:05:44.445264 | orchestrator | osism.commons.cleanup : Uninstall unattended-upgrades package ----------- 7.34s
2026-02-08 03:05:44.445270 | orchestrator | osism.services.rng : Install rng package -------------------------------- 7.25s
2026-02-08 03:05:44.445277 | orchestrator | osism.commons.cleanup : Remove cloudinit package ------------------------ 7.20s
2026-02-08 03:05:44.445283 | orchestrator | osism.commons.docker_compose : Install docker-compose-plugin package ---- 7.02s
2026-02-08 03:05:44.445289 | orchestrator | osism.services.docker : Install apt-transport-https package ------------- 6.24s
2026-02-08 03:05:44.445295 | orchestrator | osism.commons.cleanup : Remove dependencies that are no longer required --- 5.48s
2026-02-08 03:05:44.445302 | orchestrator | osism.services.chrony : Populate service facts -------------------------- 5.41s
2026-02-08 03:05:44.445308 | orchestrator | osism.commons.services : Populate service facts ------------------------- 5.37s
2026-02-08 03:05:44.795190 | orchestrator | + osism apply fail2ban
2026-02-08 03:05:57.944199 | orchestrator | 2026-02-08 03:05:57 | INFO  | Task c1b8c1bd-2cbb-40cf-ba16-932f4fae8aed (fail2ban) was prepared for execution.
2026-02-08 03:05:57.944349 | orchestrator | 2026-02-08 03:05:57 | INFO  | It takes a moment until task c1b8c1bd-2cbb-40cf-ba16-932f4fae8aed (fail2ban) has been started and output is visible here.
2026-02-08 03:06:20.157200 | orchestrator |
2026-02-08 03:06:20.157351 | orchestrator | PLAY [Apply role fail2ban] *****************************************************
2026-02-08 03:06:20.157366 | orchestrator |
2026-02-08 03:06:20.157377 | orchestrator | TASK [osism.services.fail2ban : Include distribution specific install tasks] ***
2026-02-08 03:06:20.157387 | orchestrator | Sunday 08 February 2026 03:06:02 +0000 (0:00:00.298) 0:00:00.298 *******
2026-02-08 03:06:20.157399 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/fail2ban/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-02-08 03:06:20.157411 | orchestrator |
2026-02-08 03:06:20.157421 | orchestrator | TASK [osism.services.fail2ban : Install fail2ban package] **********************
2026-02-08 03:06:20.157431 | orchestrator | Sunday 08 February 2026 03:06:03 +0000 (0:00:01.152) 0:00:01.450 *******
2026-02-08 03:06:20.157441 | orchestrator | changed: [testbed-node-1]
2026-02-08 03:06:20.157452 | orchestrator | changed: [testbed-node-2]
2026-02-08 03:06:20.157462 | orchestrator | changed: [testbed-node-0]
2026-02-08 03:06:20.157472 | orchestrator | changed: [testbed-node-4]
2026-02-08 03:06:20.157481 | orchestrator | changed: [testbed-node-3]
2026-02-08 03:06:20.157491 | orchestrator | changed: [testbed-node-5]
2026-02-08 03:06:20.157501 | orchestrator | changed: [testbed-manager]
2026-02-08 03:06:20.157512 | orchestrator |
2026-02-08 03:06:20.157522 | orchestrator | TASK [osism.services.fail2ban : Copy configuration files] **********************
2026-02-08 03:06:20.157532 | orchestrator | Sunday 08 February 2026 03:06:15 +0000 (0:00:11.381) 0:00:12.831 *******
2026-02-08 03:06:20.157541 | orchestrator | changed: [testbed-node-0]
2026-02-08 03:06:20.157557 | orchestrator | changed: [testbed-node-1]
2026-02-08 03:06:20.157581 | orchestrator | changed: [testbed-node-2]
2026-02-08 03:06:20.157601 | orchestrator | changed: [testbed-manager]
2026-02-08 03:06:20.157616 | orchestrator | changed: [testbed-node-3]
2026-02-08 03:06:20.157632 | orchestrator | changed: [testbed-node-4]
2026-02-08 03:06:20.157647 | orchestrator | changed: [testbed-node-5]
2026-02-08 03:06:20.157664 | orchestrator |
2026-02-08 03:06:20.157679 | orchestrator | TASK [osism.services.fail2ban : Manage fail2ban service] ***********************
2026-02-08 03:06:20.157695 | orchestrator | Sunday 08 February 2026 03:06:16 +0000 (0:00:01.366) 0:00:14.198 *******
2026-02-08 03:06:20.157711 | orchestrator | ok: [testbed-node-1]
2026-02-08 03:06:20.157729 | orchestrator | ok: [testbed-node-2]
2026-02-08 03:06:20.157745 | orchestrator | ok: [testbed-node-0]
2026-02-08 03:06:20.157760 | orchestrator | ok: [testbed-node-3]
2026-02-08 03:06:20.157777 | orchestrator | ok: [testbed-manager]
2026-02-08 03:06:20.157793 | orchestrator | ok: [testbed-node-4]
2026-02-08 03:06:20.157809 | orchestrator | ok: [testbed-node-5]
2026-02-08 03:06:20.157826 | orchestrator |
2026-02-08 03:06:20.157843 | orchestrator | TASK [osism.services.fail2ban : Reload fail2ban configuration] *****************
2026-02-08 03:06:20.157860 | orchestrator | Sunday 08 February 2026 03:06:18 +0000 (0:00:01.430) 0:00:15.629 *******
2026-02-08 03:06:20.157877 | orchestrator | changed: [testbed-node-0]
2026-02-08 03:06:20.157893 | orchestrator | changed: [testbed-manager]
2026-02-08 03:06:20.157910 | orchestrator | changed: [testbed-node-1]
2026-02-08 03:06:20.157926 | orchestrator | changed: [testbed-node-2]
2026-02-08 03:06:20.157943 | orchestrator | changed: [testbed-node-3]
2026-02-08 03:06:20.157960 | orchestrator | changed: [testbed-node-4]
2026-02-08 03:06:20.157977 | orchestrator | changed: [testbed-node-5]
2026-02-08 03:06:20.157995 | orchestrator |
2026-02-08 03:06:20.158011 | orchestrator | PLAY RECAP *********************************************************************
2026-02-08 03:06:20.158093 | orchestrator | testbed-manager : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-08 03:06:20.158135 | orchestrator | testbed-node-0 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-08 03:06:20.158161 | orchestrator | testbed-node-1 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-08 03:06:20.158171 | orchestrator | testbed-node-2 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-08 03:06:20.158181 | orchestrator | testbed-node-3 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-08 03:06:20.158190 | orchestrator | testbed-node-4 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-08 03:06:20.158199 | orchestrator | testbed-node-5 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-08 03:06:20.158236 | orchestrator |
2026-02-08 03:06:20.158248 | orchestrator |
2026-02-08 03:06:20.158257 | orchestrator | TASKS RECAP ********************************************************************
2026-02-08 03:06:20.158267 | orchestrator | Sunday 08 February 2026 03:06:19 +0000 (0:00:01.603) 0:00:17.233 *******
2026-02-08 03:06:20.158277 | orchestrator | ===============================================================================
2026-02-08 03:06:20.158287 | orchestrator | osism.services.fail2ban : Install fail2ban package --------------------- 11.38s
2026-02-08 03:06:20.158296 | orchestrator | osism.services.fail2ban : Reload fail2ban configuration ----------------- 1.60s
2026-02-08 03:06:20.158306 | orchestrator | osism.services.fail2ban : Manage fail2ban service ----------------------- 1.43s
2026-02-08 03:06:20.158315 | orchestrator | osism.services.fail2ban :
Copy configuration files ---------------------- 1.37s 2026-02-08 03:06:20.158325 | orchestrator | osism.services.fail2ban : Include distribution specific install tasks --- 1.15s 2026-02-08 03:06:20.528388 | orchestrator | + [[ -e /etc/redhat-release ]] 2026-02-08 03:06:20.528503 | orchestrator | + osism apply network 2026-02-08 03:06:32.649347 | orchestrator | 2026-02-08 03:06:32 | INFO  | Task e0dcebe3-41b1-423b-97df-2a8a726e230c (network) was prepared for execution. 2026-02-08 03:06:32.649455 | orchestrator | 2026-02-08 03:06:32 | INFO  | It takes a moment until task e0dcebe3-41b1-423b-97df-2a8a726e230c (network) has been started and output is visible here. 2026-02-08 03:07:01.531956 | orchestrator | 2026-02-08 03:07:01.532056 | orchestrator | PLAY [Apply role network] ****************************************************** 2026-02-08 03:07:01.532081 | orchestrator | 2026-02-08 03:07:01.532091 | orchestrator | TASK [osism.commons.network : Gather variables for each operating system] ****** 2026-02-08 03:07:01.532100 | orchestrator | Sunday 08 February 2026 03:06:37 +0000 (0:00:00.277) 0:00:00.277 ******* 2026-02-08 03:07:01.532109 | orchestrator | ok: [testbed-manager] 2026-02-08 03:07:01.532120 | orchestrator | ok: [testbed-node-0] 2026-02-08 03:07:01.532128 | orchestrator | ok: [testbed-node-1] 2026-02-08 03:07:01.532137 | orchestrator | ok: [testbed-node-2] 2026-02-08 03:07:01.532146 | orchestrator | ok: [testbed-node-3] 2026-02-08 03:07:01.532154 | orchestrator | ok: [testbed-node-4] 2026-02-08 03:07:01.532162 | orchestrator | ok: [testbed-node-5] 2026-02-08 03:07:01.532171 | orchestrator | 2026-02-08 03:07:01.532180 | orchestrator | TASK [osism.commons.network : Include type specific tasks] ********************* 2026-02-08 03:07:01.532189 | orchestrator | Sunday 08 February 2026 03:06:37 +0000 (0:00:00.751) 0:00:01.029 ******* 2026-02-08 03:07:01.532199 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/netplan-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-02-08 03:07:01.532249 | orchestrator | 2026-02-08 03:07:01.532260 | orchestrator | TASK [osism.commons.network : Install required packages] *********************** 2026-02-08 03:07:01.532293 | orchestrator | Sunday 08 February 2026 03:06:39 +0000 (0:00:01.333) 0:00:02.363 ******* 2026-02-08 03:07:01.532301 | orchestrator | ok: [testbed-node-1] 2026-02-08 03:07:01.532310 | orchestrator | ok: [testbed-node-0] 2026-02-08 03:07:01.532318 | orchestrator | ok: [testbed-manager] 2026-02-08 03:07:01.532327 | orchestrator | ok: [testbed-node-2] 2026-02-08 03:07:01.532335 | orchestrator | ok: [testbed-node-3] 2026-02-08 03:07:01.532344 | orchestrator | ok: [testbed-node-4] 2026-02-08 03:07:01.532353 | orchestrator | ok: [testbed-node-5] 2026-02-08 03:07:01.532361 | orchestrator | 2026-02-08 03:07:01.532370 | orchestrator | TASK [osism.commons.network : Remove ifupdown package] ************************* 2026-02-08 03:07:01.532378 | orchestrator | Sunday 08 February 2026 03:06:41 +0000 (0:00:01.948) 0:00:04.311 ******* 2026-02-08 03:07:01.532387 | orchestrator | ok: [testbed-manager] 2026-02-08 03:07:01.532395 | orchestrator | ok: [testbed-node-0] 2026-02-08 03:07:01.532404 | orchestrator | ok: [testbed-node-1] 2026-02-08 03:07:01.532413 | orchestrator | ok: [testbed-node-2] 2026-02-08 03:07:01.532421 | orchestrator | ok: [testbed-node-3] 2026-02-08 03:07:01.532429 | orchestrator | ok: [testbed-node-4] 2026-02-08 03:07:01.532438 | orchestrator | ok: [testbed-node-5] 2026-02-08 03:07:01.532446 | orchestrator | 2026-02-08 03:07:01.532455 | orchestrator | TASK [osism.commons.network : Create required directories] ********************* 2026-02-08 03:07:01.532463 | orchestrator | Sunday 08 February 2026 03:06:42 +0000 (0:00:01.628) 0:00:05.940 ******* 
2026-02-08 03:07:01.532472 | orchestrator | ok: [testbed-node-0] => (item=/etc/netplan) 2026-02-08 03:07:01.532481 | orchestrator | ok: [testbed-manager] => (item=/etc/netplan) 2026-02-08 03:07:01.532489 | orchestrator | ok: [testbed-node-1] => (item=/etc/netplan) 2026-02-08 03:07:01.532497 | orchestrator | ok: [testbed-node-2] => (item=/etc/netplan) 2026-02-08 03:07:01.532506 | orchestrator | ok: [testbed-node-3] => (item=/etc/netplan) 2026-02-08 03:07:01.532515 | orchestrator | ok: [testbed-node-4] => (item=/etc/netplan) 2026-02-08 03:07:01.532524 | orchestrator | ok: [testbed-node-5] => (item=/etc/netplan) 2026-02-08 03:07:01.532533 | orchestrator | 2026-02-08 03:07:01.532542 | orchestrator | TASK [osism.commons.network : Prepare netplan configuration template] ********** 2026-02-08 03:07:01.532552 | orchestrator | Sunday 08 February 2026 03:06:43 +0000 (0:00:00.983) 0:00:06.924 ******* 2026-02-08 03:07:01.532560 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-02-08 03:07:01.532570 | orchestrator | ok: [testbed-manager -> localhost] 2026-02-08 03:07:01.532579 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-02-08 03:07:01.532587 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-02-08 03:07:01.532595 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-02-08 03:07:01.532603 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-02-08 03:07:01.532611 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-02-08 03:07:01.532619 | orchestrator | 2026-02-08 03:07:01.532628 | orchestrator | TASK [osism.commons.network : Copy netplan configuration] ********************** 2026-02-08 03:07:01.532653 | orchestrator | Sunday 08 February 2026 03:06:47 +0000 (0:00:03.414) 0:00:10.338 ******* 2026-02-08 03:07:01.532661 | orchestrator | changed: [testbed-manager] 2026-02-08 03:07:01.532670 | orchestrator | changed: [testbed-node-0] 2026-02-08 03:07:01.532678 | orchestrator | changed: [testbed-node-1] 2026-02-08 03:07:01.532708 | orchestrator | changed: 
[testbed-node-2] 2026-02-08 03:07:01.532716 | orchestrator | changed: [testbed-node-3] 2026-02-08 03:07:01.532734 | orchestrator | changed: [testbed-node-4] 2026-02-08 03:07:01.532743 | orchestrator | changed: [testbed-node-5] 2026-02-08 03:07:01.532763 | orchestrator | 2026-02-08 03:07:01.532772 | orchestrator | TASK [osism.commons.network : Remove netplan configuration template] *********** 2026-02-08 03:07:01.532781 | orchestrator | Sunday 08 February 2026 03:06:48 +0000 (0:00:01.630) 0:00:11.969 ******* 2026-02-08 03:07:01.532789 | orchestrator | ok: [testbed-manager -> localhost] 2026-02-08 03:07:01.532798 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-02-08 03:07:01.532807 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-02-08 03:07:01.532816 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-02-08 03:07:01.532834 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-02-08 03:07:01.532844 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-02-08 03:07:01.532853 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-02-08 03:07:01.532862 | orchestrator | 2026-02-08 03:07:01.532871 | orchestrator | TASK [osism.commons.network : Check if path for interface file exists] ********* 2026-02-08 03:07:01.532879 | orchestrator | Sunday 08 February 2026 03:06:50 +0000 (0:00:01.735) 0:00:13.705 ******* 2026-02-08 03:07:01.532888 | orchestrator | ok: [testbed-manager] 2026-02-08 03:07:01.532896 | orchestrator | ok: [testbed-node-0] 2026-02-08 03:07:01.532905 | orchestrator | ok: [testbed-node-1] 2026-02-08 03:07:01.532913 | orchestrator | ok: [testbed-node-2] 2026-02-08 03:07:01.532922 | orchestrator | ok: [testbed-node-3] 2026-02-08 03:07:01.532930 | orchestrator | ok: [testbed-node-4] 2026-02-08 03:07:01.532938 | orchestrator | ok: [testbed-node-5] 2026-02-08 03:07:01.532947 | orchestrator | 2026-02-08 03:07:01.532955 | orchestrator | TASK [osism.commons.network : Copy interfaces file] **************************** 2026-02-08 03:07:01.532981 | 
orchestrator | Sunday 08 February 2026 03:06:51 +0000 (0:00:01.181) 0:00:14.887 ******* 2026-02-08 03:07:01.532991 | orchestrator | skipping: [testbed-manager] 2026-02-08 03:07:01.532999 | orchestrator | skipping: [testbed-node-0] 2026-02-08 03:07:01.533007 | orchestrator | skipping: [testbed-node-1] 2026-02-08 03:07:01.533016 | orchestrator | skipping: [testbed-node-2] 2026-02-08 03:07:01.533025 | orchestrator | skipping: [testbed-node-3] 2026-02-08 03:07:01.533033 | orchestrator | skipping: [testbed-node-4] 2026-02-08 03:07:01.533041 | orchestrator | skipping: [testbed-node-5] 2026-02-08 03:07:01.533050 | orchestrator | 2026-02-08 03:07:01.533057 | orchestrator | TASK [osism.commons.network : Install package networkd-dispatcher] ************* 2026-02-08 03:07:01.533062 | orchestrator | Sunday 08 February 2026 03:06:52 +0000 (0:00:00.672) 0:00:15.559 ******* 2026-02-08 03:07:01.533067 | orchestrator | ok: [testbed-manager] 2026-02-08 03:07:01.533072 | orchestrator | ok: [testbed-node-0] 2026-02-08 03:07:01.533077 | orchestrator | ok: [testbed-node-1] 2026-02-08 03:07:01.533082 | orchestrator | ok: [testbed-node-2] 2026-02-08 03:07:01.533087 | orchestrator | ok: [testbed-node-3] 2026-02-08 03:07:01.533092 | orchestrator | ok: [testbed-node-4] 2026-02-08 03:07:01.533097 | orchestrator | ok: [testbed-node-5] 2026-02-08 03:07:01.533102 | orchestrator | 2026-02-08 03:07:01.533107 | orchestrator | TASK [osism.commons.network : Copy dispatcher scripts] ************************* 2026-02-08 03:07:01.533113 | orchestrator | Sunday 08 February 2026 03:06:54 +0000 (0:00:02.104) 0:00:17.664 ******* 2026-02-08 03:07:01.533118 | orchestrator | skipping: [testbed-node-0] 2026-02-08 03:07:01.533123 | orchestrator | skipping: [testbed-node-1] 2026-02-08 03:07:01.533128 | orchestrator | skipping: [testbed-node-2] 2026-02-08 03:07:01.533133 | orchestrator | skipping: [testbed-node-3] 2026-02-08 03:07:01.533138 | orchestrator | skipping: [testbed-node-4] 2026-02-08 03:07:01.533143 | 
orchestrator | skipping: [testbed-node-5] 2026-02-08 03:07:01.533149 | orchestrator | changed: [testbed-manager] => (item={'dest': 'routable.d/iptables.sh', 'src': '/opt/configuration/network/iptables.sh'}) 2026-02-08 03:07:01.533155 | orchestrator | 2026-02-08 03:07:01.533161 | orchestrator | TASK [osism.commons.network : Manage service networkd-dispatcher] ************** 2026-02-08 03:07:01.533166 | orchestrator | Sunday 08 February 2026 03:06:55 +0000 (0:00:00.954) 0:00:18.618 ******* 2026-02-08 03:07:01.533171 | orchestrator | ok: [testbed-manager] 2026-02-08 03:07:01.533176 | orchestrator | changed: [testbed-node-0] 2026-02-08 03:07:01.533181 | orchestrator | changed: [testbed-node-1] 2026-02-08 03:07:01.533186 | orchestrator | changed: [testbed-node-2] 2026-02-08 03:07:01.533191 | orchestrator | changed: [testbed-node-3] 2026-02-08 03:07:01.533196 | orchestrator | changed: [testbed-node-4] 2026-02-08 03:07:01.533201 | orchestrator | changed: [testbed-node-5] 2026-02-08 03:07:01.533206 | orchestrator | 2026-02-08 03:07:01.533230 | orchestrator | TASK [osism.commons.network : Include cleanup tasks] *************************** 2026-02-08 03:07:01.533236 | orchestrator | Sunday 08 February 2026 03:06:57 +0000 (0:00:01.675) 0:00:20.293 ******* 2026-02-08 03:07:01.533241 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-netplan.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-02-08 03:07:01.533253 | orchestrator | 2026-02-08 03:07:01.533259 | orchestrator | TASK [osism.commons.network : List existing configuration files] *************** 2026-02-08 03:07:01.533264 | orchestrator | Sunday 08 February 2026 03:06:58 +0000 (0:00:01.324) 0:00:21.617 ******* 2026-02-08 03:07:01.533269 | orchestrator | ok: [testbed-node-0] 2026-02-08 03:07:01.533274 | orchestrator | ok: [testbed-manager] 2026-02-08 03:07:01.533279 | orchestrator 
| ok: [testbed-node-1] 2026-02-08 03:07:01.533284 | orchestrator | ok: [testbed-node-2] 2026-02-08 03:07:01.533293 | orchestrator | ok: [testbed-node-3] 2026-02-08 03:07:01.533299 | orchestrator | ok: [testbed-node-4] 2026-02-08 03:07:01.533304 | orchestrator | ok: [testbed-node-5] 2026-02-08 03:07:01.533309 | orchestrator | 2026-02-08 03:07:01.533314 | orchestrator | TASK [osism.commons.network : Set network_configured_files fact] *************** 2026-02-08 03:07:01.533319 | orchestrator | Sunday 08 February 2026 03:06:59 +0000 (0:00:01.178) 0:00:22.795 ******* 2026-02-08 03:07:01.533324 | orchestrator | ok: [testbed-manager] 2026-02-08 03:07:01.533329 | orchestrator | ok: [testbed-node-0] 2026-02-08 03:07:01.533334 | orchestrator | ok: [testbed-node-1] 2026-02-08 03:07:01.533339 | orchestrator | ok: [testbed-node-2] 2026-02-08 03:07:01.533344 | orchestrator | ok: [testbed-node-3] 2026-02-08 03:07:01.533349 | orchestrator | ok: [testbed-node-4] 2026-02-08 03:07:01.533353 | orchestrator | ok: [testbed-node-5] 2026-02-08 03:07:01.533358 | orchestrator | 2026-02-08 03:07:01.533363 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] *************** 2026-02-08 03:07:01.533368 | orchestrator | Sunday 08 February 2026 03:07:00 +0000 (0:00:00.658) 0:00:23.454 ******* 2026-02-08 03:07:01.533374 | orchestrator | skipping: [testbed-manager] => (item=/etc/netplan/01-osism.yaml)  2026-02-08 03:07:01.533379 | orchestrator | skipping: [testbed-node-0] => (item=/etc/netplan/01-osism.yaml)  2026-02-08 03:07:01.533384 | orchestrator | skipping: [testbed-node-1] => (item=/etc/netplan/01-osism.yaml)  2026-02-08 03:07:01.533389 | orchestrator | skipping: [testbed-node-2] => (item=/etc/netplan/01-osism.yaml)  2026-02-08 03:07:01.533394 | orchestrator | changed: [testbed-manager] => (item=/etc/netplan/50-cloud-init.yaml) 2026-02-08 03:07:01.533398 | orchestrator | skipping: [testbed-node-3] => (item=/etc/netplan/01-osism.yaml)  2026-02-08 03:07:01.533403 | 
orchestrator | changed: [testbed-node-0] => (item=/etc/netplan/50-cloud-init.yaml) 2026-02-08 03:07:01.533408 | orchestrator | skipping: [testbed-node-4] => (item=/etc/netplan/01-osism.yaml)  2026-02-08 03:07:01.533413 | orchestrator | changed: [testbed-node-1] => (item=/etc/netplan/50-cloud-init.yaml) 2026-02-08 03:07:01.533418 | orchestrator | changed: [testbed-node-2] => (item=/etc/netplan/50-cloud-init.yaml) 2026-02-08 03:07:01.533423 | orchestrator | changed: [testbed-node-3] => (item=/etc/netplan/50-cloud-init.yaml) 2026-02-08 03:07:01.533428 | orchestrator | changed: [testbed-node-4] => (item=/etc/netplan/50-cloud-init.yaml) 2026-02-08 03:07:01.533433 | orchestrator | skipping: [testbed-node-5] => (item=/etc/netplan/01-osism.yaml)  2026-02-08 03:07:01.533438 | orchestrator | changed: [testbed-node-5] => (item=/etc/netplan/50-cloud-init.yaml) 2026-02-08 03:07:01.533443 | orchestrator | 2026-02-08 03:07:01.533453 | orchestrator | TASK [osism.commons.network : Include dummy interfaces] ************************ 2026-02-08 03:07:19.116586 | orchestrator | Sunday 08 February 2026 03:07:01 +0000 (0:00:01.303) 0:00:24.757 ******* 2026-02-08 03:07:19.116706 | orchestrator | skipping: [testbed-manager] 2026-02-08 03:07:19.116722 | orchestrator | skipping: [testbed-node-0] 2026-02-08 03:07:19.116734 | orchestrator | skipping: [testbed-node-1] 2026-02-08 03:07:19.116744 | orchestrator | skipping: [testbed-node-2] 2026-02-08 03:07:19.116755 | orchestrator | skipping: [testbed-node-3] 2026-02-08 03:07:19.116765 | orchestrator | skipping: [testbed-node-4] 2026-02-08 03:07:19.116776 | orchestrator | skipping: [testbed-node-5] 2026-02-08 03:07:19.116786 | orchestrator | 2026-02-08 03:07:19.116815 | orchestrator | TASK [osism.commons.network : Include vxlan interfaces] ************************ 2026-02-08 03:07:19.116822 | orchestrator | Sunday 08 February 2026 03:07:02 +0000 (0:00:00.659) 0:00:25.416 ******* 2026-02-08 03:07:19.116830 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/vxlan-interfaces.yml for testbed-manager, testbed-node-1, testbed-node-0, testbed-node-5, testbed-node-2, testbed-node-4, testbed-node-3 2026-02-08 03:07:19.116838 | orchestrator | 2026-02-08 03:07:19.116844 | orchestrator | TASK [osism.commons.network : Create systemd networkd netdev files] ************ 2026-02-08 03:07:19.116850 | orchestrator | Sunday 08 February 2026 03:07:06 +0000 (0:00:04.589) 0:00:30.006 ******* 2026-02-08 03:07:19.116857 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'addresses': ['192.168.112.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 42}}) 2026-02-08 03:07:19.116866 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 42}}) 2026-02-08 03:07:19.116872 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 42}}) 2026-02-08 03:07:19.116878 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 42}}) 2026-02-08 03:07:19.116884 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 
23}}) 2026-02-08 03:07:19.116902 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 42}}) 2026-02-08 03:07:19.116908 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 42}}) 2026-02-08 03:07:19.116914 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.10/20'], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 23}}) 2026-02-08 03:07:19.116919 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 42}}) 2026-02-08 03:07:19.116925 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.11/20'], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 23}}) 2026-02-08 03:07:19.116936 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.12/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 23}}) 2026-02-08 03:07:19.116954 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.14/20'], 'dests': ['192.168.16.10', '192.168.16.11', 
'192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 23}}) 2026-02-08 03:07:19.116966 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.13/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 23}}) 2026-02-08 03:07:19.116972 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.15/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 23}}) 2026-02-08 03:07:19.116978 | orchestrator | 2026-02-08 03:07:19.116984 | orchestrator | TASK [osism.commons.network : Create systemd networkd network files] *********** 2026-02-08 03:07:19.116990 | orchestrator | Sunday 08 February 2026 03:07:12 +0000 (0:00:06.088) 0:00:36.095 ******* 2026-02-08 03:07:19.116996 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 42}}) 2026-02-08 03:07:19.117002 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'addresses': ['192.168.112.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 42}}) 2026-02-08 03:07:19.117008 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.11/20'], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 23}}) 2026-02-08 03:07:19.117014 | orchestrator | changed: 
[testbed-node-3] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 42}}) 2026-02-08 03:07:19.117020 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 42}}) 2026-02-08 03:07:19.117029 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 42}}) 2026-02-08 03:07:19.117035 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 42}}) 2026-02-08 03:07:19.117041 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 23}}) 2026-02-08 03:07:19.117047 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 42}}) 2026-02-08 03:07:19.117053 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.13/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 
'mtu': 1350, 'vni': 23}}) 2026-02-08 03:07:19.117059 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.10/20'], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 23}}) 2026-02-08 03:07:19.117068 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.14/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 23}}) 2026-02-08 03:07:19.117081 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.12/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 23}}) 2026-02-08 03:07:25.669091 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.15/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 23}}) 2026-02-08 03:07:25.669347 | orchestrator | 2026-02-08 03:07:25.669385 | orchestrator | TASK [osism.commons.network : Include networkd cleanup tasks] ****************** 2026-02-08 03:07:25.669410 | orchestrator | Sunday 08 February 2026 03:07:19 +0000 (0:00:06.244) 0:00:42.339 ******* 2026-02-08 03:07:25.669433 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-networkd.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-02-08 03:07:25.669453 | orchestrator | 2026-02-08 03:07:25.669472 | orchestrator | TASK [osism.commons.network : List existing configuration files] *************** 
2026-02-08 03:07:25.669493 | orchestrator | Sunday 08 February 2026 03:07:20 +0000 (0:00:01.330) 0:00:43.670 *******
2026-02-08 03:07:25.669512 | orchestrator | ok: [testbed-manager]
2026-02-08 03:07:25.669533 | orchestrator | ok: [testbed-node-0]
2026-02-08 03:07:25.669552 | orchestrator | ok: [testbed-node-1]
2026-02-08 03:07:25.669568 | orchestrator | ok: [testbed-node-2]
2026-02-08 03:07:25.669587 | orchestrator | ok: [testbed-node-3]
2026-02-08 03:07:25.669608 | orchestrator | ok: [testbed-node-4]
2026-02-08 03:07:25.669627 | orchestrator | ok: [testbed-node-5]
2026-02-08 03:07:25.669644 | orchestrator |
2026-02-08 03:07:25.669658 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] ***************
2026-02-08 03:07:25.669672 | orchestrator | Sunday 08 February 2026 03:07:21 +0000 (0:00:01.240) 0:00:44.910 *******
2026-02-08 03:07:25.669686 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.network)
2026-02-08 03:07:25.669700 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.network)
2026-02-08 03:07:25.669712 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-02-08 03:07:25.669727 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-02-08 03:07:25.669739 | orchestrator | skipping: [testbed-manager]
2026-02-08 03:07:25.669753 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.network)
2026-02-08 03:07:25.669767 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.network)
2026-02-08 03:07:25.669778 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-02-08 03:07:25.669789 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-02-08 03:07:25.669800 | orchestrator | skipping: [testbed-node-0]
2026-02-08 03:07:25.669811 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan1.network)
2026-02-08 03:07:25.669841 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.network)
2026-02-08 03:07:25.669852 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-02-08 03:07:25.669872 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-02-08 03:07:25.669921 | orchestrator | skipping: [testbed-node-1]
2026-02-08 03:07:25.669941 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.network)
2026-02-08 03:07:25.669961 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.network)
2026-02-08 03:07:25.669979 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-02-08 03:07:25.669996 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-02-08 03:07:25.670008 | orchestrator | skipping: [testbed-node-2]
2026-02-08 03:07:25.670077 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.network)
2026-02-08 03:07:25.670091 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.network)
2026-02-08 03:07:25.670102 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-02-08 03:07:25.670113 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-02-08 03:07:25.670123 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.network)
2026-02-08 03:07:25.670134 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.network)
2026-02-08 03:07:25.670145 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-02-08 03:07:25.670156 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-02-08 03:07:25.670167 | orchestrator | skipping: [testbed-node-3]
2026-02-08 03:07:25.670177 | orchestrator | skipping: [testbed-node-4]
2026-02-08 03:07:25.670192 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.network)
2026-02-08 03:07:25.670235 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.network)
2026-02-08 03:07:25.670256 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-02-08 03:07:25.670276 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-02-08 03:07:25.670294 | orchestrator | skipping: [testbed-node-5]
2026-02-08 03:07:25.670308 | orchestrator |
2026-02-08 03:07:25.670319 | orchestrator | RUNNING HANDLER [osism.commons.network : Reload systemd-networkd] **************
2026-02-08 03:07:25.670351 | orchestrator | Sunday 08 February 2026 03:07:23 +0000 (0:00:02.133) 0:00:47.044 *******
2026-02-08 03:07:25.670363 | orchestrator | skipping: [testbed-manager]
2026-02-08 03:07:25.670381 | orchestrator | skipping: [testbed-node-0]
2026-02-08 03:07:25.670399 | orchestrator | skipping: [testbed-node-1]
2026-02-08 03:07:25.670418 | orchestrator | skipping: [testbed-node-2]
2026-02-08 03:07:25.670436 | orchestrator | skipping: [testbed-node-3]
2026-02-08 03:07:25.670450 | orchestrator | skipping: [testbed-node-4]
2026-02-08 03:07:25.670461 | orchestrator | skipping: [testbed-node-5]
2026-02-08 03:07:25.670476 | orchestrator |
2026-02-08 03:07:25.670494 | orchestrator | RUNNING HANDLER [osism.commons.network : Netplan configuration changed] ********
2026-02-08 03:07:25.670513 | orchestrator | Sunday 08 February 2026 03:07:24 +0000 (0:00:00.665) 0:00:47.709 *******
2026-02-08 03:07:25.670532 | orchestrator | skipping: [testbed-manager]
2026-02-08 03:07:25.670551 | orchestrator | skipping: [testbed-node-0]
2026-02-08 03:07:25.670569 | orchestrator | skipping: [testbed-node-1]
2026-02-08 03:07:25.670584 | orchestrator | skipping: [testbed-node-2]
2026-02-08 03:07:25.670595 | orchestrator | skipping: [testbed-node-3]
2026-02-08 03:07:25.670611 | orchestrator | skipping: [testbed-node-4]
2026-02-08 03:07:25.670630 | orchestrator | skipping: [testbed-node-5]
2026-02-08 03:07:25.670649 | orchestrator |
2026-02-08 03:07:25.670668 | orchestrator | PLAY RECAP *********************************************************************
2026-02-08 03:07:25.670682 | orchestrator | testbed-manager : ok=21  changed=5  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-02-08 03:07:25.670695 | orchestrator | testbed-node-0 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-02-08 03:07:25.670720 | orchestrator | testbed-node-1 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-02-08 03:07:25.670731 | orchestrator | testbed-node-2 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-02-08 03:07:25.670742 | orchestrator | testbed-node-3 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-02-08 03:07:25.670752 | orchestrator | testbed-node-4 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-02-08 03:07:25.670763 | orchestrator | testbed-node-5 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-02-08 03:07:25.670774 | orchestrator |
2026-02-08 03:07:25.670785 | orchestrator |
2026-02-08 03:07:25.670796 | orchestrator | TASKS RECAP ********************************************************************
2026-02-08 03:07:25.670806 | orchestrator | Sunday 08 February 2026 03:07:25 +0000 (0:00:00.757) 0:00:48.467 *******
2026-02-08 03:07:25.670834 | orchestrator | ===============================================================================
2026-02-08 03:07:25.670845 | orchestrator | osism.commons.network : Create systemd networkd network files ----------- 6.24s
2026-02-08 03:07:25.670856 | orchestrator | osism.commons.network : Create systemd networkd netdev files ------------ 6.09s
2026-02-08 03:07:25.670867 | orchestrator | osism.commons.network : Include vxlan interfaces ------------------------ 4.59s
2026-02-08 03:07:25.670877 | orchestrator | osism.commons.network : Prepare netplan configuration template ---------- 3.41s
2026-02-08 03:07:25.670888 | orchestrator | osism.commons.network : Remove unused configuration files --------------- 2.13s
2026-02-08 03:07:25.670899 | orchestrator | osism.commons.network : Install package networkd-dispatcher ------------- 2.10s
2026-02-08 03:07:25.670909 | orchestrator | osism.commons.network : Install required packages ----------------------- 1.95s
2026-02-08 03:07:25.670920 | orchestrator | osism.commons.network : Remove netplan configuration template ----------- 1.74s
2026-02-08 03:07:25.670931 | orchestrator | osism.commons.network : Manage service networkd-dispatcher -------------- 1.68s
2026-02-08 03:07:25.670942 | orchestrator | osism.commons.network : Copy netplan configuration ---------------------- 1.63s
2026-02-08 03:07:25.670952 | orchestrator | osism.commons.network : Remove ifupdown package ------------------------- 1.63s
2026-02-08 03:07:25.670963 | orchestrator | osism.commons.network : Include type specific tasks --------------------- 1.33s
2026-02-08 03:07:25.670974 | orchestrator | osism.commons.network : Include networkd cleanup tasks ------------------ 1.33s
2026-02-08 03:07:25.670984 | orchestrator | osism.commons.network : Include cleanup tasks --------------------------- 1.32s
2026-02-08 03:07:25.670995 | orchestrator | osism.commons.network : Remove unused configuration files --------------- 1.30s
2026-02-08 03:07:25.671006 | orchestrator | osism.commons.network : List existing configuration files --------------- 1.24s
2026-02-08 03:07:25.671017 | orchestrator | osism.commons.network : Check if path for interface file exists --------- 1.18s
2026-02-08 03:07:25.671027 | orchestrator | osism.commons.network : List existing configuration files --------------- 1.18s
2026-02-08 03:07:25.671038 | orchestrator | osism.commons.network : Create required directories --------------------- 0.98s
2026-02-08 03:07:25.671049 | orchestrator | osism.commons.network : Copy dispatcher scripts ------------------------- 0.95s
2026-02-08 03:07:25.993077 | orchestrator | + osism apply wireguard
2026-02-08 03:07:37.996568 | orchestrator | 2026-02-08 03:07:37 | INFO  | Task 2d40b8f7-6d86-46b1-ba76-fef3075e90c1 (wireguard) was prepared for execution.
2026-02-08 03:07:37.996736 | orchestrator | 2026-02-08 03:07:37 | INFO  | It takes a moment until task 2d40b8f7-6d86-46b1-ba76-fef3075e90c1 (wireguard) has been started and output is visible here.
2026-02-08 03:07:58.804114 | orchestrator |
2026-02-08 03:07:58.804250 | orchestrator | PLAY [Apply role wireguard] ****************************************************
2026-02-08 03:07:58.804293 | orchestrator |
2026-02-08 03:07:58.804306 | orchestrator | TASK [osism.services.wireguard : Install iptables package] *********************
2026-02-08 03:07:58.804321 | orchestrator | Sunday 08 February 2026 03:07:42 +0000 (0:00:00.222) 0:00:00.222 *******
2026-02-08 03:07:58.804339 | orchestrator | ok: [testbed-manager]
2026-02-08 03:07:58.804360 | orchestrator |
2026-02-08 03:07:58.804378 | orchestrator | TASK [osism.services.wireguard : Install wireguard package] ********************
2026-02-08 03:07:58.804395 | orchestrator | Sunday 08 February 2026 03:07:43 +0000 (0:00:01.560) 0:00:01.783 *******
2026-02-08 03:07:58.804413 | orchestrator | changed: [testbed-manager]
2026-02-08 03:07:58.804436 | orchestrator |
2026-02-08 03:07:58.804456 | orchestrator | TASK [osism.services.wireguard : Create public and private key - server] *******
2026-02-08 03:07:58.804476 | orchestrator | Sunday 08 February 2026 03:07:50 +0000 (0:00:06.933) 0:00:08.717 *******
2026-02-08 03:07:58.804494 | orchestrator | changed: [testbed-manager]
2026-02-08 03:07:58.804514 | orchestrator |
2026-02-08 03:07:58.804532 | orchestrator | TASK [osism.services.wireguard : Create preshared key] *************************
2026-02-08 03:07:58.804551 | orchestrator | Sunday 08 February 2026 03:07:51 +0000 (0:00:00.600) 0:00:09.317 *******
2026-02-08 03:07:58.804570 | orchestrator | changed: [testbed-manager]
2026-02-08 03:07:58.804585 | orchestrator |
2026-02-08 03:07:58.804596 | orchestrator | TASK [osism.services.wireguard : Get preshared key] ****************************
2026-02-08 03:07:58.804607 | orchestrator | Sunday 08 February 2026 03:07:51 +0000 (0:00:00.454) 0:00:09.772 *******
2026-02-08 03:07:58.804617 | orchestrator | ok: [testbed-manager]
2026-02-08 03:07:58.804628 | orchestrator |
2026-02-08 03:07:58.804639 | orchestrator | TASK [osism.services.wireguard : Get public key - server] **********************
2026-02-08 03:07:58.804650 | orchestrator | Sunday 08 February 2026 03:07:52 +0000 (0:00:00.699) 0:00:10.472 *******
2026-02-08 03:07:58.804663 | orchestrator | ok: [testbed-manager]
2026-02-08 03:07:58.804678 | orchestrator |
2026-02-08 03:07:58.804698 | orchestrator | TASK [osism.services.wireguard : Get private key - server] *********************
2026-02-08 03:07:58.804716 | orchestrator | Sunday 08 February 2026 03:07:52 +0000 (0:00:00.429) 0:00:10.901 *******
2026-02-08 03:07:58.804735 | orchestrator | ok: [testbed-manager]
2026-02-08 03:07:58.804755 | orchestrator |
2026-02-08 03:07:58.804774 | orchestrator | TASK [osism.services.wireguard : Copy wg0.conf configuration file] *************
2026-02-08 03:07:58.804793 | orchestrator | Sunday 08 February 2026 03:07:53 +0000 (0:00:00.426) 0:00:11.327 *******
2026-02-08 03:07:58.804806 | orchestrator | changed: [testbed-manager]
2026-02-08 03:07:58.804818 | orchestrator |
2026-02-08 03:07:58.804831 | orchestrator | TASK [osism.services.wireguard : Copy client configuration files] **************
2026-02-08 03:07:58.804845 | orchestrator | Sunday 08 February 2026 03:07:54 +0000 (0:00:01.218) 0:00:12.546 *******
2026-02-08 03:07:58.804858 | orchestrator | changed: [testbed-manager] => (item=None)
2026-02-08 03:07:58.804872 | orchestrator | changed: [testbed-manager]
2026-02-08 03:07:58.804885 | orchestrator |
2026-02-08 03:07:58.804898 | orchestrator | TASK [osism.services.wireguard : Manage wg-quick@wg0.service service] **********
2026-02-08 03:07:58.804911 | orchestrator | Sunday 08 February 2026 03:07:55 +0000 (0:00:00.925) 0:00:13.471 *******
2026-02-08 03:07:58.804925 | orchestrator | changed: [testbed-manager]
2026-02-08 03:07:58.804939 | orchestrator |
2026-02-08 03:07:58.804952 | orchestrator | RUNNING HANDLER [osism.services.wireguard : Restart wg0 service] ***************
2026-02-08 03:07:58.804965 | orchestrator | Sunday 08 February 2026 03:07:57 +0000 (0:00:01.766) 0:00:15.238 *******
2026-02-08 03:07:58.804978 | orchestrator | changed: [testbed-manager]
2026-02-08 03:07:58.804992 | orchestrator |
2026-02-08 03:07:58.805005 | orchestrator | PLAY RECAP *********************************************************************
2026-02-08 03:07:58.805018 | orchestrator | testbed-manager : ok=11  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-08 03:07:58.805030 | orchestrator |
2026-02-08 03:07:58.805041 | orchestrator |
2026-02-08 03:07:58.805053 | orchestrator | TASKS RECAP ********************************************************************
2026-02-08 03:07:58.805076 | orchestrator | Sunday 08 February 2026 03:07:58 +0000 (0:00:01.008) 0:00:16.246 *******
2026-02-08 03:07:58.805087 | orchestrator | ===============================================================================
2026-02-08 03:07:58.805098 | orchestrator | osism.services.wireguard : Install wireguard package -------------------- 6.93s
2026-02-08 03:07:58.805109 | orchestrator | osism.services.wireguard : Manage wg-quick@wg0.service service ---------- 1.77s
2026-02-08 03:07:58.805119 | orchestrator | osism.services.wireguard : Install iptables package --------------------- 1.56s
2026-02-08 03:07:58.805130 | orchestrator | osism.services.wireguard : Copy wg0.conf configuration file ------------- 1.22s
2026-02-08 03:07:58.805140 | orchestrator | osism.services.wireguard : Restart wg0 service -------------------------- 1.01s
2026-02-08 03:07:58.805151 | orchestrator | osism.services.wireguard : Copy client configuration files -------------- 0.93s
2026-02-08 03:07:58.805162 | orchestrator | osism.services.wireguard : Get preshared key ---------------------------- 0.70s
2026-02-08 03:07:58.805172 | orchestrator | osism.services.wireguard : Create public and private key - server ------- 0.60s
2026-02-08 03:07:58.805183 | orchestrator | osism.services.wireguard : Create preshared key ------------------------- 0.45s
2026-02-08 03:07:58.805193 | orchestrator | osism.services.wireguard : Get public key - server ---------------------- 0.43s
2026-02-08 03:07:58.805204 | orchestrator | osism.services.wireguard : Get private key - server --------------------- 0.43s
2026-02-08 03:07:59.124280 | orchestrator | + sh -c /opt/configuration/scripts/prepare-wireguard-configuration.sh
2026-02-08 03:07:59.157137 | orchestrator | % Total % Received % Xferd Average Speed Time Time Time Current
2026-02-08 03:07:59.157278 | orchestrator | Dload Upload Total Spent Left Speed
2026-02-08 03:07:59.237452 | orchestrator | 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 14 100 14 0 0 173 0 --:--:-- --:--:-- --:--:-- 175
2026-02-08 03:07:59.254127 | orchestrator | + osism apply --environment custom workarounds
2026-02-08 03:08:01.345614 | orchestrator | 2026-02-08 03:08:01 | INFO  | Trying to run play workarounds in environment custom
2026-02-08 03:08:11.512155 | orchestrator | 2026-02-08 03:08:11 | INFO  | Task 758c613e-76b4-413f-90a0-58b542e2d6ac (workarounds) was prepared for execution.
2026-02-08 03:08:11.512381 | orchestrator | 2026-02-08 03:08:11 | INFO  | It takes a moment until task 758c613e-76b4-413f-90a0-58b542e2d6ac (workarounds) has been started and output is visible here.
2026-02-08 03:08:37.041178 | orchestrator |
2026-02-08 03:08:37.041330 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-02-08 03:08:37.041351 | orchestrator |
2026-02-08 03:08:37.041371 | orchestrator | TASK [Group hosts based on virtualization_role] ********************************
2026-02-08 03:08:37.041391 | orchestrator | Sunday 08 February 2026 03:08:15 +0000 (0:00:00.127) 0:00:00.127 *******
2026-02-08 03:08:37.041410 | orchestrator | changed: [testbed-node-0] => (item=virtualization_role_guest)
2026-02-08 03:08:37.041430 | orchestrator | changed: [testbed-node-1] => (item=virtualization_role_guest)
2026-02-08 03:08:37.041447 | orchestrator | changed: [testbed-node-2] => (item=virtualization_role_guest)
2026-02-08 03:08:37.041488 | orchestrator | changed: [testbed-node-3] => (item=virtualization_role_guest)
2026-02-08 03:08:37.041510 | orchestrator | changed: [testbed-node-4] => (item=virtualization_role_guest)
2026-02-08 03:08:37.041529 | orchestrator | changed: [testbed-node-5] => (item=virtualization_role_guest)
2026-02-08 03:08:37.041548 | orchestrator | changed: [testbed-manager] => (item=virtualization_role_guest)
2026-02-08 03:08:37.041560 | orchestrator |
2026-02-08 03:08:37.041571 | orchestrator | PLAY [Apply netplan configuration on the manager node] *************************
2026-02-08 03:08:37.041582 | orchestrator |
2026-02-08 03:08:37.041593 | orchestrator | TASK [Apply netplan configuration] *********************************************
2026-02-08 03:08:37.041604 | orchestrator | Sunday 08 February 2026 03:08:16 +0000 (0:00:00.797) 0:00:00.924 *******
2026-02-08 03:08:37.041616 | orchestrator | ok: [testbed-manager]
2026-02-08 03:08:37.041654 | orchestrator |
2026-02-08 03:08:37.041666 | orchestrator | PLAY [Apply netplan configuration on all other nodes] **************************
2026-02-08 03:08:37.041676 | orchestrator |
2026-02-08 03:08:37.041688 | orchestrator | TASK [Apply netplan configuration] *********************************************
2026-02-08 03:08:37.041699 | orchestrator | Sunday 08 February 2026 03:08:19 +0000 (0:00:02.563) 0:00:03.487 *******
2026-02-08 03:08:37.041710 | orchestrator | ok: [testbed-node-0]
2026-02-08 03:08:37.041726 | orchestrator | ok: [testbed-node-1]
2026-02-08 03:08:37.041746 | orchestrator | ok: [testbed-node-2]
2026-02-08 03:08:37.041764 | orchestrator | ok: [testbed-node-3]
2026-02-08 03:08:37.041782 | orchestrator | ok: [testbed-node-4]
2026-02-08 03:08:37.041799 | orchestrator | ok: [testbed-node-5]
2026-02-08 03:08:37.041818 | orchestrator |
2026-02-08 03:08:37.041839 | orchestrator | PLAY [Add custom CA certificates to non-manager nodes] *************************
2026-02-08 03:08:37.041854 | orchestrator |
2026-02-08 03:08:37.041872 | orchestrator | TASK [Copy custom CA certificates] *********************************************
2026-02-08 03:08:37.041897 | orchestrator | Sunday 08 February 2026 03:08:21 +0000 (0:00:01.854) 0:00:05.342 *******
2026-02-08 03:08:37.041916 | orchestrator | changed: [testbed-node-3] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-02-08 03:08:37.041935 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-02-08 03:08:37.041952 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-02-08 03:08:37.041969 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-02-08 03:08:37.041988 | orchestrator | changed: [testbed-node-4] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-02-08 03:08:37.042006 | orchestrator | changed: [testbed-node-5] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-02-08 03:08:37.042256 | orchestrator |
2026-02-08 03:08:37.042282 | orchestrator | TASK [Run update-ca-certificates] **********************************************
2026-02-08 03:08:37.042301 | orchestrator | Sunday 08 February 2026 03:08:22 +0000 (0:00:01.490) 0:00:06.832 *******
2026-02-08 03:08:37.042319 | orchestrator | changed: [testbed-node-1]
2026-02-08 03:08:37.042338 | orchestrator | changed: [testbed-node-0]
2026-02-08 03:08:37.042357 | orchestrator | changed: [testbed-node-2]
2026-02-08 03:08:37.042376 | orchestrator | changed: [testbed-node-3]
2026-02-08 03:08:37.042392 | orchestrator | changed: [testbed-node-4]
2026-02-08 03:08:37.042409 | orchestrator | changed: [testbed-node-5]
2026-02-08 03:08:37.042424 | orchestrator |
2026-02-08 03:08:37.042441 | orchestrator | TASK [Run update-ca-trust] *****************************************************
2026-02-08 03:08:37.042458 | orchestrator | Sunday 08 February 2026 03:08:26 +0000 (0:00:03.593) 0:00:10.426 *******
2026-02-08 03:08:37.042475 | orchestrator | skipping: [testbed-node-0]
2026-02-08 03:08:37.042492 | orchestrator | skipping: [testbed-node-1]
2026-02-08 03:08:37.042510 | orchestrator | skipping: [testbed-node-2]
2026-02-08 03:08:37.042526 | orchestrator | skipping: [testbed-node-3]
2026-02-08 03:08:37.042545 | orchestrator | skipping: [testbed-node-4]
2026-02-08 03:08:37.042562 | orchestrator | skipping: [testbed-node-5]
2026-02-08 03:08:37.042579 | orchestrator |
2026-02-08 03:08:37.042597 | orchestrator | PLAY [Add a workaround service] ************************************************
2026-02-08 03:08:37.042614 | orchestrator |
2026-02-08 03:08:37.042630 | orchestrator | TASK [Copy workarounds.sh scripts] *********************************************
2026-02-08 03:08:37.042649 | orchestrator | Sunday 08 February 2026 03:08:26 +0000 (0:00:00.719) 0:00:11.145 *******
2026-02-08 03:08:37.042664 | orchestrator | changed: [testbed-node-0]
2026-02-08 03:08:37.042680 | orchestrator | changed: [testbed-node-1]
2026-02-08 03:08:37.042696 | orchestrator | changed: [testbed-node-2]
2026-02-08 03:08:37.042712 | orchestrator | changed: [testbed-node-3]
2026-02-08 03:08:37.042730 | orchestrator | changed: [testbed-node-4]
2026-02-08 03:08:37.042750 | orchestrator | changed: [testbed-node-5]
2026-02-08 03:08:37.042792 | orchestrator | changed: [testbed-manager]
2026-02-08 03:08:37.042810 | orchestrator |
2026-02-08 03:08:37.042829 | orchestrator | TASK [Copy workarounds systemd unit file] **************************************
2026-02-08 03:08:37.042848 | orchestrator | Sunday 08 February 2026 03:08:28 +0000 (0:00:01.649) 0:00:12.795 *******
2026-02-08 03:08:37.042866 | orchestrator | changed: [testbed-node-0]
2026-02-08 03:08:37.042886 | orchestrator | changed: [testbed-node-1]
2026-02-08 03:08:37.042906 | orchestrator | changed: [testbed-node-2]
2026-02-08 03:08:37.042925 | orchestrator | changed: [testbed-node-3]
2026-02-08 03:08:37.042944 | orchestrator | changed: [testbed-node-4]
2026-02-08 03:08:37.042962 | orchestrator | changed: [testbed-node-5]
2026-02-08 03:08:37.043010 | orchestrator | changed: [testbed-manager]
2026-02-08 03:08:37.043030 | orchestrator |
2026-02-08 03:08:37.043044 | orchestrator | TASK [Reload systemd daemon] ***************************************************
2026-02-08 03:08:37.043055 | orchestrator | Sunday 08 February 2026 03:08:30 +0000 (0:00:01.588) 0:00:14.440 *******
2026-02-08 03:08:37.043066 | orchestrator | ok: [testbed-node-1]
2026-02-08 03:08:37.043078 | orchestrator | ok: [testbed-node-2]
2026-02-08 03:08:37.043089 | orchestrator | ok: [testbed-node-4]
2026-02-08 03:08:37.043099 | orchestrator | ok: [testbed-node-3]
2026-02-08 03:08:37.043110 | orchestrator | ok: [testbed-node-0]
2026-02-08 03:08:37.043121 | orchestrator | ok: [testbed-node-5]
2026-02-08 03:08:37.043131 | orchestrator | ok: [testbed-manager]
2026-02-08 03:08:37.043142 | orchestrator |
2026-02-08 03:08:37.043153 | orchestrator | TASK [Enable workarounds.service (Debian)] *************************************
2026-02-08 03:08:37.043164 | orchestrator | Sunday 08 February 2026 03:08:31 +0000 (0:00:01.588) 0:00:16.028 *******
2026-02-08 03:08:37.043175 | orchestrator | changed: [testbed-node-0]
2026-02-08 03:08:37.043185 | orchestrator | changed: [testbed-node-1]
2026-02-08 03:08:37.043196 | orchestrator | changed: [testbed-node-2]
2026-02-08 03:08:37.043270 | orchestrator | changed: [testbed-node-3]
2026-02-08 03:08:37.043285 | orchestrator | changed: [testbed-node-4]
2026-02-08 03:08:37.043296 | orchestrator | changed: [testbed-node-5]
2026-02-08 03:08:37.043307 | orchestrator | changed: [testbed-manager]
2026-02-08 03:08:37.043318 | orchestrator |
2026-02-08 03:08:37.043328 | orchestrator | TASK [Enable and start workarounds.service (RedHat)] ***************************
2026-02-08 03:08:37.043339 | orchestrator | Sunday 08 February 2026 03:08:33 +0000 (0:00:01.938) 0:00:17.967 *******
2026-02-08 03:08:37.043350 | orchestrator | skipping: [testbed-node-0]
2026-02-08 03:08:37.043361 | orchestrator | skipping: [testbed-node-1]
2026-02-08 03:08:37.043372 | orchestrator | skipping: [testbed-node-2]
2026-02-08 03:08:37.043383 | orchestrator | skipping: [testbed-node-3]
2026-02-08 03:08:37.043394 | orchestrator | skipping: [testbed-node-4]
2026-02-08 03:08:37.043404 | orchestrator | skipping: [testbed-node-5]
2026-02-08 03:08:37.043415 | orchestrator | skipping: [testbed-manager]
2026-02-08 03:08:37.043426 | orchestrator |
2026-02-08 03:08:37.043437 | orchestrator | PLAY [On Ubuntu 24.04 install python3-docker from Debian Sid] ******************
2026-02-08 03:08:37.043448 | orchestrator |
2026-02-08 03:08:37.043459 | orchestrator | TASK [Install python3-docker] **************************************************
2026-02-08 03:08:37.043469 | orchestrator | Sunday 08 February 2026 03:08:34 +0000 (0:00:00.628) 0:00:18.595 *******
2026-02-08 03:08:37.043480 | orchestrator | ok: [testbed-node-1]
2026-02-08 03:08:37.043491 | orchestrator | ok: [testbed-node-0]
2026-02-08 03:08:37.043502 | orchestrator | ok: [testbed-node-2]
2026-02-08 03:08:37.043512 | orchestrator | ok: [testbed-node-3]
2026-02-08 03:08:37.043523 | orchestrator | ok: [testbed-node-4]
2026-02-08 03:08:37.043545 | orchestrator | ok: [testbed-node-5]
2026-02-08 03:08:37.043556 | orchestrator | ok: [testbed-manager]
2026-02-08 03:08:37.043567 | orchestrator |
2026-02-08 03:08:37.043578 | orchestrator | PLAY RECAP *********************************************************************
2026-02-08 03:08:37.043590 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-02-08 03:08:37.043603 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-08 03:08:37.043625 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-08 03:08:37.043637 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-08 03:08:37.043647 | orchestrator | testbed-node-3 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-08 03:08:37.043658 | orchestrator | testbed-node-4 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-08 03:08:37.043669 | orchestrator | testbed-node-5 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-08 03:08:37.043680 | orchestrator |
2026-02-08 03:08:37.043691 | orchestrator |
2026-02-08 03:08:37.043702 | orchestrator | TASKS RECAP ********************************************************************
2026-02-08 03:08:37.043713 | orchestrator | Sunday 08 February 2026 03:08:37 +0000 (0:00:02.753) 0:00:21.349 *******
2026-02-08 03:08:37.043723 | orchestrator | ===============================================================================
2026-02-08 03:08:37.043734 | orchestrator | Run update-ca-certificates ---------------------------------------------- 3.59s
2026-02-08 03:08:37.043745 | orchestrator | Install python3-docker -------------------------------------------------- 2.75s
2026-02-08 03:08:37.043756 | orchestrator | Apply netplan configuration --------------------------------------------- 2.56s
2026-02-08 03:08:37.043766 | orchestrator | Enable workarounds.service (Debian) ------------------------------------- 1.94s
2026-02-08 03:08:37.043777 | orchestrator | Apply netplan configuration --------------------------------------------- 1.85s
2026-02-08 03:08:37.043788 | orchestrator | Copy workarounds.sh scripts --------------------------------------------- 1.65s
2026-02-08 03:08:37.043799 | orchestrator | Copy workarounds systemd unit file -------------------------------------- 1.65s
2026-02-08 03:08:37.043809 | orchestrator | Reload systemd daemon --------------------------------------------------- 1.59s
2026-02-08 03:08:37.043820 | orchestrator | Copy custom CA certificates --------------------------------------------- 1.49s
2026-02-08 03:08:37.043831 | orchestrator | Group hosts based on virtualization_role -------------------------------- 0.80s
2026-02-08 03:08:37.043842 | orchestrator | Run update-ca-trust ----------------------------------------------------- 0.72s
2026-02-08 03:08:37.043862 | orchestrator | Enable and start workarounds.service (RedHat) --------------------------- 0.63s
2026-02-08 03:08:37.752743 | orchestrator | + osism apply reboot -l testbed-nodes -e ireallymeanit=yes
2026-02-08 03:08:49.957697 | orchestrator | 2026-02-08 03:08:49 | INFO  | Task 82f1c6b2-1d47-4157-8cfe-85d627b88fce (reboot) was prepared for execution.
2026-02-08 03:08:49.957790 | orchestrator | 2026-02-08 03:08:49 | INFO  | It takes a moment until task 82f1c6b2-1d47-4157-8cfe-85d627b88fce (reboot) has been started and output is visible here.
2026-02-08 03:09:00.387057 | orchestrator |
2026-02-08 03:09:00.387197 | orchestrator | PLAY [Reboot systems] **********************************************************
2026-02-08 03:09:00.387255 | orchestrator |
2026-02-08 03:09:00.387263 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2026-02-08 03:09:00.387270 | orchestrator | Sunday 08 February 2026 03:08:54 +0000 (0:00:00.211) 0:00:00.211 *******
2026-02-08 03:09:00.387277 | orchestrator | skipping: [testbed-node-0]
2026-02-08 03:09:00.387284 | orchestrator |
2026-02-08 03:09:00.387290 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2026-02-08 03:09:00.387296 | orchestrator | Sunday 08 February 2026 03:08:54 +0000 (0:00:00.106) 0:00:00.318 *******
2026-02-08 03:09:00.387303 | orchestrator | changed: [testbed-node-0]
2026-02-08 03:09:00.387309 | orchestrator |
2026-02-08 03:09:00.387315 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2026-02-08 03:09:00.387352 | orchestrator | Sunday 08 February 2026 03:08:55 +0000 (0:00:00.974) 0:00:01.292 *******
2026-02-08 03:09:00.387357 | orchestrator | skipping: [testbed-node-0]
2026-02-08 03:09:00.387363 | orchestrator |
2026-02-08 03:09:00.387369 | orchestrator | PLAY [Reboot systems] **********************************************************
2026-02-08 03:09:00.387375 | orchestrator |
2026-02-08 03:09:00.387381 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2026-02-08 03:09:00.387387 | orchestrator | Sunday 08 February 2026 03:08:55 +0000 (0:00:00.113) 0:00:01.405 *******
2026-02-08 03:09:00.387393 | orchestrator | skipping: [testbed-node-1]
2026-02-08 03:09:00.387398 | orchestrator |
2026-02-08 03:09:00.387405 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2026-02-08 03:09:00.387410 | orchestrator | Sunday 08 February 2026 03:08:55 +0000 (0:00:00.111) 0:00:01.517 *******
2026-02-08 03:09:00.387416 | orchestrator | changed: [testbed-node-1]
2026-02-08 03:09:00.387422 | orchestrator |
2026-02-08 03:09:00.387428 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2026-02-08 03:09:00.387452 | orchestrator | Sunday 08 February 2026 03:08:56 +0000 (0:00:00.686) 0:00:02.204 *******
2026-02-08 03:09:00.387458 | orchestrator | skipping: [testbed-node-1]
2026-02-08 03:09:00.387464 | orchestrator |
2026-02-08 03:09:00.387470 | orchestrator | PLAY [Reboot systems] **********************************************************
2026-02-08 03:09:00.387476 | orchestrator |
2026-02-08 03:09:00.387482 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2026-02-08 03:09:00.387489 | orchestrator | Sunday 08 February 2026 03:08:56 +0000 (0:00:00.124) 0:00:02.328 *******
2026-02-08 03:09:00.387495 | orchestrator | skipping: [testbed-node-2]
2026-02-08 03:09:00.387501 | orchestrator |
2026-02-08 03:09:00.387506 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2026-02-08 03:09:00.387513 | orchestrator | Sunday 08 February 2026 03:08:56 +0000 (0:00:00.228) 0:00:02.556 *******
2026-02-08 03:09:00.387519 | orchestrator | changed: [testbed-node-2]
2026-02-08 03:09:00.387525 | orchestrator |
2026-02-08 03:09:00.387532 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2026-02-08 03:09:00.387538 | orchestrator | Sunday 08 February 2026 03:08:57 +0000 (0:00:00.686) 0:00:03.242 *******
2026-02-08 03:09:00.387545 | orchestrator | skipping: [testbed-node-2]
2026-02-08 03:09:00.387551 | orchestrator |
2026-02-08 03:09:00.387556 | orchestrator | PLAY [Reboot systems] **********************************************************
2026-02-08 03:09:00.387562 | orchestrator |
2026-02-08 03:09:00.387568 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2026-02-08 03:09:00.387574 | orchestrator | Sunday 08 February 2026 03:08:57 +0000 (0:00:00.132) 0:00:03.375 *******
2026-02-08 03:09:00.387580 | orchestrator | skipping: [testbed-node-3]
2026-02-08 03:09:00.387586 | orchestrator |
2026-02-08 03:09:00.387592 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2026-02-08 03:09:00.387598 | orchestrator | Sunday 08 February 2026 03:08:57 +0000 (0:00:00.114) 0:00:03.489 *******
2026-02-08 03:09:00.387604 | orchestrator | changed: [testbed-node-3]
2026-02-08 03:09:00.387610 | orchestrator |
2026-02-08 03:09:00.387615 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2026-02-08 03:09:00.387621 | orchestrator | Sunday 08 February 2026 03:08:58 +0000 (0:00:00.645) 0:00:04.135 *******
2026-02-08 03:09:00.387627 | orchestrator | skipping: [testbed-node-3]
2026-02-08 03:09:00.387633 | orchestrator |
2026-02-08 03:09:00.387639 | orchestrator | PLAY [Reboot systems] **********************************************************
2026-02-08 03:09:00.387644 | orchestrator |
2026-02-08 03:09:00.387650 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2026-02-08 03:09:00.387655 | orchestrator | Sunday 08 February 2026 03:08:58 +0000 (0:00:00.127) 0:00:04.262 *******
2026-02-08 03:09:00.387662 | orchestrator | skipping: [testbed-node-4]
2026-02-08 03:09:00.387668 | orchestrator |
2026-02-08 03:09:00.387674 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2026-02-08 03:09:00.387686 | orchestrator | Sunday 08 February 2026 03:08:58 +0000 (0:00:00.099) 0:00:04.362 *******
2026-02-08 03:09:00.387692 | orchestrator | changed: [testbed-node-4]
2026-02-08 03:09:00.387698 | orchestrator |
2026-02-08 03:09:00.387704 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2026-02-08 03:09:00.387710 | orchestrator | Sunday 08 February 2026 03:08:59 +0000 (0:00:00.694) 0:00:05.056 *******
2026-02-08 03:09:00.387716 | orchestrator | skipping: [testbed-node-4]
2026-02-08 03:09:00.387723 | orchestrator |
2026-02-08 03:09:00.387729 | orchestrator | PLAY [Reboot systems] **********************************************************
2026-02-08 03:09:00.387735 | orchestrator |
2026-02-08 03:09:00.387741 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2026-02-08 03:09:00.387747 | orchestrator | Sunday 08 February 2026 03:08:59 +0000 (0:00:00.116) 0:00:05.173 *******
2026-02-08 03:09:00.387753 | orchestrator | skipping: [testbed-node-5]
2026-02-08 03:09:00.387759 | orchestrator |
2026-02-08 03:09:00.387764 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2026-02-08 03:09:00.387770 | orchestrator | Sunday 08 February 2026 03:08:59 +0000 (0:00:00.107) 0:00:05.280 *******
2026-02-08 03:09:00.387776 | orchestrator | changed: [testbed-node-5]
2026-02-08 03:09:00.387782 | orchestrator |
2026-02-08 03:09:00.387788 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2026-02-08 03:09:00.387794 | orchestrator | Sunday 08 February 2026 03:08:59 +0000 (0:00:00.636) 0:00:05.916 *******
2026-02-08 03:09:00.387822 | orchestrator | skipping: [testbed-node-5]
2026-02-08 03:09:00.387829 | orchestrator |
2026-02-08 03:09:00.387835 | orchestrator | PLAY RECAP *********************************************************************
2026-02-08 03:09:00.387842 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-08 03:09:00.387851 | orchestrator |
testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-08 03:09:00.387857 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-08 03:09:00.387863 | orchestrator | testbed-node-3 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-08 03:09:00.387869 | orchestrator | testbed-node-4 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-08 03:09:00.387875 | orchestrator | testbed-node-5 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-08 03:09:00.387881 | orchestrator | 2026-02-08 03:09:00.387887 | orchestrator | 2026-02-08 03:09:00.387892 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-08 03:09:00.387898 | orchestrator | Sunday 08 February 2026 03:09:00 +0000 (0:00:00.040) 0:00:05.957 ******* 2026-02-08 03:09:00.387910 | orchestrator | =============================================================================== 2026-02-08 03:09:00.387917 | orchestrator | Reboot system - do not wait for the reboot to complete ------------------ 4.32s 2026-02-08 03:09:00.387923 | orchestrator | Exit playbook, if user did not mean to reboot systems ------------------- 0.77s 2026-02-08 03:09:00.387929 | orchestrator | Reboot system - wait for the reboot to complete ------------------------- 0.65s 2026-02-08 03:09:00.771268 | orchestrator | + osism apply wait-for-connection -l testbed-nodes -e ireallymeanit=yes 2026-02-08 03:09:12.923635 | orchestrator | 2026-02-08 03:09:12 | INFO  | Task 7914d091-6def-4457-b1f1-7a39c5482a62 (wait-for-connection) was prepared for execution. 2026-02-08 03:09:12.923748 | orchestrator | 2026-02-08 03:09:12 | INFO  | It takes a moment until task 7914d091-6def-4457-b1f1-7a39c5482a62 (wait-for-connection) has been started and output is visible here. 
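The sequence above (reboot without waiting, then `osism apply wait-for-connection`) is a poll-until-reachable pattern: fire the reboot, then probe each node until it answers again. A minimal generic sketch of that polling step, with illustrative names (`wait_until_ready` is not part of OSISM; the probe command is whatever reachability check you use, e.g. `ssh node true`):

```shell
# Poll a probe command until it succeeds or the attempt budget runs out.
# Usage: wait_until_ready <max_attempts> <probe command...>
wait_until_ready() {
    local max_attempts=$1; shift
    local attempt=1
    until "$@"; do
        if [ "$attempt" -ge "$max_attempts" ]; then
            echo "not ready after ${max_attempts} attempts" >&2
            return 1
        fi
        attempt=$((attempt + 1))
        sleep 1
    done
}

# Example (hypothetical host name): retry an SSH reachability probe.
# wait_until_ready 60 ssh -o ConnectTimeout=5 testbed-node-0 true
```

Separating "trigger reboot" from "wait for it" (as the playbook's two tasks do) avoids the race where the SSH connection drops mid-task; the wait step only starts once the reboot command has already returned.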
2026-02-08 03:09:29.316938 | orchestrator | 2026-02-08 03:09:29.317040 | orchestrator | PLAY [Wait until remote systems are reachable] ********************************* 2026-02-08 03:09:29.317050 | orchestrator | 2026-02-08 03:09:29.317055 | orchestrator | TASK [Wait until remote system is reachable] *********************************** 2026-02-08 03:09:29.317061 | orchestrator | Sunday 08 February 2026 03:09:17 +0000 (0:00:00.236) 0:00:00.236 ******* 2026-02-08 03:09:29.317066 | orchestrator | ok: [testbed-node-0] 2026-02-08 03:09:29.317071 | orchestrator | ok: [testbed-node-1] 2026-02-08 03:09:29.317076 | orchestrator | ok: [testbed-node-2] 2026-02-08 03:09:29.317080 | orchestrator | ok: [testbed-node-3] 2026-02-08 03:09:29.317085 | orchestrator | ok: [testbed-node-5] 2026-02-08 03:09:29.317089 | orchestrator | ok: [testbed-node-4] 2026-02-08 03:09:29.317093 | orchestrator | 2026-02-08 03:09:29.317098 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-08 03:09:29.317104 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-08 03:09:29.317110 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-08 03:09:29.317114 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-08 03:09:29.317119 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-08 03:09:29.317123 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-08 03:09:29.317127 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-08 03:09:29.317132 | orchestrator | 2026-02-08 03:09:29.317137 | orchestrator | 2026-02-08 03:09:29.317141 | orchestrator | TASKS RECAP 
******************************************************************** 2026-02-08 03:09:29.317145 | orchestrator | Sunday 08 February 2026 03:09:28 +0000 (0:00:11.601) 0:00:11.838 ******* 2026-02-08 03:09:29.317150 | orchestrator | =============================================================================== 2026-02-08 03:09:29.317154 | orchestrator | Wait until remote system is reachable ---------------------------------- 11.60s 2026-02-08 03:09:29.662345 | orchestrator | + osism apply hddtemp 2026-02-08 03:09:41.790352 | orchestrator | 2026-02-08 03:09:41 | INFO  | Task d1c51404-dc1c-45e6-aa78-60d972603bff (hddtemp) was prepared for execution. 2026-02-08 03:09:41.790461 | orchestrator | 2026-02-08 03:09:41 | INFO  | It takes a moment until task d1c51404-dc1c-45e6-aa78-60d972603bff (hddtemp) has been started and output is visible here. 2026-02-08 03:10:08.597970 | orchestrator | 2026-02-08 03:10:08.598132 | orchestrator | PLAY [Apply role hddtemp] ****************************************************** 2026-02-08 03:10:08.598149 | orchestrator | 2026-02-08 03:10:08.598158 | orchestrator | TASK [osism.services.hddtemp : Gather variables for each operating system] ***** 2026-02-08 03:10:08.598166 | orchestrator | Sunday 08 February 2026 03:09:46 +0000 (0:00:00.254) 0:00:00.254 ******* 2026-02-08 03:10:08.598174 | orchestrator | ok: [testbed-manager] 2026-02-08 03:10:08.598182 | orchestrator | ok: [testbed-node-0] 2026-02-08 03:10:08.598190 | orchestrator | ok: [testbed-node-1] 2026-02-08 03:10:08.598197 | orchestrator | ok: [testbed-node-2] 2026-02-08 03:10:08.598243 | orchestrator | ok: [testbed-node-3] 2026-02-08 03:10:08.598253 | orchestrator | ok: [testbed-node-4] 2026-02-08 03:10:08.598260 | orchestrator | ok: [testbed-node-5] 2026-02-08 03:10:08.598267 | orchestrator | 2026-02-08 03:10:08.598275 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific install tasks] **** 2026-02-08 03:10:08.598283 | orchestrator | Sunday 08 February 2026 
03:09:46 +0000 (0:00:00.745) 0:00:01.000 ******* 2026-02-08 03:10:08.598292 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-02-08 03:10:08.598323 | orchestrator | 2026-02-08 03:10:08.598331 | orchestrator | TASK [osism.services.hddtemp : Remove hddtemp package] ************************* 2026-02-08 03:10:08.598339 | orchestrator | Sunday 08 February 2026 03:09:48 +0000 (0:00:01.251) 0:00:02.252 ******* 2026-02-08 03:10:08.598346 | orchestrator | ok: [testbed-manager] 2026-02-08 03:10:08.598354 | orchestrator | ok: [testbed-node-1] 2026-02-08 03:10:08.598361 | orchestrator | ok: [testbed-node-0] 2026-02-08 03:10:08.598368 | orchestrator | ok: [testbed-node-2] 2026-02-08 03:10:08.598376 | orchestrator | ok: [testbed-node-3] 2026-02-08 03:10:08.598383 | orchestrator | ok: [testbed-node-4] 2026-02-08 03:10:08.598390 | orchestrator | ok: [testbed-node-5] 2026-02-08 03:10:08.598398 | orchestrator | 2026-02-08 03:10:08.598405 | orchestrator | TASK [osism.services.hddtemp : Enable Kernel Module drivetemp] ***************** 2026-02-08 03:10:08.598426 | orchestrator | Sunday 08 February 2026 03:09:49 +0000 (0:00:01.816) 0:00:04.068 ******* 2026-02-08 03:10:08.598434 | orchestrator | changed: [testbed-manager] 2026-02-08 03:10:08.598442 | orchestrator | changed: [testbed-node-0] 2026-02-08 03:10:08.598449 | orchestrator | changed: [testbed-node-1] 2026-02-08 03:10:08.598456 | orchestrator | changed: [testbed-node-2] 2026-02-08 03:10:08.598463 | orchestrator | changed: [testbed-node-3] 2026-02-08 03:10:08.598471 | orchestrator | changed: [testbed-node-4] 2026-02-08 03:10:08.598478 | orchestrator | changed: [testbed-node-5] 2026-02-08 03:10:08.598485 | orchestrator | 2026-02-08 03:10:08.598492 | orchestrator | TASK [osism.services.hddtemp : Check if drivetemp module is 
available] ********* 2026-02-08 03:10:08.598499 | orchestrator | Sunday 08 February 2026 03:09:51 +0000 (0:00:01.164) 0:00:05.233 ******* 2026-02-08 03:10:08.598507 | orchestrator | ok: [testbed-node-0] 2026-02-08 03:10:08.598516 | orchestrator | ok: [testbed-node-1] 2026-02-08 03:10:08.598525 | orchestrator | ok: [testbed-node-2] 2026-02-08 03:10:08.598534 | orchestrator | ok: [testbed-node-3] 2026-02-08 03:10:08.598542 | orchestrator | ok: [testbed-node-4] 2026-02-08 03:10:08.598551 | orchestrator | ok: [testbed-node-5] 2026-02-08 03:10:08.598560 | orchestrator | ok: [testbed-manager] 2026-02-08 03:10:08.598569 | orchestrator | 2026-02-08 03:10:08.598577 | orchestrator | TASK [osism.services.hddtemp : Load Kernel Module drivetemp] ******************* 2026-02-08 03:10:08.598586 | orchestrator | Sunday 08 February 2026 03:09:52 +0000 (0:00:01.181) 0:00:06.414 ******* 2026-02-08 03:10:08.598595 | orchestrator | skipping: [testbed-node-0] 2026-02-08 03:10:08.598603 | orchestrator | skipping: [testbed-node-1] 2026-02-08 03:10:08.598612 | orchestrator | changed: [testbed-manager] 2026-02-08 03:10:08.598622 | orchestrator | skipping: [testbed-node-2] 2026-02-08 03:10:08.598630 | orchestrator | skipping: [testbed-node-3] 2026-02-08 03:10:08.598639 | orchestrator | skipping: [testbed-node-4] 2026-02-08 03:10:08.598648 | orchestrator | skipping: [testbed-node-5] 2026-02-08 03:10:08.598656 | orchestrator | 2026-02-08 03:10:08.598666 | orchestrator | TASK [osism.services.hddtemp : Install lm-sensors] ***************************** 2026-02-08 03:10:08.598678 | orchestrator | Sunday 08 February 2026 03:09:53 +0000 (0:00:00.941) 0:00:07.355 ******* 2026-02-08 03:10:08.598691 | orchestrator | changed: [testbed-manager] 2026-02-08 03:10:08.598702 | orchestrator | changed: [testbed-node-1] 2026-02-08 03:10:08.598715 | orchestrator | changed: [testbed-node-4] 2026-02-08 03:10:08.598728 | orchestrator | changed: [testbed-node-3] 2026-02-08 03:10:08.598743 | orchestrator | changed: 
[testbed-node-5] 2026-02-08 03:10:08.598761 | orchestrator | changed: [testbed-node-2] 2026-02-08 03:10:08.598772 | orchestrator | changed: [testbed-node-0] 2026-02-08 03:10:08.598784 | orchestrator | 2026-02-08 03:10:08.598796 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific service tasks] **** 2026-02-08 03:10:08.598808 | orchestrator | Sunday 08 February 2026 03:10:04 +0000 (0:00:11.687) 0:00:19.043 ******* 2026-02-08 03:10:08.598819 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/service-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-02-08 03:10:08.598843 | orchestrator | 2026-02-08 03:10:08.598856 | orchestrator | TASK [osism.services.hddtemp : Manage lm-sensors service] ********************** 2026-02-08 03:10:08.598867 | orchestrator | Sunday 08 February 2026 03:10:06 +0000 (0:00:01.247) 0:00:20.290 ******* 2026-02-08 03:10:08.598879 | orchestrator | changed: [testbed-manager] 2026-02-08 03:10:08.598889 | orchestrator | changed: [testbed-node-0] 2026-02-08 03:10:08.598901 | orchestrator | changed: [testbed-node-1] 2026-02-08 03:10:08.598913 | orchestrator | changed: [testbed-node-3] 2026-02-08 03:10:08.598925 | orchestrator | changed: [testbed-node-2] 2026-02-08 03:10:08.598938 | orchestrator | changed: [testbed-node-4] 2026-02-08 03:10:08.598949 | orchestrator | changed: [testbed-node-5] 2026-02-08 03:10:08.598962 | orchestrator | 2026-02-08 03:10:08.598974 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-08 03:10:08.598986 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-08 03:10:08.599021 | orchestrator | testbed-node-0 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-08 03:10:08.599035 | orchestrator | testbed-node-1 : 
ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-08 03:10:08.599048 | orchestrator | testbed-node-2 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-08 03:10:08.599060 | orchestrator | testbed-node-3 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-08 03:10:08.599072 | orchestrator | testbed-node-4 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-08 03:10:08.599084 | orchestrator | testbed-node-5 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-08 03:10:08.599097 | orchestrator | 2026-02-08 03:10:08.599110 | orchestrator | 2026-02-08 03:10:08.599121 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-08 03:10:08.599134 | orchestrator | Sunday 08 February 2026 03:10:08 +0000 (0:00:01.968) 0:00:22.259 ******* 2026-02-08 03:10:08.599146 | orchestrator | =============================================================================== 2026-02-08 03:10:08.599159 | orchestrator | osism.services.hddtemp : Install lm-sensors ---------------------------- 11.69s 2026-02-08 03:10:08.599171 | orchestrator | osism.services.hddtemp : Manage lm-sensors service ---------------------- 1.97s 2026-02-08 03:10:08.599184 | orchestrator | osism.services.hddtemp : Remove hddtemp package ------------------------- 1.82s 2026-02-08 03:10:08.599231 | orchestrator | osism.services.hddtemp : Include distribution specific install tasks ---- 1.25s 2026-02-08 03:10:08.599246 | orchestrator | osism.services.hddtemp : Include distribution specific service tasks ---- 1.25s 2026-02-08 03:10:08.599259 | orchestrator | osism.services.hddtemp : Check if drivetemp module is available --------- 1.18s 2026-02-08 03:10:08.599270 | orchestrator | osism.services.hddtemp : Enable Kernel Module drivetemp ----------------- 1.16s 2026-02-08 03:10:08.599281 | orchestrator | osism.services.hddtemp : Load 
Kernel Module drivetemp ------------------- 0.94s 2026-02-08 03:10:08.599293 | orchestrator | osism.services.hddtemp : Gather variables for each operating system ----- 0.75s 2026-02-08 03:10:08.949983 | orchestrator | ++ semver 9.5.0 7.1.1 2026-02-08 03:10:08.995854 | orchestrator | + [[ 1 -ge 0 ]] 2026-02-08 03:10:08.995937 | orchestrator | + sudo systemctl restart manager.service 2026-02-08 03:10:23.143172 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2026-02-08 03:10:23.143338 | orchestrator | + wait_for_container_healthy 60 ceph-ansible 2026-02-08 03:10:23.143353 | orchestrator | + local max_attempts=60 2026-02-08 03:10:23.143362 | orchestrator | + local name=ceph-ansible 2026-02-08 03:10:23.143371 | orchestrator | + local attempt_num=1 2026-02-08 03:10:23.143380 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-02-08 03:10:23.177813 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-02-08 03:10:23.177933 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-02-08 03:10:23.177956 | orchestrator | + sleep 5 2026-02-08 03:10:28.184032 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-02-08 03:10:28.218799 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-02-08 03:10:28.218900 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-02-08 03:10:28.218914 | orchestrator | + sleep 5 2026-02-08 03:10:33.222082 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-02-08 03:10:33.273772 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-02-08 03:10:33.273876 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-02-08 03:10:33.273891 | orchestrator | + sleep 5 2026-02-08 03:10:38.280368 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-02-08 03:10:38.310534 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-02-08 03:10:38.310633 | orchestrator | 
+ (( attempt_num++ == max_attempts )) 2026-02-08 03:10:38.310649 | orchestrator | + sleep 5 2026-02-08 03:10:43.315177 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-02-08 03:10:43.360465 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-02-08 03:10:43.360547 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-02-08 03:10:43.360568 | orchestrator | + sleep 5 2026-02-08 03:10:48.366444 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-02-08 03:10:48.407821 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-02-08 03:10:48.407907 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-02-08 03:10:48.407917 | orchestrator | + sleep 5 2026-02-08 03:10:53.412977 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-02-08 03:10:53.460001 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-02-08 03:10:53.460095 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-02-08 03:10:53.460109 | orchestrator | + sleep 5 2026-02-08 03:10:58.467288 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-02-08 03:10:58.510316 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-02-08 03:10:58.510421 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-02-08 03:10:58.510440 | orchestrator | + sleep 5 2026-02-08 03:11:03.518703 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-02-08 03:11:03.557457 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-02-08 03:11:03.557529 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-02-08 03:11:03.557536 | orchestrator | + sleep 5 2026-02-08 03:11:08.560777 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-02-08 03:11:08.605972 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-02-08 03:11:08.606105 | orchestrator | + (( attempt_num++ == 
max_attempts )) 2026-02-08 03:11:08.606117 | orchestrator | + sleep 5 2026-02-08 03:11:13.611330 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-02-08 03:11:13.647799 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-02-08 03:11:13.647915 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-02-08 03:11:13.647930 | orchestrator | + sleep 5 2026-02-08 03:11:18.652620 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-02-08 03:11:18.689335 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-02-08 03:11:18.689497 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-02-08 03:11:18.689525 | orchestrator | + sleep 5 2026-02-08 03:11:23.692693 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-02-08 03:11:23.725893 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-02-08 03:11:23.726080 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-02-08 03:11:23.726095 | orchestrator | + sleep 5 2026-02-08 03:11:28.730844 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-02-08 03:11:28.765823 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-02-08 03:11:28.765896 | orchestrator | + wait_for_container_healthy 60 kolla-ansible 2026-02-08 03:11:28.765903 | orchestrator | + local max_attempts=60 2026-02-08 03:11:28.765909 | orchestrator | + local name=kolla-ansible 2026-02-08 03:11:28.765914 | orchestrator | + local attempt_num=1 2026-02-08 03:11:28.766525 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible 2026-02-08 03:11:28.807824 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-02-08 03:11:28.807924 | orchestrator | + wait_for_container_healthy 60 osism-ansible 2026-02-08 03:11:28.807964 | orchestrator | + local max_attempts=60 2026-02-08 03:11:28.807975 | orchestrator | + local name=osism-ansible 2026-02-08 03:11:28.807984 | 
orchestrator | + local attempt_num=1 2026-02-08 03:11:28.807994 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible 2026-02-08 03:11:28.844753 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-02-08 03:11:28.844837 | orchestrator | + [[ true == \t\r\u\e ]] 2026-02-08 03:11:28.844848 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh 2026-02-08 03:11:29.023928 | orchestrator | ARA in ceph-ansible already disabled. 2026-02-08 03:11:29.197456 | orchestrator | ARA in kolla-ansible already disabled. 2026-02-08 03:11:29.383838 | orchestrator | ARA in osism-ansible already disabled. 2026-02-08 03:11:29.575133 | orchestrator | ARA in osism-kubernetes already disabled. 2026-02-08 03:11:29.576032 | orchestrator | + osism apply gather-facts 2026-02-08 03:11:41.919548 | orchestrator | 2026-02-08 03:11:41 | INFO  | Task 95fc2783-69b0-40e9-a597-ac7c57cd8787 (gather-facts) was prepared for execution. 2026-02-08 03:11:41.919681 | orchestrator | 2026-02-08 03:11:41 | INFO  | It takes a moment until task 95fc2783-69b0-40e9-a597-ac7c57cd8787 (gather-facts) has been started and output is visible here. 
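The trace above shows `wait_for_container_healthy` polling `docker inspect -f '{{.State.Health.Status}}'` every five seconds while the container moves through `unhealthy` → `starting` → `healthy`. A sketch reconstructing that loop from the trace; `container_health` is a stub standing in for the real `docker inspect` call so the sketch runs without Docker:

```shell
# Stub probe; the real script runs:
#   /usr/bin/docker inspect -f '{{.State.Health.Status}}' "$name"
container_health() {
    echo "healthy"
}

# Poll the container's health status until it reports "healthy",
# sleeping 5s between attempts, failing after max_attempts tries.
wait_for_container_healthy() {
    local max_attempts=$1
    local name=$2
    local attempt_num=1
    until [ "$(container_health "$name")" = "healthy" ]; do
        if [ "$attempt_num" -ge "$max_attempts" ]; then
            echo "container ${name} never became healthy" >&2
            return 1
        fi
        attempt_num=$((attempt_num + 1))
        sleep 5
    done
}
```

Note that `unhealthy` immediately after a restart is expected here: the manager service was just restarted, so the loop tolerates any non-`healthy` status (including a stale `unhealthy`) rather than matching states explicitly, and relies on the attempt budget (60 tries ≈ 5 minutes) as the failure condition.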
2026-02-08 03:11:54.974255 | orchestrator | 2026-02-08 03:11:54.974364 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2026-02-08 03:11:54.974380 | orchestrator | 2026-02-08 03:11:54.974391 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2026-02-08 03:11:54.974412 | orchestrator | Sunday 08 February 2026 03:11:46 +0000 (0:00:00.249) 0:00:00.249 ******* 2026-02-08 03:11:54.974419 | orchestrator | ok: [testbed-manager] 2026-02-08 03:11:54.974426 | orchestrator | ok: [testbed-node-1] 2026-02-08 03:11:54.974431 | orchestrator | ok: [testbed-node-0] 2026-02-08 03:11:54.974437 | orchestrator | ok: [testbed-node-2] 2026-02-08 03:11:54.974442 | orchestrator | ok: [testbed-node-3] 2026-02-08 03:11:54.974448 | orchestrator | ok: [testbed-node-4] 2026-02-08 03:11:54.974453 | orchestrator | ok: [testbed-node-5] 2026-02-08 03:11:54.974459 | orchestrator | 2026-02-08 03:11:54.974464 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2026-02-08 03:11:54.974470 | orchestrator | 2026-02-08 03:11:54.974475 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2026-02-08 03:11:54.974481 | orchestrator | Sunday 08 February 2026 03:11:53 +0000 (0:00:07.720) 0:00:07.970 ******* 2026-02-08 03:11:54.974487 | orchestrator | skipping: [testbed-manager] 2026-02-08 03:11:54.974493 | orchestrator | skipping: [testbed-node-0] 2026-02-08 03:11:54.974498 | orchestrator | skipping: [testbed-node-1] 2026-02-08 03:11:54.974504 | orchestrator | skipping: [testbed-node-2] 2026-02-08 03:11:54.974509 | orchestrator | skipping: [testbed-node-3] 2026-02-08 03:11:54.974514 | orchestrator | skipping: [testbed-node-4] 2026-02-08 03:11:54.974520 | orchestrator | skipping: [testbed-node-5] 2026-02-08 03:11:54.974525 | orchestrator | 2026-02-08 03:11:54.974530 | orchestrator | PLAY RECAP 
********************************************************************* 2026-02-08 03:11:54.974536 | orchestrator | testbed-manager : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-08 03:11:54.974543 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-08 03:11:54.974548 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-08 03:11:54.974554 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-08 03:11:54.974559 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-08 03:11:54.974565 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-08 03:11:54.974588 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-08 03:11:54.974593 | orchestrator | 2026-02-08 03:11:54.974599 | orchestrator | 2026-02-08 03:11:54.974604 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-08 03:11:54.974610 | orchestrator | Sunday 08 February 2026 03:11:54 +0000 (0:00:00.549) 0:00:08.520 ******* 2026-02-08 03:11:54.974615 | orchestrator | =============================================================================== 2026-02-08 03:11:54.974620 | orchestrator | Gathers facts about hosts ----------------------------------------------- 7.72s 2026-02-08 03:11:54.974626 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.55s 2026-02-08 03:11:55.362742 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/001-helpers.sh /usr/local/bin/deploy-helper 2026-02-08 03:11:55.380151 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-ansible.sh /usr/local/bin/deploy-ceph-with-ansible 2026-02-08 
03:11:55.397144 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-rook.sh /usr/local/bin/deploy-ceph-with-rook 2026-02-08 03:11:55.416677 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/200-infrastructure.sh /usr/local/bin/deploy-infrastructure 2026-02-08 03:11:55.430769 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/300-openstack.sh /usr/local/bin/deploy-openstack 2026-02-08 03:11:55.443956 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/320-openstack-minimal.sh /usr/local/bin/deploy-openstack-minimal 2026-02-08 03:11:55.466864 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/400-monitoring.sh /usr/local/bin/deploy-monitoring 2026-02-08 03:11:55.481756 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/500-kubernetes.sh /usr/local/bin/deploy-kubernetes 2026-02-08 03:11:55.500973 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/510-clusterapi.sh /usr/local/bin/deploy-kubernetes-clusterapi 2026-02-08 03:11:55.521072 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade-manager.sh /usr/local/bin/upgrade-manager 2026-02-08 03:11:55.540552 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-ansible.sh /usr/local/bin/upgrade-ceph-with-ansible 2026-02-08 03:11:55.553085 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-rook.sh /usr/local/bin/upgrade-ceph-with-rook 2026-02-08 03:11:55.566251 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/200-infrastructure.sh /usr/local/bin/upgrade-infrastructure 2026-02-08 03:11:55.577683 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/300-openstack.sh /usr/local/bin/upgrade-openstack 2026-02-08 03:11:55.588423 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/320-openstack-minimal.sh /usr/local/bin/upgrade-openstack-minimal 2026-02-08 03:11:55.604800 | orchestrator | + sudo ln -sf 
/opt/configuration/scripts/upgrade/400-monitoring.sh /usr/local/bin/upgrade-monitoring 2026-02-08 03:11:55.617014 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/500-kubernetes.sh /usr/local/bin/upgrade-kubernetes 2026-02-08 03:11:55.630319 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/510-clusterapi.sh /usr/local/bin/upgrade-kubernetes-clusterapi 2026-02-08 03:11:55.646740 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/300-openstack.sh /usr/local/bin/bootstrap-openstack 2026-02-08 03:11:55.659543 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/301-openstack-octavia-amhpora-image.sh /usr/local/bin/bootstrap-octavia 2026-02-08 03:11:55.673088 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/302-openstack-k8s-clusterapi-images.sh /usr/local/bin/bootstrap-clusterapi 2026-02-08 03:11:55.692637 | orchestrator | + sudo ln -sf /opt/configuration/scripts/disable-local-registry.sh /usr/local/bin/disable-local-registry 2026-02-08 03:11:55.708591 | orchestrator | + sudo ln -sf /opt/configuration/scripts/pull-images.sh /usr/local/bin/pull-images 2026-02-08 03:11:55.730361 | orchestrator | + [[ false == \t\r\u\e ]] 2026-02-08 03:11:55.919440 | orchestrator | ok: Runtime: 0:24:23.697833 2026-02-08 03:11:56.006053 | 2026-02-08 03:11:56.006152 | TASK [Deploy services] 2026-02-08 03:11:56.726299 | orchestrator | 2026-02-08 03:11:56.726453 | orchestrator | # DEPLOY SERVICES 2026-02-08 03:11:56.726468 | orchestrator | 2026-02-08 03:11:56.726477 | orchestrator | + set -e 2026-02-08 03:11:56.726486 | orchestrator | + echo 2026-02-08 03:11:56.726495 | orchestrator | + echo '# DEPLOY SERVICES' 2026-02-08 03:11:56.726505 | orchestrator | + echo 2026-02-08 03:11:56.726536 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-02-08 03:11:56.726549 | orchestrator | ++ export INTERACTIVE=false 2026-02-08 03:11:56.726560 | orchestrator | ++ INTERACTIVE=false 2026-02-08 
03:11:56.726568 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-02-08 03:11:56.726582 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-02-08 03:11:56.726589 | orchestrator | + source /opt/manager-vars.sh 2026-02-08 03:11:56.726599 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-02-08 03:11:56.726606 | orchestrator | ++ NUMBER_OF_NODES=6 2026-02-08 03:11:56.726617 | orchestrator | ++ export CEPH_VERSION=reef 2026-02-08 03:11:56.726624 | orchestrator | ++ CEPH_VERSION=reef 2026-02-08 03:11:56.726634 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-02-08 03:11:56.726641 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-02-08 03:11:56.726650 | orchestrator | ++ export MANAGER_VERSION=9.5.0 2026-02-08 03:11:56.726657 | orchestrator | ++ MANAGER_VERSION=9.5.0 2026-02-08 03:11:56.726665 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-02-08 03:11:56.726673 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-02-08 03:11:56.726680 | orchestrator | ++ export ARA=false 2026-02-08 03:11:56.726687 | orchestrator | ++ ARA=false 2026-02-08 03:11:56.726694 | orchestrator | ++ export DEPLOY_MODE=manager 2026-02-08 03:11:56.726701 | orchestrator | ++ DEPLOY_MODE=manager 2026-02-08 03:11:56.726708 | orchestrator | ++ export TEMPEST=false 2026-02-08 03:11:56.726715 | orchestrator | ++ TEMPEST=false 2026-02-08 03:11:56.726722 | orchestrator | ++ export IS_ZUUL=true 2026-02-08 03:11:56.726729 | orchestrator | ++ IS_ZUUL=true 2026-02-08 03:11:56.726736 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.37 2026-02-08 03:11:56.726743 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.37 2026-02-08 03:11:56.726749 | orchestrator | ++ export EXTERNAL_API=false 2026-02-08 03:11:56.726755 | orchestrator | ++ EXTERNAL_API=false 2026-02-08 03:11:56.726762 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-02-08 03:11:56.726769 | orchestrator | ++ IMAGE_USER=ubuntu 2026-02-08 03:11:56.726776 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-02-08 
03:11:56.726783 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-02-08 03:11:56.726790 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-02-08 03:11:56.726802 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-02-08 03:11:56.726810 | orchestrator | + sh -c /opt/configuration/scripts/pull-images.sh 2026-02-08 03:11:56.736742 | orchestrator | + set -e 2026-02-08 03:11:56.736777 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-02-08 03:11:56.736786 | orchestrator | ++ export INTERACTIVE=false 2026-02-08 03:11:56.736793 | orchestrator | ++ INTERACTIVE=false 2026-02-08 03:11:56.736801 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-02-08 03:11:56.736808 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-02-08 03:11:56.736816 | orchestrator | + source /opt/manager-vars.sh 2026-02-08 03:11:56.736823 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-02-08 03:11:56.736830 | orchestrator | ++ NUMBER_OF_NODES=6 2026-02-08 03:11:56.736838 | orchestrator | ++ export CEPH_VERSION=reef 2026-02-08 03:11:56.736845 | orchestrator | ++ CEPH_VERSION=reef 2026-02-08 03:11:56.736852 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-02-08 03:11:56.736860 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-02-08 03:11:56.736867 | orchestrator | ++ export MANAGER_VERSION=9.5.0 2026-02-08 03:11:56.736875 | orchestrator | ++ MANAGER_VERSION=9.5.0 2026-02-08 03:11:56.736882 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-02-08 03:11:56.736889 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-02-08 03:11:56.736897 | orchestrator | ++ export ARA=false 2026-02-08 03:11:56.736904 | orchestrator | ++ ARA=false 2026-02-08 03:11:56.736911 | orchestrator | ++ export DEPLOY_MODE=manager 2026-02-08 03:11:56.736919 | orchestrator | ++ DEPLOY_MODE=manager 2026-02-08 03:11:56.736926 | orchestrator | ++ export TEMPEST=false 2026-02-08 03:11:56.736934 | orchestrator | ++ TEMPEST=false 2026-02-08 03:11:56.736941 | orchestrator | ++ export IS_ZUUL=true 2026-02-08 
03:11:56.736948 | orchestrator | ++ IS_ZUUL=true 2026-02-08 03:11:56.736955 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.37 2026-02-08 03:11:56.736963 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.37 2026-02-08 03:11:56.736970 | orchestrator | ++ export EXTERNAL_API=false 2026-02-08 03:11:56.736977 | orchestrator | ++ EXTERNAL_API=false 2026-02-08 03:11:56.736985 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-02-08 03:11:56.736992 | orchestrator | ++ IMAGE_USER=ubuntu 2026-02-08 03:11:56.736999 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-02-08 03:11:56.737006 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-02-08 03:11:56.737197 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-02-08 03:11:56.737223 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-02-08 03:11:56.737230 | orchestrator | + echo 2026-02-08 03:11:56.737267 | orchestrator | 2026-02-08 03:11:56.737276 | orchestrator | # PULL IMAGES 2026-02-08 03:11:56.737283 | orchestrator | + echo '# PULL IMAGES' 2026-02-08 03:11:56.738251 | orchestrator | 2026-02-08 03:11:56.738369 | orchestrator | + echo 2026-02-08 03:11:56.738728 | orchestrator | ++ semver 9.5.0 7.0.0 2026-02-08 03:11:56.786005 | orchestrator | + [[ 1 -ge 0 ]] 2026-02-08 03:11:56.786106 | orchestrator | + osism apply --no-wait -r 2 -e custom pull-images 2026-02-08 03:11:58.783807 | orchestrator | 2026-02-08 03:11:58 | INFO  | Trying to run play pull-images in environment custom 2026-02-08 03:12:08.888939 | orchestrator | 2026-02-08 03:12:08 | INFO  | Task 7b9659aa-2450-4b06-8263-42fdd019f7aa (pull-images) was prepared for execution. 2026-02-08 03:12:08.889050 | orchestrator | 2026-02-08 03:12:08 | INFO  | Task 7b9659aa-2450-4b06-8263-42fdd019f7aa is running in background. No more output. Check ARA for logs. 
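The trace above gates the pull-images play on a version check: `semver 9.5.0 7.0.0` prints `1`, and the script proceeds because `[[ 1 -ge 0 ]]`. A minimal sketch of that pattern, where `semver_cmp` is a hypothetical stand-in for the testbed's `semver` helper (whose implementation is not shown in the log):

```shell
# semver_cmp A B -> prints 1 if A is newer than B, 0 if equal, -1 if older.
# Stand-in for the testbed's semver helper; not the actual implementation.
semver_cmp() {
  if [ "$1" = "$2" ]; then
    echo 0
    return
  fi
  # sort -V orders version strings; whichever sorts last is the newer one
  newest=$(printf '%s\n%s\n' "$1" "$2" | sort -V | tail -n1)
  if [ "$newest" = "$1" ]; then echo 1; else echo -1; fi
}

# Mirror the log's gate: only run the play on manager version >= 7.0.0
if [ "$(semver_cmp 9.5.0 7.0.0)" -ge 0 ]; then
  echo "manager is new enough for pull-images"
fi
```

The same gate reappears later in the log (`semver 9.5.0 8.0.3` before `osism apply frr`), so the helper is evidently reused per deploy script.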
2026-02-08 03:12:09.260489 | orchestrator | + sh -c /opt/configuration/scripts/deploy/001-helpers.sh 2026-02-08 03:12:21.450286 | orchestrator | 2026-02-08 03:12:21 | INFO  | Task ef25688a-075e-46ac-a1cc-29bb4858f854 (cgit) was prepared for execution. 2026-02-08 03:12:21.450393 | orchestrator | 2026-02-08 03:12:21 | INFO  | Task ef25688a-075e-46ac-a1cc-29bb4858f854 is running in background. No more output. Check ARA for logs. 2026-02-08 03:12:34.790048 | orchestrator | 2026-02-08 03:12:34 | INFO  | Task f7e2fc14-9acd-4895-8f72-3cd9a252829a (dotfiles) was prepared for execution. 2026-02-08 03:12:34.790149 | orchestrator | 2026-02-08 03:12:34 | INFO  | Task f7e2fc14-9acd-4895-8f72-3cd9a252829a is running in background. No more output. Check ARA for logs. 2026-02-08 03:12:47.602987 | orchestrator | 2026-02-08 03:12:47 | INFO  | Task 4d22a1ca-3b1e-411e-b9f6-6a8c702a77f9 (homer) was prepared for execution. 2026-02-08 03:12:47.603101 | orchestrator | 2026-02-08 03:12:47 | INFO  | Task 4d22a1ca-3b1e-411e-b9f6-6a8c702a77f9 is running in background. No more output. Check ARA for logs. 2026-02-08 03:13:00.297969 | orchestrator | 2026-02-08 03:13:00 | INFO  | Task 5e0f52a7-d429-4503-b7a1-90e2a6c2c55f (phpmyadmin) was prepared for execution. 2026-02-08 03:13:00.298088 | orchestrator | 2026-02-08 03:13:00 | INFO  | Task 5e0f52a7-d429-4503-b7a1-90e2a6c2c55f is running in background. No more output. Check ARA for logs. 2026-02-08 03:13:12.834266 | orchestrator | 2026-02-08 03:13:12 | INFO  | Task c891f929-5a99-4081-afa9-5d21da9f2478 (sosreport) was prepared for execution. 2026-02-08 03:13:12.834411 | orchestrator | 2026-02-08 03:13:12 | INFO  | Task c891f929-5a99-4081-afa9-5d21da9f2478 is running in background. No more output. Check ARA for logs. 
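The `001-helpers.sh` step above starts five helper-service plays (cgit, dotfiles, homer, phpmyadmin, sosreport), each prepared and then detached as a background task. A sketch of that loop, assuming the script simply iterates the play names; `apply_helpers` here only echoes the commands, whereas on the manager each play is launched via the real `osism apply` CLI:

```shell
# Illustrative loop over the helper plays seen in the log output above.
# Each play runs as its own background task (output lands in ARA, not here).
apply_helpers() {
  for play in cgit dotfiles homer phpmyadmin sosreport; do
    echo "osism apply ${play}"
  done
}
apply_helpers
```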
2026-02-08 03:13:13.183017 | orchestrator | + sh -c /opt/configuration/scripts/deploy/500-kubernetes.sh 2026-02-08 03:13:13.191768 | orchestrator | + set -e 2026-02-08 03:13:13.191827 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-02-08 03:13:13.191836 | orchestrator | ++ export INTERACTIVE=false 2026-02-08 03:13:13.191844 | orchestrator | ++ INTERACTIVE=false 2026-02-08 03:13:13.191852 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-02-08 03:13:13.191858 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-02-08 03:13:13.191864 | orchestrator | + source /opt/manager-vars.sh 2026-02-08 03:13:13.191869 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-02-08 03:13:13.191875 | orchestrator | ++ NUMBER_OF_NODES=6 2026-02-08 03:13:13.191881 | orchestrator | ++ export CEPH_VERSION=reef 2026-02-08 03:13:13.191887 | orchestrator | ++ CEPH_VERSION=reef 2026-02-08 03:13:13.191893 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-02-08 03:13:13.191899 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-02-08 03:13:13.191904 | orchestrator | ++ export MANAGER_VERSION=9.5.0 2026-02-08 03:13:13.191910 | orchestrator | ++ MANAGER_VERSION=9.5.0 2026-02-08 03:13:13.191916 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-02-08 03:13:13.191922 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-02-08 03:13:13.191928 | orchestrator | ++ export ARA=false 2026-02-08 03:13:13.191933 | orchestrator | ++ ARA=false 2026-02-08 03:13:13.191939 | orchestrator | ++ export DEPLOY_MODE=manager 2026-02-08 03:13:13.191968 | orchestrator | ++ DEPLOY_MODE=manager 2026-02-08 03:13:13.191974 | orchestrator | ++ export TEMPEST=false 2026-02-08 03:13:13.191980 | orchestrator | ++ TEMPEST=false 2026-02-08 03:13:13.191986 | orchestrator | ++ export IS_ZUUL=true 2026-02-08 03:13:13.191991 | orchestrator | ++ IS_ZUUL=true 2026-02-08 03:13:13.192009 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.37 2026-02-08 03:13:13.192019 | orchestrator | ++ 
MANAGER_PUBLIC_IP_ADDRESS=81.163.193.37 2026-02-08 03:13:13.192026 | orchestrator | ++ export EXTERNAL_API=false 2026-02-08 03:13:13.192031 | orchestrator | ++ EXTERNAL_API=false 2026-02-08 03:13:13.192037 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-02-08 03:13:13.192043 | orchestrator | ++ IMAGE_USER=ubuntu 2026-02-08 03:13:13.192049 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-02-08 03:13:13.192055 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-02-08 03:13:13.192061 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-02-08 03:13:13.192067 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-02-08 03:13:13.193315 | orchestrator | ++ semver 9.5.0 8.0.3 2026-02-08 03:13:13.250638 | orchestrator | + [[ 1 -ge 0 ]] 2026-02-08 03:13:13.250749 | orchestrator | + osism apply frr 2026-02-08 03:13:25.663484 | orchestrator | 2026-02-08 03:13:25 | INFO  | Task 7d8c6583-1a0a-47a8-93f3-1734e6854f2b (frr) was prepared for execution. 2026-02-08 03:13:25.663588 | orchestrator | 2026-02-08 03:13:25 | INFO  | It takes a moment until task 7d8c6583-1a0a-47a8-93f3-1734e6854f2b (frr) has been started and output is visible here. 
2026-02-08 03:14:05.585766 | orchestrator | 2026-02-08 03:14:05.585876 | orchestrator | PLAY [Apply role frr] ********************************************************** 2026-02-08 03:14:05.585894 | orchestrator | 2026-02-08 03:14:05.585907 | orchestrator | TASK [osism.services.frr : Include distribution specific install tasks] ******** 2026-02-08 03:14:05.585927 | orchestrator | Sunday 08 February 2026 03:13:34 +0000 (0:00:00.320) 0:00:00.320 ******* 2026-02-08 03:14:05.585939 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/frr/tasks/install-Debian-family.yml for testbed-manager 2026-02-08 03:14:05.585951 | orchestrator | 2026-02-08 03:14:05.585963 | orchestrator | TASK [osism.services.frr : Pin frr package version] **************************** 2026-02-08 03:14:05.585974 | orchestrator | Sunday 08 February 2026 03:13:35 +0000 (0:00:00.279) 0:00:00.599 ******* 2026-02-08 03:14:05.585985 | orchestrator | changed: [testbed-manager] 2026-02-08 03:14:05.585998 | orchestrator | 2026-02-08 03:14:05.586009 | orchestrator | TASK [osism.services.frr : Install frr package] ******************************** 2026-02-08 03:14:05.586128 | orchestrator | Sunday 08 February 2026 03:13:37 +0000 (0:00:02.289) 0:00:02.889 ******* 2026-02-08 03:14:05.586152 | orchestrator | changed: [testbed-manager] 2026-02-08 03:14:05.586173 | orchestrator | 2026-02-08 03:14:05.586294 | orchestrator | TASK [osism.services.frr : Copy file: /etc/frr/vtysh.conf] ********************* 2026-02-08 03:14:05.586316 | orchestrator | Sunday 08 February 2026 03:13:54 +0000 (0:00:16.801) 0:00:19.690 ******* 2026-02-08 03:14:05.586334 | orchestrator | ok: [testbed-manager] 2026-02-08 03:14:05.586352 | orchestrator | 2026-02-08 03:14:05.586371 | orchestrator | TASK [osism.services.frr : Copy file: /etc/frr/daemons] ************************ 2026-02-08 03:14:05.586389 | orchestrator | Sunday 08 February 2026 03:13:55 +0000 (0:00:01.476) 0:00:21.167 ******* 2026-02-08 
03:14:05.586409 | orchestrator | changed: [testbed-manager] 2026-02-08 03:14:05.586429 | orchestrator | 2026-02-08 03:14:05.586450 | orchestrator | TASK [osism.services.frr : Set _frr_uplinks fact] ****************************** 2026-02-08 03:14:05.586468 | orchestrator | Sunday 08 February 2026 03:13:56 +0000 (0:00:00.956) 0:00:22.124 ******* 2026-02-08 03:14:05.586483 | orchestrator | ok: [testbed-manager] 2026-02-08 03:14:05.586495 | orchestrator | 2026-02-08 03:14:05.586506 | orchestrator | TASK [osism.services.frr : Check for frr.conf file in the configuration repository] *** 2026-02-08 03:14:05.586518 | orchestrator | Sunday 08 February 2026 03:13:58 +0000 (0:00:01.337) 0:00:23.462 ******* 2026-02-08 03:14:05.586529 | orchestrator | skipping: [testbed-manager] 2026-02-08 03:14:05.586540 | orchestrator | 2026-02-08 03:14:05.586551 | orchestrator | TASK [osism.services.frr : Copy frr.conf file from the configuration repository] *** 2026-02-08 03:14:05.586562 | orchestrator | Sunday 08 February 2026 03:13:58 +0000 (0:00:00.167) 0:00:23.630 ******* 2026-02-08 03:14:05.586597 | orchestrator | skipping: [testbed-manager] 2026-02-08 03:14:05.586610 | orchestrator | 2026-02-08 03:14:05.586620 | orchestrator | TASK [osism.services.frr : Copy default frr.conf file of type k3s_cilium] ****** 2026-02-08 03:14:05.586631 | orchestrator | Sunday 08 February 2026 03:13:58 +0000 (0:00:00.179) 0:00:23.809 ******* 2026-02-08 03:14:05.586642 | orchestrator | changed: [testbed-manager] 2026-02-08 03:14:05.586653 | orchestrator | 2026-02-08 03:14:05.586664 | orchestrator | TASK [osism.services.frr : Set sysctl parameters] ****************************** 2026-02-08 03:14:05.586675 | orchestrator | Sunday 08 February 2026 03:13:59 +0000 (0:00:01.043) 0:00:24.853 ******* 2026-02-08 03:14:05.586686 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.ip_forward', 'value': 1}) 2026-02-08 03:14:05.586697 | orchestrator | changed: [testbed-manager] => (item={'name': 
'net.ipv4.conf.all.send_redirects', 'value': 0}) 2026-02-08 03:14:05.586709 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.accept_redirects', 'value': 0}) 2026-02-08 03:14:05.586720 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.fib_multipath_hash_policy', 'value': 1}) 2026-02-08 03:14:05.586731 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.default.ignore_routes_with_linkdown', 'value': 1}) 2026-02-08 03:14:05.586742 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.rp_filter', 'value': 2}) 2026-02-08 03:14:05.586753 | orchestrator | 2026-02-08 03:14:05.586764 | orchestrator | TASK [osism.services.frr : Manage frr service] ********************************* 2026-02-08 03:14:05.586775 | orchestrator | Sunday 08 February 2026 03:14:01 +0000 (0:00:02.501) 0:00:27.354 ******* 2026-02-08 03:14:05.586785 | orchestrator | ok: [testbed-manager] 2026-02-08 03:14:05.586796 | orchestrator | 2026-02-08 03:14:05.586807 | orchestrator | RUNNING HANDLER [osism.services.frr : Restart frr service] ********************* 2026-02-08 03:14:05.586818 | orchestrator | Sunday 08 February 2026 03:14:03 +0000 (0:00:01.770) 0:00:29.125 ******* 2026-02-08 03:14:05.586828 | orchestrator | changed: [testbed-manager] 2026-02-08 03:14:05.586839 | orchestrator | 2026-02-08 03:14:05.586850 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-08 03:14:05.586861 | orchestrator | testbed-manager : ok=10  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-08 03:14:05.586872 | orchestrator | 2026-02-08 03:14:05.586882 | orchestrator | 2026-02-08 03:14:05.586902 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-08 03:14:05.586913 | orchestrator | Sunday 08 February 2026 03:14:05 +0000 (0:00:01.478) 0:00:30.604 ******* 2026-02-08 03:14:05.586923 | 
orchestrator | =============================================================================== 2026-02-08 03:14:05.586934 | orchestrator | osism.services.frr : Install frr package ------------------------------- 16.80s 2026-02-08 03:14:05.586945 | orchestrator | osism.services.frr : Set sysctl parameters ------------------------------ 2.50s 2026-02-08 03:14:05.586956 | orchestrator | osism.services.frr : Pin frr package version ---------------------------- 2.29s 2026-02-08 03:14:05.586966 | orchestrator | osism.services.frr : Manage frr service --------------------------------- 1.77s 2026-02-08 03:14:05.586977 | orchestrator | osism.services.frr : Restart frr service -------------------------------- 1.48s 2026-02-08 03:14:05.587007 | orchestrator | osism.services.frr : Copy file: /etc/frr/vtysh.conf --------------------- 1.48s 2026-02-08 03:14:05.587018 | orchestrator | osism.services.frr : Set _frr_uplinks fact ------------------------------ 1.34s 2026-02-08 03:14:05.587029 | orchestrator | osism.services.frr : Copy default frr.conf file of type k3s_cilium ------ 1.04s 2026-02-08 03:14:05.587040 | orchestrator | osism.services.frr : Copy file: /etc/frr/daemons ------------------------ 0.96s 2026-02-08 03:14:05.587050 | orchestrator | osism.services.frr : Include distribution specific install tasks -------- 0.28s 2026-02-08 03:14:05.587061 | orchestrator | osism.services.frr : Copy frr.conf file from the configuration repository --- 0.18s 2026-02-08 03:14:05.587072 | orchestrator | osism.services.frr : Check for frr.conf file in the configuration repository --- 0.17s 2026-02-08 03:14:06.004070 | orchestrator | + osism apply kubernetes 2026-02-08 03:14:08.397258 | orchestrator | 2026-02-08 03:14:08 | INFO  | Task fc5ede04-5aac-41cb-8347-f303379cc0cd (kubernetes) was prepared for execution. 
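The "Set sysctl parameters" task in the frr play above applied six kernel settings on testbed-manager. Written out as a sysctl.d fragment, with keys and values taken verbatim from the task output (the file name is illustrative, and this sketch only generates the file locally rather than installing it):

```shell
# Kernel settings applied by osism.services.frr, as a sysctl.d fragment.
cat > ./90-frr.conf <<'EOF'
net.ipv4.ip_forward = 1
net.ipv4.conf.all.send_redirects = 0
net.ipv4.conf.all.accept_redirects = 0
net.ipv4.fib_multipath_hash_policy = 1
net.ipv4.conf.default.ignore_routes_with_linkdown = 1
net.ipv4.conf.all.rp_filter = 2
EOF
# On a real host this would live in /etc/sysctl.d/ and be loaded with
# "sysctl --system"; the role applies the same values via the sysctl module.
```

The forwarding and multipath-hash settings line up with the BGP/ECMP routing role of frr here (the default frr.conf deployed is of type k3s_cilium).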
2026-02-08 03:14:08.398036 | orchestrator | 2026-02-08 03:14:08 | INFO  | It takes a moment until task fc5ede04-5aac-41cb-8347-f303379cc0cd (kubernetes) has been started and output is visible here. 2026-02-08 03:14:34.164512 | orchestrator | 2026-02-08 03:14:34.164613 | orchestrator | PLAY [Prepare all k3s nodes] *************************************************** 2026-02-08 03:14:34.164627 | orchestrator | 2026-02-08 03:14:34.164635 | orchestrator | TASK [k3s_prereq : Validating arguments against arg spec 'main' - Prerequisites] *** 2026-02-08 03:14:34.164643 | orchestrator | Sunday 08 February 2026 03:14:13 +0000 (0:00:00.228) 0:00:00.228 ******* 2026-02-08 03:14:34.164650 | orchestrator | ok: [testbed-node-3] 2026-02-08 03:14:34.164658 | orchestrator | ok: [testbed-node-4] 2026-02-08 03:14:34.164665 | orchestrator | ok: [testbed-node-5] 2026-02-08 03:14:34.164672 | orchestrator | ok: [testbed-node-0] 2026-02-08 03:14:34.164679 | orchestrator | ok: [testbed-node-1] 2026-02-08 03:14:34.164685 | orchestrator | ok: [testbed-node-2] 2026-02-08 03:14:34.164692 | orchestrator | 2026-02-08 03:14:34.164699 | orchestrator | TASK [k3s_prereq : Set same timezone on every Server] ************************** 2026-02-08 03:14:34.164706 | orchestrator | Sunday 08 February 2026 03:14:14 +0000 (0:00:01.052) 0:00:01.280 ******* 2026-02-08 03:14:34.164716 | orchestrator | skipping: [testbed-node-3] 2026-02-08 03:14:34.164727 | orchestrator | skipping: [testbed-node-4] 2026-02-08 03:14:34.164738 | orchestrator | skipping: [testbed-node-5] 2026-02-08 03:14:34.164749 | orchestrator | skipping: [testbed-node-0] 2026-02-08 03:14:34.164760 | orchestrator | skipping: [testbed-node-1] 2026-02-08 03:14:34.164771 | orchestrator | skipping: [testbed-node-2] 2026-02-08 03:14:34.164781 | orchestrator | 2026-02-08 03:14:34.164789 | orchestrator | TASK [k3s_prereq : Set SELinux to disabled state] ****************************** 2026-02-08 03:14:34.164797 | orchestrator | Sunday 08 February 2026 
03:14:15 +0000 (0:00:00.619) 0:00:01.900 ******* 2026-02-08 03:14:34.164804 | orchestrator | skipping: [testbed-node-3] 2026-02-08 03:14:34.164811 | orchestrator | skipping: [testbed-node-4] 2026-02-08 03:14:34.164818 | orchestrator | skipping: [testbed-node-5] 2026-02-08 03:14:34.164824 | orchestrator | skipping: [testbed-node-0] 2026-02-08 03:14:34.164831 | orchestrator | skipping: [testbed-node-1] 2026-02-08 03:14:34.164838 | orchestrator | skipping: [testbed-node-2] 2026-02-08 03:14:34.164845 | orchestrator | 2026-02-08 03:14:34.164851 | orchestrator | TASK [k3s_prereq : Enable IPv4 forwarding] ************************************* 2026-02-08 03:14:34.164858 | orchestrator | Sunday 08 February 2026 03:14:16 +0000 (0:00:00.795) 0:00:02.695 ******* 2026-02-08 03:14:34.164865 | orchestrator | changed: [testbed-node-3] 2026-02-08 03:14:34.164872 | orchestrator | changed: [testbed-node-4] 2026-02-08 03:14:34.164878 | orchestrator | changed: [testbed-node-0] 2026-02-08 03:14:34.164889 | orchestrator | changed: [testbed-node-1] 2026-02-08 03:14:34.164896 | orchestrator | changed: [testbed-node-2] 2026-02-08 03:14:34.164903 | orchestrator | changed: [testbed-node-5] 2026-02-08 03:14:34.164910 | orchestrator | 2026-02-08 03:14:34.164916 | orchestrator | TASK [k3s_prereq : Enable IPv6 forwarding] ************************************* 2026-02-08 03:14:34.164924 | orchestrator | Sunday 08 February 2026 03:14:18 +0000 (0:00:02.096) 0:00:04.792 ******* 2026-02-08 03:14:34.164930 | orchestrator | changed: [testbed-node-3] 2026-02-08 03:14:34.164937 | orchestrator | changed: [testbed-node-4] 2026-02-08 03:14:34.164944 | orchestrator | changed: [testbed-node-5] 2026-02-08 03:14:34.164950 | orchestrator | changed: [testbed-node-0] 2026-02-08 03:14:34.164957 | orchestrator | changed: [testbed-node-1] 2026-02-08 03:14:34.164964 | orchestrator | changed: [testbed-node-2] 2026-02-08 03:14:34.164971 | orchestrator | 2026-02-08 03:14:34.164978 | orchestrator | TASK [k3s_prereq : 
Enable IPv6 router advertisements] ************************** 2026-02-08 03:14:34.164990 | orchestrator | Sunday 08 February 2026 03:14:19 +0000 (0:00:01.079) 0:00:05.871 ******* 2026-02-08 03:14:34.164998 | orchestrator | changed: [testbed-node-3] 2026-02-08 03:14:34.165029 | orchestrator | changed: [testbed-node-4] 2026-02-08 03:14:34.165042 | orchestrator | changed: [testbed-node-5] 2026-02-08 03:14:34.165054 | orchestrator | changed: [testbed-node-0] 2026-02-08 03:14:34.165062 | orchestrator | changed: [testbed-node-1] 2026-02-08 03:14:34.165070 | orchestrator | changed: [testbed-node-2] 2026-02-08 03:14:34.165078 | orchestrator | 2026-02-08 03:14:34.165094 | orchestrator | TASK [k3s_prereq : Add br_netfilter to /etc/modules-load.d/] ******************* 2026-02-08 03:14:34.165102 | orchestrator | Sunday 08 February 2026 03:14:20 +0000 (0:00:00.918) 0:00:06.790 ******* 2026-02-08 03:14:34.165110 | orchestrator | skipping: [testbed-node-3] 2026-02-08 03:14:34.165118 | orchestrator | skipping: [testbed-node-4] 2026-02-08 03:14:34.165125 | orchestrator | skipping: [testbed-node-5] 2026-02-08 03:14:34.165133 | orchestrator | skipping: [testbed-node-0] 2026-02-08 03:14:34.165141 | orchestrator | skipping: [testbed-node-1] 2026-02-08 03:14:34.165148 | orchestrator | skipping: [testbed-node-2] 2026-02-08 03:14:34.165157 | orchestrator | 2026-02-08 03:14:34.165165 | orchestrator | TASK [k3s_prereq : Load br_netfilter] ****************************************** 2026-02-08 03:14:34.165172 | orchestrator | Sunday 08 February 2026 03:14:20 +0000 (0:00:00.604) 0:00:07.394 ******* 2026-02-08 03:14:34.165180 | orchestrator | skipping: [testbed-node-3] 2026-02-08 03:14:34.165240 | orchestrator | skipping: [testbed-node-4] 2026-02-08 03:14:34.165249 | orchestrator | skipping: [testbed-node-5] 2026-02-08 03:14:34.165257 | orchestrator | skipping: [testbed-node-0] 2026-02-08 03:14:34.165265 | orchestrator | skipping: [testbed-node-1] 2026-02-08 03:14:34.165272 | orchestrator | 
skipping: [testbed-node-2] 2026-02-08 03:14:34.165280 | orchestrator | 2026-02-08 03:14:34.165288 | orchestrator | TASK [k3s_prereq : Set bridge-nf-call-iptables (just to be sure)] ************** 2026-02-08 03:14:34.165296 | orchestrator | Sunday 08 February 2026 03:14:21 +0000 (0:00:00.785) 0:00:08.180 ******* 2026-02-08 03:14:34.165304 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables)  2026-02-08 03:14:34.165312 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-02-08 03:14:34.165319 | orchestrator | skipping: [testbed-node-3] 2026-02-08 03:14:34.165328 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables)  2026-02-08 03:14:34.165336 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-02-08 03:14:34.165344 | orchestrator | skipping: [testbed-node-4] 2026-02-08 03:14:34.165352 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables)  2026-02-08 03:14:34.165360 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-02-08 03:14:34.165368 | orchestrator | skipping: [testbed-node-5] 2026-02-08 03:14:34.165376 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)  2026-02-08 03:14:34.165400 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-02-08 03:14:34.165407 | orchestrator | skipping: [testbed-node-0] 2026-02-08 03:14:34.165414 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)  2026-02-08 03:14:34.165421 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-02-08 03:14:34.165427 | orchestrator | skipping: [testbed-node-1] 2026-02-08 03:14:34.165434 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)  2026-02-08 03:14:34.165441 | 
orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-02-08 03:14:34.165448 | orchestrator | skipping: [testbed-node-2] 2026-02-08 03:14:34.165455 | orchestrator | 2026-02-08 03:14:34.165461 | orchestrator | TASK [k3s_prereq : Add /usr/local/bin to sudo secure_path] ********************* 2026-02-08 03:14:34.165468 | orchestrator | Sunday 08 February 2026 03:14:22 +0000 (0:00:00.612) 0:00:08.793 ******* 2026-02-08 03:14:34.165475 | orchestrator | skipping: [testbed-node-3] 2026-02-08 03:14:34.165482 | orchestrator | skipping: [testbed-node-4] 2026-02-08 03:14:34.165488 | orchestrator | skipping: [testbed-node-5] 2026-02-08 03:14:34.165502 | orchestrator | skipping: [testbed-node-0] 2026-02-08 03:14:34.165509 | orchestrator | skipping: [testbed-node-1] 2026-02-08 03:14:34.165516 | orchestrator | skipping: [testbed-node-2] 2026-02-08 03:14:34.165523 | orchestrator | 2026-02-08 03:14:34.165529 | orchestrator | TASK [k3s_download : Validating arguments against arg spec 'main' - Manage the downloading of K3S binaries] *** 2026-02-08 03:14:34.165537 | orchestrator | Sunday 08 February 2026 03:14:23 +0000 (0:00:01.625) 0:00:10.418 ******* 2026-02-08 03:14:34.165544 | orchestrator | ok: [testbed-node-3] 2026-02-08 03:14:34.165551 | orchestrator | ok: [testbed-node-4] 2026-02-08 03:14:34.165557 | orchestrator | ok: [testbed-node-5] 2026-02-08 03:14:34.165564 | orchestrator | ok: [testbed-node-0] 2026-02-08 03:14:34.165570 | orchestrator | ok: [testbed-node-1] 2026-02-08 03:14:34.165577 | orchestrator | ok: [testbed-node-2] 2026-02-08 03:14:34.165584 | orchestrator | 2026-02-08 03:14:34.165590 | orchestrator | TASK [k3s_download : Download k3s binary x64] ********************************** 2026-02-08 03:14:34.165597 | orchestrator | Sunday 08 February 2026 03:14:24 +0000 (0:00:00.847) 0:00:11.266 ******* 2026-02-08 03:14:34.165604 | orchestrator | changed: [testbed-node-5] 2026-02-08 03:14:34.165611 | orchestrator | changed: 
[testbed-node-4] 2026-02-08 03:14:34.165617 | orchestrator | changed: [testbed-node-2] 2026-02-08 03:14:34.165624 | orchestrator | changed: [testbed-node-3] 2026-02-08 03:14:34.165630 | orchestrator | changed: [testbed-node-0] 2026-02-08 03:14:34.165637 | orchestrator | changed: [testbed-node-1] 2026-02-08 03:14:34.165643 | orchestrator | 2026-02-08 03:14:34.165650 | orchestrator | TASK [k3s_download : Download k3s binary arm64] ******************************** 2026-02-08 03:14:34.165657 | orchestrator | Sunday 08 February 2026 03:14:29 +0000 (0:00:05.335) 0:00:16.601 ******* 2026-02-08 03:14:34.165664 | orchestrator | skipping: [testbed-node-3] 2026-02-08 03:14:34.165675 | orchestrator | skipping: [testbed-node-4] 2026-02-08 03:14:34.165682 | orchestrator | skipping: [testbed-node-5] 2026-02-08 03:14:34.165688 | orchestrator | skipping: [testbed-node-0] 2026-02-08 03:14:34.165695 | orchestrator | skipping: [testbed-node-1] 2026-02-08 03:14:34.165702 | orchestrator | skipping: [testbed-node-2] 2026-02-08 03:14:34.165708 | orchestrator | 2026-02-08 03:14:34.165715 | orchestrator | TASK [k3s_download : Download k3s binary armhf] ******************************** 2026-02-08 03:14:34.165721 | orchestrator | Sunday 08 February 2026 03:14:30 +0000 (0:00:01.000) 0:00:17.602 ******* 2026-02-08 03:14:34.165728 | orchestrator | skipping: [testbed-node-3] 2026-02-08 03:14:34.165735 | orchestrator | skipping: [testbed-node-4] 2026-02-08 03:14:34.165741 | orchestrator | skipping: [testbed-node-5] 2026-02-08 03:14:34.165748 | orchestrator | skipping: [testbed-node-0] 2026-02-08 03:14:34.165754 | orchestrator | skipping: [testbed-node-2] 2026-02-08 03:14:34.165761 | orchestrator | skipping: [testbed-node-1] 2026-02-08 03:14:34.165767 | orchestrator | 2026-02-08 03:14:34.165774 | orchestrator | TASK [k3s_custom_registries : Validating arguments against arg spec 'main' - Configure the use of a custom container registry] *** 2026-02-08 03:14:34.165783 | orchestrator | Sunday 08 
February 2026 03:14:32 +0000 (0:00:01.692) 0:00:19.295 *******
2026-02-08 03:14:34.165789 | orchestrator | skipping: [testbed-node-3]
2026-02-08 03:14:34.165796 | orchestrator | skipping: [testbed-node-4]
2026-02-08 03:14:34.165803 | orchestrator | skipping: [testbed-node-5]
2026-02-08 03:14:34.165809 | orchestrator | skipping: [testbed-node-0]
2026-02-08 03:14:34.165816 | orchestrator | skipping: [testbed-node-1]
2026-02-08 03:14:34.165822 | orchestrator | skipping: [testbed-node-2]
2026-02-08 03:14:34.165829 | orchestrator |
2026-02-08 03:14:34.165835 | orchestrator | TASK [k3s_custom_registries : Create directory /etc/rancher/k3s] ***************
2026-02-08 03:14:34.165842 | orchestrator | Sunday 08 February 2026 03:14:33 +0000 (0:00:00.685) 0:00:19.980 *******
2026-02-08 03:14:34.165849 | orchestrator | skipping: [testbed-node-3] => (item=rancher)
2026-02-08 03:14:34.165861 | orchestrator | skipping: [testbed-node-3] => (item=rancher/k3s)
2026-02-08 03:14:34.165868 | orchestrator | skipping: [testbed-node-3]
2026-02-08 03:14:34.165875 | orchestrator | skipping: [testbed-node-4] => (item=rancher)
2026-02-08 03:14:34.165887 | orchestrator | skipping: [testbed-node-4] => (item=rancher/k3s)
2026-02-08 03:14:34.165893 | orchestrator | skipping: [testbed-node-4]
2026-02-08 03:14:34.165900 | orchestrator | skipping: [testbed-node-5] => (item=rancher)
2026-02-08 03:14:34.165906 | orchestrator | skipping: [testbed-node-5] => (item=rancher/k3s)
2026-02-08 03:14:34.165913 | orchestrator | skipping: [testbed-node-5]
2026-02-08 03:14:34.165920 | orchestrator | skipping: [testbed-node-0] => (item=rancher)
2026-02-08 03:14:34.165926 | orchestrator | skipping: [testbed-node-0] => (item=rancher/k3s)
2026-02-08 03:14:34.165933 | orchestrator | skipping: [testbed-node-0]
2026-02-08 03:14:34.165939 | orchestrator | skipping: [testbed-node-1] => (item=rancher)
2026-02-08 03:14:34.165946 | orchestrator | skipping: [testbed-node-1] => (item=rancher/k3s)
2026-02-08 03:14:34.165952 | orchestrator | skipping: [testbed-node-1]
2026-02-08 03:14:34.165959 | orchestrator | skipping: [testbed-node-2] => (item=rancher)
2026-02-08 03:14:34.165966 | orchestrator | skipping: [testbed-node-2] => (item=rancher/k3s)
2026-02-08 03:14:34.165972 | orchestrator | skipping: [testbed-node-2]
2026-02-08 03:14:34.165979 | orchestrator |
2026-02-08 03:14:34.165986 | orchestrator | TASK [k3s_custom_registries : Insert registries into /etc/rancher/k3s/registries.yaml] ***
2026-02-08 03:14:34.165997 | orchestrator | Sunday 08 February 2026 03:14:34 +0000 (0:00:00.850) 0:00:20.831 *******
2026-02-08 03:15:38.711949 | orchestrator | skipping: [testbed-node-3]
2026-02-08 03:15:38.712058 | orchestrator | skipping: [testbed-node-4]
2026-02-08 03:15:38.712073 | orchestrator | skipping: [testbed-node-5]
2026-02-08 03:15:38.712083 | orchestrator | skipping: [testbed-node-0]
2026-02-08 03:15:38.712093 | orchestrator | skipping: [testbed-node-1]
2026-02-08 03:15:38.712103 | orchestrator | skipping: [testbed-node-2]
2026-02-08 03:15:38.712257 | orchestrator |
2026-02-08 03:15:38.712274 | orchestrator | TASK [k3s_custom_registries : Remove /etc/rancher/k3s/registries.yaml when no registries configured] ***
2026-02-08 03:15:38.712286 | orchestrator | Sunday 08 February 2026 03:14:34 +0000 (0:00:00.548) 0:00:21.379 *******
2026-02-08 03:15:38.712296 | orchestrator | skipping: [testbed-node-3]
2026-02-08 03:15:38.712306 | orchestrator | skipping: [testbed-node-4]
2026-02-08 03:15:38.712317 | orchestrator | skipping: [testbed-node-5]
2026-02-08 03:15:38.712326 | orchestrator | skipping: [testbed-node-0]
2026-02-08 03:15:38.712336 | orchestrator | skipping: [testbed-node-1]
2026-02-08 03:15:38.712346 | orchestrator | skipping: [testbed-node-2]
2026-02-08 03:15:38.712356 | orchestrator |
2026-02-08 03:15:38.712366 | orchestrator | PLAY [Deploy k3s master nodes] *************************************************
2026-02-08 03:15:38.712376 | orchestrator |
2026-02-08 03:15:38.712386 | orchestrator | TASK [k3s_server : Validating arguments against arg spec 'main' - Setup k3s servers] ***
2026-02-08 03:15:38.712396 | orchestrator | Sunday 08 February 2026 03:14:35 +0000 (0:00:01.238) 0:00:22.618 *******
2026-02-08 03:15:38.712406 | orchestrator | ok: [testbed-node-0]
2026-02-08 03:15:38.712417 | orchestrator | ok: [testbed-node-1]
2026-02-08 03:15:38.712426 | orchestrator | ok: [testbed-node-2]
2026-02-08 03:15:38.712436 | orchestrator |
2026-02-08 03:15:38.712446 | orchestrator | TASK [k3s_server : Stop k3s-init] **********************************************
2026-02-08 03:15:38.712456 | orchestrator | Sunday 08 February 2026 03:14:37 +0000 (0:00:01.167) 0:00:23.785 *******
2026-02-08 03:15:38.712465 | orchestrator | ok: [testbed-node-0]
2026-02-08 03:15:38.712477 | orchestrator | ok: [testbed-node-1]
2026-02-08 03:15:38.712488 | orchestrator | ok: [testbed-node-2]
2026-02-08 03:15:38.712499 | orchestrator |
2026-02-08 03:15:38.712511 | orchestrator | TASK [k3s_server : Stop k3s] ***************************************************
2026-02-08 03:15:38.712522 | orchestrator | Sunday 08 February 2026 03:14:38 +0000 (0:00:01.481) 0:00:25.266 *******
2026-02-08 03:15:38.712533 | orchestrator | ok: [testbed-node-0]
2026-02-08 03:15:38.712544 | orchestrator | ok: [testbed-node-1]
2026-02-08 03:15:38.712556 | orchestrator | ok: [testbed-node-2]
2026-02-08 03:15:38.712596 | orchestrator |
2026-02-08 03:15:38.712610 | orchestrator | TASK [k3s_server : Clean previous runs of k3s-init] ****************************
2026-02-08 03:15:38.712621 | orchestrator | Sunday 08 February 2026 03:14:39 +0000 (0:00:00.865) 0:00:26.132 *******
2026-02-08 03:15:38.712655 | orchestrator | ok: [testbed-node-1]
2026-02-08 03:15:38.712667 | orchestrator | ok: [testbed-node-2]
2026-02-08 03:15:38.712677 | orchestrator | ok: [testbed-node-0]
2026-02-08 03:15:38.712689 | orchestrator |
2026-02-08 03:15:38.712700 | orchestrator | TASK [k3s_server : Deploy K3s http_proxy conf] *********************************
2026-02-08 03:15:38.712711 | orchestrator | Sunday 08 February 2026 03:14:40 +0000 (0:00:00.664) 0:00:26.796 *******
2026-02-08 03:15:38.712723 | orchestrator | skipping: [testbed-node-0]
2026-02-08 03:15:38.712734 | orchestrator | skipping: [testbed-node-1]
2026-02-08 03:15:38.712745 | orchestrator | skipping: [testbed-node-2]
2026-02-08 03:15:38.712756 | orchestrator |
2026-02-08 03:15:38.712767 | orchestrator | TASK [k3s_server : Create /etc/rancher/k3s directory] **************************
2026-02-08 03:15:38.712797 | orchestrator | Sunday 08 February 2026 03:14:40 +0000 (0:00:00.321) 0:00:27.117 *******
2026-02-08 03:15:38.712808 | orchestrator | changed: [testbed-node-0]
2026-02-08 03:15:38.712820 | orchestrator | changed: [testbed-node-1]
2026-02-08 03:15:38.712832 | orchestrator | changed: [testbed-node-2]
2026-02-08 03:15:38.712843 | orchestrator |
2026-02-08 03:15:38.712854 | orchestrator | TASK [k3s_server : Create custom resolv.conf for k3s] **************************
2026-02-08 03:15:38.712866 | orchestrator | Sunday 08 February 2026 03:14:41 +0000 (0:00:00.854) 0:00:27.971 *******
2026-02-08 03:15:38.712875 | orchestrator | changed: [testbed-node-2]
2026-02-08 03:15:38.712885 | orchestrator | changed: [testbed-node-1]
2026-02-08 03:15:38.712894 | orchestrator | changed: [testbed-node-0]
2026-02-08 03:15:38.712904 | orchestrator |
2026-02-08 03:15:38.712914 | orchestrator | TASK [k3s_server : Deploy vip manifest] ****************************************
2026-02-08 03:15:38.712923 | orchestrator | Sunday 08 February 2026 03:14:42 +0000 (0:00:01.307) 0:00:29.279 *******
2026-02-08 03:15:38.712933 | orchestrator | included: /ansible/roles/k3s_server/tasks/vip.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-08 03:15:38.712942 | orchestrator |
2026-02-08 03:15:38.712952 | orchestrator | TASK [k3s_server : Set _kube_vip_bgp_peers fact] *******************************
2026-02-08 03:15:38.712962 | orchestrator | Sunday 08 February 2026 03:14:43 +0000 (0:00:00.599) 0:00:29.879 *******
2026-02-08 03:15:38.712971 | orchestrator | ok: [testbed-node-1]
2026-02-08 03:15:38.712981 | orchestrator | ok: [testbed-node-0]
2026-02-08 03:15:38.712990 | orchestrator | ok: [testbed-node-2]
2026-02-08 03:15:38.713000 | orchestrator |
2026-02-08 03:15:38.713009 | orchestrator | TASK [k3s_server : Create manifests directory on first master] *****************
2026-02-08 03:15:38.713019 | orchestrator | Sunday 08 February 2026 03:14:44 +0000 (0:00:01.664) 0:00:31.543 *******
2026-02-08 03:15:38.713029 | orchestrator | skipping: [testbed-node-1]
2026-02-08 03:15:38.713038 | orchestrator | skipping: [testbed-node-2]
2026-02-08 03:15:38.713048 | orchestrator | changed: [testbed-node-0]
2026-02-08 03:15:38.713058 | orchestrator |
2026-02-08 03:15:38.713067 | orchestrator | TASK [k3s_server : Download vip rbac manifest to first master] *****************
2026-02-08 03:15:38.713077 | orchestrator | Sunday 08 February 2026 03:14:45 +0000 (0:00:00.502) 0:00:32.046 *******
2026-02-08 03:15:38.713086 | orchestrator | skipping: [testbed-node-1]
2026-02-08 03:15:38.713096 | orchestrator | skipping: [testbed-node-2]
2026-02-08 03:15:38.713105 | orchestrator | changed: [testbed-node-0]
2026-02-08 03:15:38.713115 | orchestrator |
2026-02-08 03:15:38.713125 | orchestrator | TASK [k3s_server : Copy vip manifest to first master] **************************
2026-02-08 03:15:38.713134 | orchestrator | Sunday 08 February 2026 03:14:46 +0000 (0:00:01.037) 0:00:33.083 *******
2026-02-08 03:15:38.713144 | orchestrator | skipping: [testbed-node-1]
2026-02-08 03:15:38.713153 | orchestrator | skipping: [testbed-node-2]
2026-02-08 03:15:38.713163 | orchestrator | changed: [testbed-node-0]
2026-02-08 03:15:38.713264 | orchestrator |
2026-02-08 03:15:38.713279 | orchestrator | TASK [k3s_server : Deploy metallb manifest] ************************************
2026-02-08 03:15:38.713308 | orchestrator | Sunday 08 February 2026 03:14:47 +0000 (0:00:01.204) 0:00:34.287 *******
2026-02-08 03:15:38.713319 | orchestrator | skipping: [testbed-node-0]
2026-02-08 03:15:38.713339 | orchestrator | skipping: [testbed-node-1]
2026-02-08 03:15:38.713349 | orchestrator | skipping: [testbed-node-2]
2026-02-08 03:15:38.713359 | orchestrator |
2026-02-08 03:15:38.713369 | orchestrator | TASK [k3s_server : Deploy kube-vip manifest] ***********************************
2026-02-08 03:15:38.713379 | orchestrator | Sunday 08 February 2026 03:14:48 +0000 (0:00:00.539) 0:00:34.827 *******
2026-02-08 03:15:38.713388 | orchestrator | skipping: [testbed-node-0]
2026-02-08 03:15:38.713398 | orchestrator | skipping: [testbed-node-1]
2026-02-08 03:15:38.713408 | orchestrator | skipping: [testbed-node-2]
2026-02-08 03:15:38.713417 | orchestrator |
2026-02-08 03:15:38.713427 | orchestrator | TASK [k3s_server : Init cluster inside the transient k3s-init service] *********
2026-02-08 03:15:38.713437 | orchestrator | Sunday 08 February 2026 03:14:48 +0000 (0:00:00.292) 0:00:35.119 *******
2026-02-08 03:15:38.713446 | orchestrator | changed: [testbed-node-2]
2026-02-08 03:15:38.713456 | orchestrator | changed: [testbed-node-0]
2026-02-08 03:15:38.713466 | orchestrator | changed: [testbed-node-1]
2026-02-08 03:15:38.713476 | orchestrator |
2026-02-08 03:15:38.713492 | orchestrator | TASK [k3s_server : Detect Kubernetes version for label compatibility] **********
2026-02-08 03:15:38.713502 | orchestrator | Sunday 08 February 2026 03:14:49 +0000 (0:00:01.141) 0:00:36.261 *******
2026-02-08 03:15:38.713512 | orchestrator | ok: [testbed-node-2]
2026-02-08 03:15:38.713521 | orchestrator | ok: [testbed-node-1]
2026-02-08 03:15:38.713531 | orchestrator | ok: [testbed-node-0]
2026-02-08 03:15:38.713541 | orchestrator |
2026-02-08 03:15:38.713551 | orchestrator | TASK [k3s_server : Set node role label selector based on Kubernetes version] ***
2026-02-08 03:15:38.713560 | orchestrator | Sunday 08 February 2026 03:14:52 +0000 (0:00:02.833) 0:00:39.094 *******
2026-02-08 03:15:38.713570 | orchestrator | ok: [testbed-node-0]
2026-02-08 03:15:38.713580 | orchestrator | ok: [testbed-node-1]
2026-02-08 03:15:38.713589 | orchestrator | ok: [testbed-node-2]
2026-02-08 03:15:38.713603 | orchestrator |
2026-02-08 03:15:38.713613 | orchestrator | TASK [k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails)] ***
2026-02-08 03:15:38.713624 | orchestrator | Sunday 08 February 2026 03:14:52 +0000 (0:00:00.428) 0:00:39.523 *******
2026-02-08 03:15:38.713634 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left).
2026-02-08 03:15:38.713646 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left).
2026-02-08 03:15:38.713656 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left).
2026-02-08 03:15:38.713682 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left).
2026-02-08 03:15:38.713693 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left).
2026-02-08 03:15:38.713716 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left).
2026-02-08 03:15:38.713736 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left).
2026-02-08 03:15:38.713746 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left).
2026-02-08 03:15:38.713755 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left).
2026-02-08 03:15:38.713765 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left).
2026-02-08 03:15:38.713774 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left).
2026-02-08 03:15:38.713793 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left).
2026-02-08 03:15:38.713803 | orchestrator | ok: [testbed-node-0]
2026-02-08 03:15:38.713813 | orchestrator | ok: [testbed-node-1]
2026-02-08 03:15:38.713822 | orchestrator | ok: [testbed-node-2]
2026-02-08 03:15:38.713829 | orchestrator |
2026-02-08 03:15:38.713837 | orchestrator | TASK [k3s_server : Save logs of k3s-init.service] ******************************
2026-02-08 03:15:38.713845 | orchestrator | Sunday 08 February 2026 03:15:36 +0000 (0:00:43.467) 0:01:22.990 *******
2026-02-08 03:15:38.713853 | orchestrator | skipping: [testbed-node-0]
2026-02-08 03:15:38.713861 | orchestrator | skipping: [testbed-node-1]
2026-02-08 03:15:38.713869 | orchestrator | skipping: [testbed-node-2]
2026-02-08 03:15:38.713877 | orchestrator |
2026-02-08 03:15:38.713885 | orchestrator | TASK [k3s_server : Kill the temporary service used for initialization] *********
2026-02-08 03:15:38.713893 | orchestrator | Sunday 08 February 2026 03:15:36 +0000 (0:00:00.318) 0:01:23.308 *******
2026-02-08 03:15:38.713901 | orchestrator | changed: [testbed-node-0]
2026-02-08 03:15:38.713913 | orchestrator | changed: [testbed-node-1]
2026-02-08 03:15:38.713921 | orchestrator | changed: [testbed-node-2]
2026-02-08 03:15:38.713929 | orchestrator |
2026-02-08 03:15:38.713938 | orchestrator | TASK [k3s_server : Copy K3s service file] **************************************
2026-02-08 03:15:38.713946 | orchestrator | Sunday 08 February 2026 03:15:37 +0000 (0:00:00.925) 0:01:24.234 *******
2026-02-08 03:15:38.713953 | orchestrator | changed: [testbed-node-0]
2026-02-08 03:15:38.713961 | orchestrator | changed: [testbed-node-1]
2026-02-08 03:15:38.713969 | orchestrator | changed: [testbed-node-2]
2026-02-08 03:15:38.713977 | orchestrator |
2026-02-08 03:15:38.713999 | orchestrator | TASK [k3s_server : Enable and check K3s service] *******************************
2026-02-08 03:16:19.140146 | orchestrator | Sunday 08 February 2026 03:15:38 +0000 (0:00:01.147) 0:01:25.382 *******
2026-02-08 03:16:19.140315 | orchestrator | changed: [testbed-node-2]
2026-02-08 03:16:19.140332 | orchestrator | changed: [testbed-node-0]
2026-02-08 03:16:19.140343 | orchestrator | changed: [testbed-node-1]
2026-02-08 03:16:19.140353 | orchestrator |
2026-02-08 03:16:19.140364 | orchestrator | TASK [k3s_server : Wait for node-token] ****************************************
2026-02-08 03:16:19.140374 | orchestrator | Sunday 08 February 2026 03:16:03 +0000 (0:00:25.230) 0:01:50.612 *******
2026-02-08 03:16:19.140385 | orchestrator | ok: [testbed-node-0]
2026-02-08 03:16:19.140411 | orchestrator | ok: [testbed-node-1]
2026-02-08 03:16:19.140421 | orchestrator | ok: [testbed-node-2]
2026-02-08 03:16:19.140431 | orchestrator |
2026-02-08 03:16:19.140441 | orchestrator | TASK [k3s_server : Register node-token file access mode] ***********************
2026-02-08 03:16:19.140451 | orchestrator | Sunday 08 February 2026 03:16:04 +0000 (0:00:00.629) 0:01:51.242 *******
2026-02-08 03:16:19.140461 | orchestrator | ok: [testbed-node-0]
2026-02-08 03:16:19.140471 | orchestrator | ok: [testbed-node-1]
2026-02-08 03:16:19.140480 | orchestrator | ok: [testbed-node-2]
2026-02-08 03:16:19.140491 | orchestrator |
2026-02-08 03:16:19.140501 | orchestrator | TASK [k3s_server : Change file access node-token] ******************************
2026-02-08 03:16:19.140510 | orchestrator | Sunday 08 February 2026 03:16:05 +0000 (0:00:00.664) 0:01:51.907 *******
2026-02-08 03:16:19.140520 | orchestrator | changed: [testbed-node-0]
2026-02-08 03:16:19.140530 | orchestrator | changed: [testbed-node-1]
2026-02-08 03:16:19.140540 | orchestrator | changed: [testbed-node-2]
2026-02-08 03:16:19.140550 | orchestrator |
2026-02-08 03:16:19.140559 | orchestrator | TASK [k3s_server : Read node-token from master] ********************************
2026-02-08 03:16:19.140569 | orchestrator | Sunday 08 February 2026 03:16:05 +0000 (0:00:00.640) 0:01:52.547 *******
2026-02-08 03:16:19.140579 | orchestrator | ok: [testbed-node-0]
2026-02-08 03:16:19.140588 | orchestrator | ok: [testbed-node-1]
2026-02-08 03:16:19.140598 | orchestrator | ok: [testbed-node-2]
2026-02-08 03:16:19.140608 | orchestrator |
2026-02-08 03:16:19.140617 | orchestrator | TASK [k3s_server : Store Master node-token] ************************************
2026-02-08 03:16:19.140627 | orchestrator | Sunday 08 February 2026 03:16:06 +0000 (0:00:00.798) 0:01:53.345 *******
2026-02-08 03:16:19.140661 | orchestrator | ok: [testbed-node-0]
2026-02-08 03:16:19.140671 | orchestrator | ok: [testbed-node-1]
2026-02-08 03:16:19.140681 | orchestrator | ok: [testbed-node-2]
2026-02-08 03:16:19.140690 | orchestrator |
2026-02-08 03:16:19.140700 | orchestrator | TASK [k3s_server : Restore node-token file access] *****************************
2026-02-08 03:16:19.140709 | orchestrator | Sunday 08 February 2026 03:16:06 +0000 (0:00:00.300) 0:01:53.646 *******
2026-02-08 03:16:19.140718 | orchestrator | changed: [testbed-node-0]
2026-02-08 03:16:19.140728 | orchestrator | changed: [testbed-node-1]
2026-02-08 03:16:19.140737 | orchestrator | changed: [testbed-node-2]
2026-02-08 03:16:19.140746 | orchestrator |
2026-02-08 03:16:19.140756 | orchestrator | TASK [k3s_server : Create directory .kube] *************************************
2026-02-08 03:16:19.140766 | orchestrator | Sunday 08 February 2026 03:16:07 +0000 (0:00:00.613) 0:01:54.260 *******
2026-02-08 03:16:19.140775 | orchestrator | changed: [testbed-node-0]
2026-02-08 03:16:19.140785 | orchestrator | changed: [testbed-node-1]
2026-02-08 03:16:19.140794 | orchestrator | changed: [testbed-node-2]
2026-02-08 03:16:19.140804 | orchestrator |
2026-02-08 03:16:19.140820 | orchestrator | TASK [k3s_server : Copy config file to user home directory] ********************
2026-02-08 03:16:19.140836 | orchestrator | Sunday 08 February 2026 03:16:08 +0000 (0:00:00.649) 0:01:54.909 *******
2026-02-08 03:16:19.140852 | orchestrator | changed: [testbed-node-0]
2026-02-08 03:16:19.140869 | orchestrator | changed: [testbed-node-1]
2026-02-08 03:16:19.140882 | orchestrator | changed: [testbed-node-2]
2026-02-08 03:16:19.140891 | orchestrator |
2026-02-08 03:16:19.140902 | orchestrator | TASK [k3s_server : Configure kubectl cluster to https://192.168.16.8:6443] *****
2026-02-08 03:16:19.140920 | orchestrator | Sunday 08 February 2026 03:16:09 +0000 (0:00:00.878) 0:01:55.788 *******
2026-02-08 03:16:19.140936 | orchestrator | changed: [testbed-node-0]
2026-02-08 03:16:19.140947 | orchestrator | changed: [testbed-node-1]
2026-02-08 03:16:19.140957 | orchestrator | changed: [testbed-node-2]
2026-02-08 03:16:19.140966 | orchestrator |
2026-02-08 03:16:19.140976 | orchestrator | TASK [k3s_server : Create kubectl symlink] *************************************
2026-02-08 03:16:19.140986 | orchestrator | Sunday 08 February 2026 03:16:10 +0000 (0:00:01.068) 0:01:56.857 *******
2026-02-08 03:16:19.140995 | orchestrator | skipping: [testbed-node-0]
2026-02-08 03:16:19.141005 | orchestrator | skipping: [testbed-node-1]
2026-02-08 03:16:19.141014 | orchestrator | skipping: [testbed-node-2]
2026-02-08 03:16:19.141024 | orchestrator |
2026-02-08 03:16:19.141036 | orchestrator | TASK [k3s_server : Create crictl symlink] **************************************
2026-02-08 03:16:19.141045 | orchestrator | Sunday 08 February 2026 03:16:10 +0000 (0:00:00.278) 0:01:57.136 *******
2026-02-08 03:16:19.141055 | orchestrator | skipping: [testbed-node-0]
2026-02-08 03:16:19.141065 | orchestrator | skipping: [testbed-node-1]
2026-02-08 03:16:19.141074 | orchestrator | skipping: [testbed-node-2]
2026-02-08 03:16:19.141083 | orchestrator |
2026-02-08 03:16:19.141093 | orchestrator | TASK [k3s_server : Get contents of manifests folder] ***************************
2026-02-08 03:16:19.141103 | orchestrator | Sunday 08 February 2026 03:16:10 +0000 (0:00:00.282) 0:01:57.419 *******
2026-02-08 03:16:19.141112 | orchestrator | ok: [testbed-node-2]
2026-02-08 03:16:19.141122 | orchestrator | ok: [testbed-node-1]
2026-02-08 03:16:19.141131 | orchestrator | ok: [testbed-node-0]
2026-02-08 03:16:19.141141 | orchestrator |
2026-02-08 03:16:19.141150 | orchestrator | TASK [k3s_server : Get sub dirs of manifests folder] ***************************
2026-02-08 03:16:19.141160 | orchestrator | Sunday 08 February 2026 03:16:11 +0000 (0:00:00.653) 0:01:58.073 *******
2026-02-08 03:16:19.141231 | orchestrator | ok: [testbed-node-0]
2026-02-08 03:16:19.141241 | orchestrator | ok: [testbed-node-1]
2026-02-08 03:16:19.141251 | orchestrator | ok: [testbed-node-2]
2026-02-08 03:16:19.141260 | orchestrator |
2026-02-08 03:16:19.141270 | orchestrator | TASK [k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start] ***
2026-02-08 03:16:19.141282 | orchestrator | Sunday 08 February 2026 03:16:12 +0000 (0:00:00.903) 0:01:58.976 *******
2026-02-08 03:16:19.141301 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml)
2026-02-08 03:16:19.141312 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml)
2026-02-08 03:16:19.141351 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml)
2026-02-08 03:16:19.141361 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml)
2026-02-08 03:16:19.141371 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml)
2026-02-08 03:16:19.141381 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml)
2026-02-08 03:16:19.141390 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml)
2026-02-08 03:16:19.141401 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml)
2026-02-08 03:16:19.141410 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml)
2026-02-08 03:16:19.141420 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip.yaml)
2026-02-08 03:16:19.141430 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml)
2026-02-08 03:16:19.141439 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml)
2026-02-08 03:16:19.141449 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip-rbac.yaml)
2026-02-08 03:16:19.141466 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml)
2026-02-08 03:16:19.141483 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml)
2026-02-08 03:16:19.141498 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml)
2026-02-08 03:16:19.141509 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server)
2026-02-08 03:16:19.141518 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server)
2026-02-08 03:16:19.141528 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml)
2026-02-08 03:16:19.141537 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server)
2026-02-08 03:16:19.141547 | orchestrator |
2026-02-08 03:16:19.141556 | orchestrator | PLAY [Deploy k3s worker nodes] *************************************************
2026-02-08 03:16:19.141566 | orchestrator |
2026-02-08 03:16:19.141575 | orchestrator | TASK [k3s_agent : Validating arguments against arg spec 'main' - Setup k3s agents] ***
2026-02-08 03:16:19.141585 | orchestrator | Sunday 08 February 2026 03:16:15 +0000 (0:00:02.978) 0:02:01.955 *******
2026-02-08 03:16:19.141594 | orchestrator | ok: [testbed-node-3]
2026-02-08 03:16:19.141604 | orchestrator | ok: [testbed-node-4]
2026-02-08 03:16:19.141613 | orchestrator | ok: [testbed-node-5]
2026-02-08 03:16:19.141623 | orchestrator |
2026-02-08 03:16:19.141632 | orchestrator | TASK [k3s_agent : Check if system is PXE-booted] *******************************
2026-02-08 03:16:19.141642 | orchestrator | Sunday 08 February 2026 03:16:15 +0000 (0:00:00.343) 0:02:02.298 *******
2026-02-08 03:16:19.141651 | orchestrator | ok: [testbed-node-3]
2026-02-08 03:16:19.141661 | orchestrator | ok: [testbed-node-4]
2026-02-08 03:16:19.141670 | orchestrator | ok: [testbed-node-5]
2026-02-08 03:16:19.141679 | orchestrator |
2026-02-08 03:16:19.141689 | orchestrator | TASK [k3s_agent : Set fact for PXE-booted system] ******************************
2026-02-08 03:16:19.141698 | orchestrator | Sunday 08 February 2026 03:16:16 +0000 (0:00:00.844) 0:02:03.143 *******
2026-02-08 03:16:19.141708 | orchestrator | ok: [testbed-node-3]
2026-02-08 03:16:19.141726 | orchestrator | ok: [testbed-node-4]
2026-02-08 03:16:19.141736 | orchestrator | ok: [testbed-node-5]
2026-02-08 03:16:19.141746 | orchestrator |
2026-02-08 03:16:19.141755 | orchestrator | TASK [k3s_agent : Include http_proxy configuration tasks] **********************
2026-02-08 03:16:19.141771 | orchestrator | Sunday 08 February 2026 03:16:16 +0000 (0:00:00.333) 0:02:03.477 *******
2026-02-08 03:16:19.141781 | orchestrator | included: /ansible/roles/k3s_agent/tasks/http_proxy.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-02-08 03:16:19.141791 | orchestrator |
2026-02-08 03:16:19.141800 | orchestrator | TASK [k3s_agent : Create k3s-node.service.d directory] *************************
2026-02-08 03:16:19.141810 | orchestrator | Sunday 08 February 2026 03:16:17 +0000 (0:00:00.494) 0:02:03.971 *******
2026-02-08 03:16:19.141819 | orchestrator | skipping: [testbed-node-3]
2026-02-08 03:16:19.141829 | orchestrator | skipping: [testbed-node-4]
2026-02-08 03:16:19.141839 | orchestrator | skipping: [testbed-node-5]
2026-02-08 03:16:19.141848 | orchestrator |
2026-02-08 03:16:19.141858 | orchestrator | TASK [k3s_agent : Copy K3s http_proxy conf file] *******************************
2026-02-08 03:16:19.141867 | orchestrator | Sunday 08 February 2026 03:16:17 +0000 (0:00:00.567) 0:02:04.538 *******
2026-02-08 03:16:19.141877 | orchestrator | skipping: [testbed-node-3]
2026-02-08 03:16:19.141887 | orchestrator | skipping: [testbed-node-4]
2026-02-08 03:16:19.141896 | orchestrator | skipping: [testbed-node-5]
2026-02-08 03:16:19.141905 | orchestrator |
2026-02-08 03:16:19.141915 | orchestrator | TASK [k3s_agent : Deploy K3s http_proxy conf] **********************************
2026-02-08 03:16:19.141924 | orchestrator | Sunday 08 February 2026 03:16:18 +0000 (0:00:00.308) 0:02:04.847 *******
2026-02-08 03:16:19.141934 | orchestrator | skipping: [testbed-node-3]
2026-02-08 03:16:19.141944 | orchestrator | skipping: [testbed-node-4]
2026-02-08 03:16:19.141953 | orchestrator | skipping: [testbed-node-5]
2026-02-08 03:16:19.141963 | orchestrator |
2026-02-08 03:16:19.141972 | orchestrator | TASK [k3s_agent : Create /etc/rancher/k3s directory] ***************************
2026-02-08 03:16:19.141982 | orchestrator | Sunday 08 February 2026 03:16:18 +0000 (0:00:00.341) 0:02:05.188 *******
2026-02-08 03:16:19.141991 | orchestrator | changed: [testbed-node-3]
2026-02-08 03:16:19.142001 | orchestrator | changed: [testbed-node-4]
2026-02-08 03:16:19.142010 | orchestrator | changed: [testbed-node-5]
2026-02-08 03:16:19.142080 | orchestrator |
2026-02-08 03:16:19.142099 | orchestrator | TASK [k3s_agent : Create custom resolv.conf for k3s] ***************************
2026-02-08 03:17:50.093885 | orchestrator | Sunday 08 February 2026 03:16:19 +0000 (0:00:00.625) 0:02:05.814 *******
2026-02-08 03:17:50.093984 | orchestrator | changed: [testbed-node-3]
2026-02-08 03:17:50.093999 | orchestrator | changed: [testbed-node-4]
2026-02-08 03:17:50.094011 | orchestrator | changed: [testbed-node-5]
2026-02-08 03:17:50.094084 | orchestrator |
2026-02-08 03:17:50.094101 | orchestrator | TASK [k3s_agent : Configure the k3s service] ***********************************
2026-02-08 03:17:50.094118 | orchestrator | Sunday 08 February 2026 03:16:20 +0000 (0:00:01.349) 0:02:07.163 *******
2026-02-08 03:17:50.094128 | orchestrator | changed: [testbed-node-3]
2026-02-08 03:17:50.094137 | orchestrator | changed: [testbed-node-4]
2026-02-08 03:17:50.094146 | orchestrator | changed: [testbed-node-5]
2026-02-08 03:17:50.094202 | orchestrator |
2026-02-08 03:17:50.094212 | orchestrator | TASK [k3s_agent : Manage k3s service] ******************************************
2026-02-08 03:17:50.094221 | orchestrator | Sunday 08 February 2026 03:16:21 +0000 (0:00:01.183) 0:02:08.347 *******
2026-02-08 03:17:50.094230 | orchestrator | changed: [testbed-node-3]
2026-02-08 03:17:50.094239 | orchestrator | changed: [testbed-node-4]
2026-02-08 03:17:50.094248 | orchestrator | changed: [testbed-node-5]
2026-02-08 03:17:50.094257 | orchestrator |
2026-02-08 03:17:50.094266 | orchestrator | PLAY [Prepare kubeconfig file] *************************************************
2026-02-08 03:17:50.094275 | orchestrator |
2026-02-08 03:17:50.094284 | orchestrator | TASK [Get home directory of operator user] *************************************
2026-02-08 03:17:50.094293 | orchestrator | Sunday 08 February 2026 03:16:31 +0000 (0:00:09.785) 0:02:18.133 *******
2026-02-08 03:17:50.094302 | orchestrator | ok: [testbed-manager]
2026-02-08 03:17:50.094312 | orchestrator |
2026-02-08 03:17:50.094321 | orchestrator | TASK [Create .kube directory] **************************************************
2026-02-08 03:17:50.094329 | orchestrator | Sunday 08 February 2026 03:16:32 +0000 (0:00:00.772) 0:02:18.905 *******
2026-02-08 03:17:50.094380 | orchestrator | changed: [testbed-manager]
2026-02-08 03:17:50.094390 | orchestrator |
2026-02-08 03:17:50.094399 | orchestrator | TASK [Get kubeconfig file] *****************************************************
2026-02-08 03:17:50.094408 | orchestrator | Sunday 08 February 2026 03:16:32 +0000 (0:00:00.699) 0:02:19.605 *******
2026-02-08 03:17:50.094417 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)]
2026-02-08 03:17:50.094426 | orchestrator |
2026-02-08 03:17:50.094435 | orchestrator | TASK [Write kubeconfig file] ***************************************************
2026-02-08 03:17:50.094446 | orchestrator | Sunday 08 February 2026 03:16:33 +0000 (0:00:00.542) 0:02:20.148 *******
2026-02-08 03:17:50.094456 | orchestrator | changed: [testbed-manager]
2026-02-08 03:17:50.094467 | orchestrator |
2026-02-08 03:17:50.094478 | orchestrator | TASK [Change server address in the kubeconfig] *********************************
2026-02-08 03:17:50.094489 | orchestrator | Sunday 08 February 2026 03:16:34 +0000 (0:00:00.939) 0:02:21.087 *******
2026-02-08 03:17:50.094500 | orchestrator | changed: [testbed-manager]
2026-02-08 03:17:50.094510 | orchestrator |
2026-02-08 03:17:50.094521 | orchestrator | TASK [Make kubeconfig available for use inside the manager service] ************
2026-02-08 03:17:50.094531 | orchestrator | Sunday 08 February 2026 03:16:34 +0000 (0:00:00.580) 0:02:21.668 *******
2026-02-08 03:17:50.094542 | orchestrator | changed: [testbed-manager -> localhost]
2026-02-08 03:17:50.094552 | orchestrator |
2026-02-08 03:17:50.094562 | orchestrator | TASK [Change server address in the kubeconfig inside the manager service] ******
2026-02-08 03:17:50.094573 | orchestrator | Sunday 08 February 2026 03:16:36 +0000 (0:00:01.625) 0:02:23.294 *******
2026-02-08 03:17:50.094583 | orchestrator | changed: [testbed-manager -> localhost]
2026-02-08 03:17:50.094593 | orchestrator |
2026-02-08 03:17:50.094604 | orchestrator | TASK [Set KUBECONFIG environment variable] *************************************
2026-02-08 03:17:50.094614 | orchestrator | Sunday 08 February 2026 03:16:37 +0000 (0:00:00.868) 0:02:24.162 *******
2026-02-08 03:17:50.094625 | orchestrator | changed: [testbed-manager]
2026-02-08 03:17:50.094635 | orchestrator |
2026-02-08 03:17:50.094646 | orchestrator | TASK [Enable kubectl command line completion] **********************************
2026-02-08 03:17:50.094659 | orchestrator | Sunday 08 February 2026 03:16:37 +0000 (0:00:00.447) 0:02:24.609 *******
2026-02-08 03:17:50.094673 | orchestrator | changed: [testbed-manager]
2026-02-08 03:17:50.094685 | orchestrator |
2026-02-08 03:17:50.094714 | orchestrator | PLAY [Apply role kubectl] ******************************************************
2026-02-08 03:17:50.094728 | orchestrator |
2026-02-08 03:17:50.094747 | orchestrator | TASK [kubectl : Gather variables for each operating system] ********************
2026-02-08 03:17:50.094760 | orchestrator | Sunday 08 February 2026 03:16:38 +0000 (0:00:00.475) 0:02:25.085 *******
2026-02-08 03:17:50.094773 | orchestrator | ok: [testbed-manager]
2026-02-08 03:17:50.094787 | orchestrator |
2026-02-08 03:17:50.094801 | orchestrator | TASK [kubectl : Include distribution specific install tasks] *******************
2026-02-08 03:17:50.094815 | orchestrator | Sunday 08 February 2026 03:16:38 +0000 (0:00:00.357) 0:02:25.442 *******
2026-02-08 03:17:50.094828 | orchestrator | included: /ansible/roles/kubectl/tasks/install-Debian-family.yml for testbed-manager
2026-02-08 03:17:50.094840 | orchestrator |
2026-02-08 03:17:50.094851 | orchestrator | TASK [kubectl : Remove old architecture-dependent repository] ******************
2026-02-08 03:17:50.094862 | orchestrator | Sunday 08 February 2026 03:16:39 +0000 (0:00:00.263) 0:02:25.705 *******
2026-02-08 03:17:50.094873 | orchestrator | ok: [testbed-manager]
2026-02-08 03:17:50.094884 | orchestrator |
2026-02-08 03:17:50.094895 | orchestrator | TASK [kubectl : Install apt-transport-https package] ***************************
2026-02-08 03:17:50.094905 | orchestrator | Sunday 08 February 2026 03:16:39 +0000 (0:00:00.841) 0:02:26.547 *******
2026-02-08 03:17:50.094916 | orchestrator | ok: [testbed-manager]
2026-02-08 03:17:50.094927 | orchestrator |
2026-02-08 03:17:50.094938 | orchestrator | TASK [kubectl : Add repository gpg key] ****************************************
2026-02-08 03:17:50.094949 | orchestrator | Sunday 08 February 2026 03:16:41 +0000 (0:00:01.623) 0:02:28.171 *******
2026-02-08 03:17:50.094968 | orchestrator | changed: [testbed-manager]
2026-02-08 03:17:50.094979 | orchestrator |
2026-02-08 03:17:50.095073 | orchestrator | TASK [kubectl : Set permissions of gpg key] ************************************
2026-02-08 03:17:50.095088 | orchestrator | Sunday 08 February 2026 03:16:42 +0000 (0:00:00.846) 0:02:29.017 *******
2026-02-08 03:17:50.095099 | orchestrator | ok: [testbed-manager]
2026-02-08 03:17:50.095110 | orchestrator |
2026-02-08 03:17:50.095121 | orchestrator | TASK [kubectl : Add repository Debian] *****************************************
2026-02-08 03:17:50.095177 | orchestrator | Sunday 08 February 2026 03:16:42 +0000 (0:00:00.468) 0:02:29.486 *******
2026-02-08 03:17:50.095191 | orchestrator | changed: [testbed-manager]
2026-02-08 03:17:50.095203 | orchestrator |
2026-02-08 03:17:50.095214 | orchestrator | TASK [kubectl : Install required packages] *************************************
2026-02-08 03:17:50.095224 | orchestrator | Sunday 08 February 2026 03:16:50 +0000 (0:00:07.859) 0:02:37.346 *******
2026-02-08 03:17:50.095235 | orchestrator | changed: [testbed-manager]
2026-02-08 03:17:50.095246 | orchestrator |
2026-02-08 03:17:50.095257 | orchestrator | TASK [kubectl : Remove kubectl symlink] ****************************************
2026-02-08 03:17:50.095268 | orchestrator | Sunday 08 February 2026 03:17:03 +0000 (0:00:12.661) 0:02:50.007 *******
2026-02-08 03:17:50.095279 | orchestrator | ok: [testbed-manager]
2026-02-08 03:17:50.095289 | orchestrator |
2026-02-08 03:17:50.095300 | orchestrator | PLAY [Run post actions on master nodes] ****************************************
2026-02-08 03:17:50.095311 | orchestrator |
2026-02-08 03:17:50.095322 | orchestrator | TASK [k3s_server_post : Validating arguments against arg spec 'main' - Configure k3s cluster] ***
2026-02-08 03:17:50.095333 | orchestrator | Sunday 08 February 2026 03:17:04 +0000 (0:00:00.801) 0:02:50.809 *******
2026-02-08 03:17:50.095344 | orchestrator | ok: [testbed-node-0]
2026-02-08 03:17:50.095355 | orchestrator | ok: [testbed-node-1]
2026-02-08 03:17:50.095365 | orchestrator | ok: [testbed-node-2]
2026-02-08 03:17:50.095376 | orchestrator |
2026-02-08 03:17:50.095387 | orchestrator | TASK [k3s_server_post : Deploy calico] *****************************************
2026-02-08 03:17:50.095398 | orchestrator | Sunday 08 February 2026 03:17:04 +0000 (0:00:00.304) 0:02:51.114 *******
2026-02-08 03:17:50.095409 | orchestrator | skipping: [testbed-node-0]
2026-02-08 03:17:50.095419 | orchestrator | skipping: [testbed-node-1]
2026-02-08 03:17:50.095430 | orchestrator | skipping: [testbed-node-2]
2026-02-08 03:17:50.095441 | orchestrator |
2026-02-08 03:17:50.095452 | orchestrator | TASK [k3s_server_post : Deploy cilium] *****************************************
2026-02-08 03:17:50.095463 | orchestrator | Sunday 08 February 2026 03:17:04 +0000 (0:00:00.316) 0:02:51.430 *******
2026-02-08 03:17:50.095474 | orchestrator | included: /ansible/roles/k3s_server_post/tasks/cilium.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-08 03:17:50.095485 | orchestrator |
2026-02-08 03:17:50.095496 | orchestrator | TASK [k3s_server_post : Create tmp directory on first master] ******************
2026-02-08 03:17:50.095507 | orchestrator | Sunday 08 February 2026 03:17:05 +0000 (0:00:00.728) 0:02:52.159 *******
2026-02-08 03:17:50.095517 | orchestrator | changed: [testbed-node-0 -> localhost]
2026-02-08 03:17:50.095528 | orchestrator |
2026-02-08 03:17:50.095539 | orchestrator | TASK [k3s_server_post : Wait for connectivity to kube VIP] *********************
2026-02-08 03:17:50.095550 | orchestrator | Sunday 08 February 2026 03:17:06 +0000 (0:00:00.813) 0:02:52.972 *******
2026-02-08 03:17:50.095561 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-02-08 03:17:50.095572 | orchestrator |
2026-02-08 03:17:50.095583 | orchestrator | TASK [k3s_server_post : Fail if kube VIP not reachable] ************************
2026-02-08 03:17:50.095594 | orchestrator | Sunday 08 February 2026 03:17:07 +0000 (0:00:00.906) 0:02:53.879 *******
2026-02-08 03:17:50.095605 | orchestrator | skipping: [testbed-node-0]
2026-02-08 03:17:50.095615 | orchestrator |
2026-02-08 03:17:50.095626 | orchestrator | TASK [k3s_server_post : Test for existing Cilium install] **********************
2026-02-08 03:17:50.095637 | orchestrator | Sunday 08 February 2026 03:17:07 +0000 (0:00:00.113) 0:02:53.993 *******
2026-02-08 03:17:50.095648 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-02-08 03:17:50.095659 | orchestrator |
2026-02-08 03:17:50.095679 | orchestrator | TASK [k3s_server_post : Check Cilium version] **********************************
2026-02-08
03:17:50.095690 | orchestrator | Sunday 08 February 2026 03:17:08 +0000 (0:00:00.975) 0:02:54.968 ******* 2026-02-08 03:17:50.095700 | orchestrator | skipping: [testbed-node-0] 2026-02-08 03:17:50.095711 | orchestrator | 2026-02-08 03:17:50.095722 | orchestrator | TASK [k3s_server_post : Parse installed Cilium version] ************************ 2026-02-08 03:17:50.095733 | orchestrator | Sunday 08 February 2026 03:17:08 +0000 (0:00:00.122) 0:02:55.091 ******* 2026-02-08 03:17:50.095744 | orchestrator | skipping: [testbed-node-0] 2026-02-08 03:17:50.095754 | orchestrator | 2026-02-08 03:17:50.095765 | orchestrator | TASK [k3s_server_post : Determine if Cilium needs update] ********************** 2026-02-08 03:17:50.095776 | orchestrator | Sunday 08 February 2026 03:17:08 +0000 (0:00:00.115) 0:02:55.207 ******* 2026-02-08 03:17:50.095787 | orchestrator | skipping: [testbed-node-0] 2026-02-08 03:17:50.095797 | orchestrator | 2026-02-08 03:17:50.095808 | orchestrator | TASK [k3s_server_post : Log result] ******************************************** 2026-02-08 03:17:50.095819 | orchestrator | Sunday 08 February 2026 03:17:08 +0000 (0:00:00.122) 0:02:55.329 ******* 2026-02-08 03:17:50.095830 | orchestrator | skipping: [testbed-node-0] 2026-02-08 03:17:50.095841 | orchestrator | 2026-02-08 03:17:50.095852 | orchestrator | TASK [k3s_server_post : Install Cilium] **************************************** 2026-02-08 03:17:50.095863 | orchestrator | Sunday 08 February 2026 03:17:08 +0000 (0:00:00.121) 0:02:55.451 ******* 2026-02-08 03:17:50.095874 | orchestrator | changed: [testbed-node-0 -> localhost] 2026-02-08 03:17:50.095885 | orchestrator | 2026-02-08 03:17:50.095896 | orchestrator | TASK [k3s_server_post : Wait for Cilium resources] ***************************** 2026-02-08 03:17:50.095913 | orchestrator | Sunday 08 February 2026 03:17:15 +0000 (0:00:06.376) 0:03:01.827 ******* 2026-02-08 03:17:50.095924 | orchestrator | ok: [testbed-node-0 -> localhost] => 
(item=deployment/cilium-operator) 2026-02-08 03:17:50.095936 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=daemonset/cilium) 2026-02-08 03:17:50.095947 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-relay) 2026-02-08 03:17:50.095958 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-ui) 2026-02-08 03:17:50.095969 | orchestrator | 2026-02-08 03:17:50.095979 | orchestrator | TASK [k3s_server_post : Set _cilium_bgp_neighbors fact] ************************ 2026-02-08 03:17:50.095990 | orchestrator | Sunday 08 February 2026 03:17:48 +0000 (0:00:33.617) 0:03:35.445 ******* 2026-02-08 03:17:50.096001 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-02-08 03:17:50.096012 | orchestrator | 2026-02-08 03:17:50.096023 | orchestrator | TASK [k3s_server_post : Copy BGP manifests to first master] ******************** 2026-02-08 03:17:50.096042 | orchestrator | Sunday 08 February 2026 03:17:50 +0000 (0:00:01.320) 0:03:36.766 ******* 2026-02-08 03:18:12.142405 | orchestrator | changed: [testbed-node-0 -> localhost] 2026-02-08 03:18:12.142498 | orchestrator | 2026-02-08 03:18:12.142510 | orchestrator | TASK [k3s_server_post : Apply BGP manifests] *********************************** 2026-02-08 03:18:12.142520 | orchestrator | Sunday 08 February 2026 03:17:51 +0000 (0:00:01.549) 0:03:38.315 ******* 2026-02-08 03:18:12.142529 | orchestrator | changed: [testbed-node-0 -> localhost] 2026-02-08 03:18:12.142537 | orchestrator | 2026-02-08 03:18:12.142545 | orchestrator | TASK [k3s_server_post : Print error message if BGP manifests application fails] *** 2026-02-08 03:18:12.142554 | orchestrator | Sunday 08 February 2026 03:17:53 +0000 (0:00:01.380) 0:03:39.695 ******* 2026-02-08 03:18:12.142562 | orchestrator | skipping: [testbed-node-0] 2026-02-08 03:18:12.142570 | orchestrator | 2026-02-08 03:18:12.142578 | orchestrator | TASK [k3s_server_post : Test for BGP config resources] ************************* 
2026-02-08 03:18:12.142585 | orchestrator | Sunday 08 February 2026 03:17:53 +0000 (0:00:00.196) 0:03:39.892 *******
2026-02-08 03:18:12.142593 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=kubectl get CiliumBGPPeeringPolicy.cilium.io)
2026-02-08 03:18:12.142602 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=kubectl get CiliumLoadBalancerIPPool.cilium.io)
2026-02-08 03:18:12.142610 | orchestrator |
2026-02-08 03:18:12.142640 | orchestrator | TASK [k3s_server_post : Deploy metallb pool] ***********************************
2026-02-08 03:18:12.142649 | orchestrator | Sunday 08 February 2026 03:17:55 +0000 (0:00:01.965) 0:03:41.858 *******
2026-02-08 03:18:12.142656 | orchestrator | skipping: [testbed-node-0]
2026-02-08 03:18:12.142664 | orchestrator | skipping: [testbed-node-1]
2026-02-08 03:18:12.142672 | orchestrator | skipping: [testbed-node-2]
2026-02-08 03:18:12.142679 | orchestrator |
2026-02-08 03:18:12.142687 | orchestrator | TASK [k3s_server_post : Remove tmp directory used for manifests] ***************
2026-02-08 03:18:12.142695 | orchestrator | Sunday 08 February 2026 03:17:55 +0000 (0:00:00.301) 0:03:42.159 *******
2026-02-08 03:18:12.142703 | orchestrator | ok: [testbed-node-0]
2026-02-08 03:18:12.142711 | orchestrator | ok: [testbed-node-1]
2026-02-08 03:18:12.142718 | orchestrator | ok: [testbed-node-2]
2026-02-08 03:18:12.142726 | orchestrator |
2026-02-08 03:18:12.142734 | orchestrator | PLAY [Apply role k9s] **********************************************************
2026-02-08 03:18:12.142741 | orchestrator |
2026-02-08 03:18:12.142749 | orchestrator | TASK [k9s : Gather variables for each operating system] ************************
2026-02-08 03:18:12.142757 | orchestrator | Sunday 08 February 2026 03:17:56 +0000 (0:00:00.871) 0:03:43.030 *******
2026-02-08 03:18:12.142765 | orchestrator | ok: [testbed-manager]
2026-02-08 03:18:12.142772 | orchestrator |
2026-02-08 03:18:12.142780 | orchestrator | TASK [k9s : Include distribution specific install tasks] ***********************
2026-02-08 03:18:12.142788 | orchestrator | Sunday 08 February 2026 03:17:56 +0000 (0:00:00.380) 0:03:43.411 *******
2026-02-08 03:18:12.142796 | orchestrator | included: /ansible/roles/k9s/tasks/install-Debian-family.yml for testbed-manager
2026-02-08 03:18:12.142803 | orchestrator |
2026-02-08 03:18:12.142811 | orchestrator | TASK [k9s : Install k9s packages] **********************************************
2026-02-08 03:18:12.142820 | orchestrator | Sunday 08 February 2026 03:17:56 +0000 (0:00:00.258) 0:03:43.669 *******
2026-02-08 03:18:12.142827 | orchestrator | changed: [testbed-manager]
2026-02-08 03:18:12.142835 | orchestrator |
2026-02-08 03:18:12.142843 | orchestrator | PLAY [Manage labels, annotations, and taints on all k3s nodes] *****************
2026-02-08 03:18:12.142850 | orchestrator |
2026-02-08 03:18:12.142858 | orchestrator | TASK [Merge labels, annotations, and taints] ***********************************
2026-02-08 03:18:12.142866 | orchestrator | Sunday 08 February 2026 03:18:02 +0000 (0:00:05.308) 0:03:48.978 *******
2026-02-08 03:18:12.142873 | orchestrator | ok: [testbed-node-3]
2026-02-08 03:18:12.142881 | orchestrator | ok: [testbed-node-4]
2026-02-08 03:18:12.142889 | orchestrator | ok: [testbed-node-5]
2026-02-08 03:18:12.142896 | orchestrator | ok: [testbed-node-0]
2026-02-08 03:18:12.142904 | orchestrator | ok: [testbed-node-1]
2026-02-08 03:18:12.142912 | orchestrator | ok: [testbed-node-2]
2026-02-08 03:18:12.142919 | orchestrator |
2026-02-08 03:18:12.142927 | orchestrator | TASK [Manage labels] ***********************************************************
2026-02-08 03:18:12.142935 | orchestrator | Sunday 08 February 2026 03:18:02 +0000 (0:00:00.580) 0:03:49.558 *******
2026-02-08 03:18:12.142942 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/compute-plane=true)
2026-02-08 03:18:12.142950 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/compute-plane=true)
2026-02-08 03:18:12.142959 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/control-plane=true)
2026-02-08 03:18:12.142969 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/compute-plane=true)
2026-02-08 03:18:12.142978 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/control-plane=true)
2026-02-08 03:18:12.142987 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/control-plane=true)
2026-02-08 03:18:12.142996 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.kubernetes.io/worker=worker)
2026-02-08 03:18:12.143006 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=openstack-control-plane=enabled)
2026-02-08 03:18:12.143015 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.kubernetes.io/worker=worker)
2026-02-08 03:18:12.143030 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=openstack-control-plane=enabled)
2026-02-08 03:18:12.143040 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.kubernetes.io/worker=worker)
2026-02-08 03:18:12.143048 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=openstack-control-plane=enabled)
2026-02-08 03:18:12.143058 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/rook-osd=true)
2026-02-08 03:18:12.143067 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/network-plane=true)
2026-02-08 03:18:12.143076 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/rook-osd=true)
2026-02-08 03:18:12.143099 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/rook-osd=true)
2026-02-08 03:18:12.143109 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/network-plane=true)
2026-02-08 03:18:12.143118 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/network-plane=true)
2026-02-08 03:18:12.143128 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mds=true)
2026-02-08 03:18:12.143137 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mds=true)
2026-02-08 03:18:12.143167 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mds=true)
2026-02-08 03:18:12.143192 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mgr=true)
2026-02-08 03:18:12.143202 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mgr=true)
2026-02-08 03:18:12.143211 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mgr=true)
2026-02-08 03:18:12.143220 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mon=true)
2026-02-08 03:18:12.143230 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mon=true)
2026-02-08 03:18:12.143239 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mon=true)
2026-02-08 03:18:12.143248 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-rgw=true)
2026-02-08 03:18:12.143257 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-rgw=true)
2026-02-08 03:18:12.143267 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-rgw=true)
2026-02-08 03:18:12.143276 | orchestrator |
2026-02-08 03:18:12.143285 | orchestrator | TASK [Manage annotations] ******************************************************
2026-02-08 03:18:12.143294 | orchestrator | Sunday 08 February 2026 03:18:10 +0000 (0:00:08.100) 0:03:57.659 *******
2026-02-08 03:18:12.143304 | orchestrator | skipping: [testbed-node-3]
2026-02-08 03:18:12.143314 | orchestrator | skipping: [testbed-node-4]
2026-02-08 03:18:12.143322 | orchestrator | skipping: [testbed-node-5]
2026-02-08 03:18:12.143330 | orchestrator | skipping: [testbed-node-0]
2026-02-08 03:18:12.143338 | orchestrator | skipping: [testbed-node-1]
2026-02-08 03:18:12.143345 | orchestrator | skipping: [testbed-node-2]
2026-02-08 03:18:12.143353 | orchestrator |
2026-02-08 03:18:12.143361 | orchestrator | TASK [Manage taints] ***********************************************************
2026-02-08 03:18:12.143369 | orchestrator | Sunday 08 February 2026 03:18:11 +0000 (0:00:00.518) 0:03:58.177 *******
2026-02-08 03:18:12.143377 | orchestrator | skipping: [testbed-node-3]
2026-02-08 03:18:12.143385 | orchestrator | skipping: [testbed-node-4]
2026-02-08 03:18:12.143392 | orchestrator | skipping: [testbed-node-5]
2026-02-08 03:18:12.143400 | orchestrator | skipping: [testbed-node-0]
2026-02-08 03:18:12.143408 | orchestrator | skipping: [testbed-node-1]
2026-02-08 03:18:12.143416 | orchestrator | skipping: [testbed-node-2]
2026-02-08 03:18:12.143423 | orchestrator |
2026-02-08 03:18:12.143431 | orchestrator | PLAY RECAP *********************************************************************
2026-02-08 03:18:12.143439 | orchestrator | testbed-manager : ok=21  changed=11  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-08 03:18:12.143459 | orchestrator | testbed-node-0 : ok=50  changed=23  unreachable=0 failed=0 skipped=28  rescued=0 ignored=0
2026-02-08 03:18:12.143468 | orchestrator | testbed-node-1 : ok=38  changed=16  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0
2026-02-08 03:18:12.143475 | orchestrator | testbed-node-2 : ok=38  changed=16  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0
2026-02-08 03:18:12.143483 | orchestrator | testbed-node-3 : ok=16  changed=8  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0
2026-02-08 03:18:12.143491 | orchestrator | testbed-node-4 : ok=16  changed=8  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0
2026-02-08 03:18:12.143499 | orchestrator | testbed-node-5 : ok=16  changed=8  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0
2026-02-08 03:18:12.143507 | orchestrator |
2026-02-08 03:18:12.143514 | orchestrator |
2026-02-08 03:18:12.143522 | orchestrator | TASKS RECAP ********************************************************************
2026-02-08 03:18:12.143530 | orchestrator | Sunday 08 February 2026 03:18:12 +0000 (0:00:00.629) 0:03:58.807 *******
2026-02-08 03:18:12.143538 | orchestrator | ===============================================================================
2026-02-08 03:18:12.143546 | orchestrator | k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails) -- 43.47s
2026-02-08 03:18:12.143554 | orchestrator | k3s_server_post : Wait for Cilium resources ---------------------------- 33.62s
2026-02-08 03:18:12.143562 | orchestrator | k3s_server : Enable and check K3s service ------------------------------ 25.23s
2026-02-08 03:18:12.143570 | orchestrator | kubectl : Install required packages ------------------------------------ 12.66s
2026-02-08 03:18:12.143577 | orchestrator | k3s_agent : Manage k3s service ------------------------------------------ 9.79s
2026-02-08 03:18:12.143585 | orchestrator | Manage labels ----------------------------------------------------------- 8.10s
2026-02-08 03:18:12.143598 | orchestrator | kubectl : Add repository Debian ----------------------------------------- 7.86s
2026-02-08 03:18:12.530561 | orchestrator | k3s_server_post : Install Cilium ---------------------------------------- 6.38s
2026-02-08 03:18:12.530648 | orchestrator | k3s_download : Download k3s binary x64 ---------------------------------- 5.34s
2026-02-08 03:18:12.530658 | orchestrator | k9s : Install k9s packages ---------------------------------------------- 5.31s
2026-02-08 03:18:12.530666 | orchestrator | k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start --- 2.98s
2026-02-08 03:18:12.530675 | orchestrator | k3s_server : Detect Kubernetes version for label compatibility ---------- 2.83s
2026-02-08 03:18:12.530682 | orchestrator | k3s_prereq : Enable IPv4 forwarding ------------------------------------- 2.10s
2026-02-08 03:18:12.530688 | orchestrator | k3s_server_post : Test for BGP config resources ------------------------- 1.97s
2026-02-08 03:18:12.530695 | orchestrator | k3s_download : Download k3s binary armhf -------------------------------- 1.69s
2026-02-08 03:18:12.530701 | orchestrator | k3s_server : Set _kube_vip_bgp_peers fact ------------------------------- 1.66s
2026-02-08 03:18:12.530707 | orchestrator | Make kubeconfig available for use inside the manager service ------------ 1.63s
2026-02-08 03:18:12.530713 | orchestrator | k3s_prereq : Add /usr/local/bin to sudo secure_path --------------------- 1.63s
2026-02-08 03:18:12.530721 | orchestrator | kubectl : Install apt-transport-https package --------------------------- 1.62s
2026-02-08 03:18:12.530727 | orchestrator | k3s_server_post : Copy BGP manifests to first master -------------------- 1.55s
2026-02-08 03:18:12.865615 | orchestrator | + osism apply copy-kubeconfig
2026-02-08 03:18:24.903702 | orchestrator | 2026-02-08 03:18:24 | INFO  | Task 54f0e6d5-650f-49a0-a088-84cece82f378 (copy-kubeconfig) was prepared for execution.
2026-02-08 03:18:24.903818 | orchestrator | 2026-02-08 03:18:24 | INFO  | It takes a moment until task 54f0e6d5-650f-49a0-a088-84cece82f378 (copy-kubeconfig) has been started and output is visible here.
2026-02-08 03:18:32.175423 | orchestrator |
2026-02-08 03:18:32.175538 | orchestrator | PLAY [Copy kubeconfig to the configuration repository] *************************
2026-02-08 03:18:32.175553 | orchestrator |
2026-02-08 03:18:32.175560 | orchestrator | TASK [Get kubeconfig file] *****************************************************
2026-02-08 03:18:32.175569 | orchestrator | Sunday 08 February 2026 03:18:29 +0000 (0:00:00.174) 0:00:00.174 *******
2026-02-08 03:18:32.175578 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)]
2026-02-08 03:18:32.175587 | orchestrator |
2026-02-08 03:18:32.175606 | orchestrator | TASK [Write kubeconfig file] ***************************************************
2026-02-08 03:18:32.175615 | orchestrator | Sunday 08 February 2026 03:18:30 +0000 (0:00:00.734) 0:00:00.909 *******
2026-02-08 03:18:32.175623 | orchestrator | changed: [testbed-manager]
2026-02-08 03:18:32.175632 | orchestrator |
2026-02-08 03:18:32.175641 | orchestrator | TASK [Change server address in the kubeconfig file] ****************************
2026-02-08 03:18:32.175647 | orchestrator | Sunday 08 February 2026 03:18:31 +0000 (0:00:01.335) 0:00:02.244 *******
2026-02-08 03:18:32.175652 | orchestrator | changed: [testbed-manager]
2026-02-08 03:18:32.175657 | orchestrator |
2026-02-08 03:18:32.175662 | orchestrator | PLAY RECAP *********************************************************************
2026-02-08 03:18:32.175667 | orchestrator | testbed-manager : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-08 03:18:32.175673 | orchestrator |
2026-02-08 03:18:32.175678 | orchestrator |
2026-02-08 03:18:32.175685 | orchestrator | TASKS RECAP ********************************************************************
2026-02-08 03:18:32.175704 | orchestrator | Sunday 08 February 2026 03:18:31 +0000 (0:00:00.491) 0:00:02.735 *******
2026-02-08 03:18:32.175709 | orchestrator | ===============================================================================
2026-02-08 03:18:32.175717 | orchestrator | Write kubeconfig file --------------------------------------------------- 1.34s
2026-02-08 03:18:32.175722 | orchestrator | Get kubeconfig file ----------------------------------------------------- 0.73s
2026-02-08 03:18:32.175726 | orchestrator | Change server address in the kubeconfig file ---------------------------- 0.49s
2026-02-08 03:18:32.515722 | orchestrator | + sh -c /opt/configuration/scripts/deploy/200-infrastructure.sh
2026-02-08 03:18:44.672722 | orchestrator | 2026-02-08 03:18:44 | INFO  | Task 7ab1323f-d1ac-44ed-b639-a9a1c7048deb (openstackclient) was prepared for execution.
2026-02-08 03:18:44.672853 | orchestrator | 2026-02-08 03:18:44 | INFO  | It takes a moment until task 7ab1323f-d1ac-44ed-b639-a9a1c7048deb (openstackclient) has been started and output is visible here.
2026-02-08 03:19:32.550257 | orchestrator |
2026-02-08 03:19:32.550378 | orchestrator | PLAY [Apply role openstackclient] **********************************************
2026-02-08 03:19:32.550395 | orchestrator |
2026-02-08 03:19:32.550410 | orchestrator | TASK [osism.services.openstackclient : Include tasks] **************************
2026-02-08 03:19:32.550430 | orchestrator | Sunday 08 February 2026 03:18:49 +0000 (0:00:00.237) 0:00:00.237 *******
2026-02-08 03:19:32.550451 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/openstackclient/tasks/container-Debian-family.yml for testbed-manager
2026-02-08 03:19:32.550471 | orchestrator |
2026-02-08 03:19:32.550488 | orchestrator | TASK [osism.services.openstackclient : Create required directories] ************
2026-02-08 03:19:32.550506 | orchestrator | Sunday 08 February 2026 03:18:49 +0000 (0:00:00.241) 0:00:00.479 *******
2026-02-08 03:19:32.550525 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/openstack)
2026-02-08 03:19:32.550547 | orchestrator | changed: [testbed-manager] => (item=/opt/openstackclient/data)
2026-02-08 03:19:32.550567 | orchestrator | ok: [testbed-manager] => (item=/opt/openstackclient)
2026-02-08 03:19:32.550584 | orchestrator |
2026-02-08 03:19:32.550595 | orchestrator | TASK [osism.services.openstackclient : Copy docker-compose.yml file] ***********
2026-02-08 03:19:32.550634 | orchestrator | Sunday 08 February 2026 03:18:50 +0000 (0:00:01.277) 0:00:01.756 *******
2026-02-08 03:19:32.550648 | orchestrator | changed: [testbed-manager]
2026-02-08 03:19:32.550660 | orchestrator |
2026-02-08 03:19:32.550674 | orchestrator | TASK [osism.services.openstackclient : Manage openstackclient service] *********
2026-02-08 03:19:32.550686 | orchestrator | Sunday 08 February 2026 03:18:52 +0000 (0:00:01.488) 0:00:03.245 *******
2026-02-08 03:19:32.550698 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage openstackclient service (10 retries left).
2026-02-08 03:19:32.550711 | orchestrator | ok: [testbed-manager]
2026-02-08 03:19:32.550725 | orchestrator |
2026-02-08 03:19:32.550737 | orchestrator | TASK [osism.services.openstackclient : Copy openstack wrapper script] **********
2026-02-08 03:19:32.550750 | orchestrator | Sunday 08 February 2026 03:19:27 +0000 (0:00:34.965) 0:00:38.210 *******
2026-02-08 03:19:32.550762 | orchestrator | changed: [testbed-manager]
2026-02-08 03:19:32.550774 | orchestrator |
2026-02-08 03:19:32.550786 | orchestrator | TASK [osism.services.openstackclient : Remove ospurge wrapper script] **********
2026-02-08 03:19:32.550798 | orchestrator | Sunday 08 February 2026 03:19:28 +0000 (0:00:00.960) 0:00:39.170 *******
2026-02-08 03:19:32.550810 | orchestrator | ok: [testbed-manager]
2026-02-08 03:19:32.550823 | orchestrator |
2026-02-08 03:19:32.550835 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Restart openstackclient service] ***
2026-02-08 03:19:32.550848 | orchestrator | Sunday 08 February 2026 03:19:28 +0000 (0:00:00.651) 0:00:39.821 *******
2026-02-08 03:19:32.550861 | orchestrator | changed: [testbed-manager]
2026-02-08 03:19:32.550873 | orchestrator |
2026-02-08 03:19:32.550886 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Ensure that all containers are up] ***
2026-02-08 03:19:32.550899 | orchestrator | Sunday 08 February 2026 03:19:30 +0000 (0:00:01.580) 0:00:41.402 *******
2026-02-08 03:19:32.550911 | orchestrator | changed: [testbed-manager]
2026-02-08 03:19:32.550924 | orchestrator |
2026-02-08 03:19:32.550937 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Wait for an healthy service] ***
2026-02-08 03:19:32.550950 | orchestrator | Sunday 08 February 2026 03:19:31 +0000 (0:00:00.750) 0:00:42.152 *******
2026-02-08 03:19:32.550962 | orchestrator | changed: [testbed-manager]
2026-02-08 03:19:32.550974 | orchestrator |
2026-02-08 03:19:32.550987 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Copy bash completion script] ***
2026-02-08 03:19:32.550999 | orchestrator | Sunday 08 February 2026 03:19:31 +0000 (0:00:00.593) 0:00:42.746 *******
2026-02-08 03:19:32.551010 | orchestrator | ok: [testbed-manager]
2026-02-08 03:19:32.551020 | orchestrator |
2026-02-08 03:19:32.551031 | orchestrator | PLAY RECAP *********************************************************************
2026-02-08 03:19:32.551042 | orchestrator | testbed-manager : ok=10  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-08 03:19:32.551054 | orchestrator |
2026-02-08 03:19:32.551065 | orchestrator |
2026-02-08 03:19:32.551075 | orchestrator | TASKS RECAP ********************************************************************
2026-02-08 03:19:32.551086 | orchestrator | Sunday 08 February 2026 03:19:32 +0000 (0:00:00.436) 0:00:43.182 *******
2026-02-08 03:19:32.551096 | orchestrator | ===============================================================================
2026-02-08 03:19:32.551107 | orchestrator | osism.services.openstackclient : Manage openstackclient service -------- 34.97s
2026-02-08 03:19:32.551117 | orchestrator | osism.services.openstackclient : Restart openstackclient service -------- 1.58s
2026-02-08 03:19:32.551128 | orchestrator | osism.services.openstackclient : Copy docker-compose.yml file ----------- 1.49s
2026-02-08 03:19:32.551165 | orchestrator | osism.services.openstackclient : Create required directories ------------ 1.28s
2026-02-08 03:19:32.551184 | orchestrator | osism.services.openstackclient : Copy openstack wrapper script ---------- 0.96s
2026-02-08 03:19:32.551201 | orchestrator | osism.services.openstackclient : Ensure that all containers are up ------ 0.75s
2026-02-08 03:19:32.551213 | orchestrator | osism.services.openstackclient : Remove ospurge wrapper script ---------- 0.65s
2026-02-08 03:19:32.551223 | orchestrator | osism.services.openstackclient : Wait for an healthy service ------------ 0.59s
2026-02-08 03:19:32.551242 | orchestrator | osism.services.openstackclient : Copy bash completion script ------------ 0.44s
2026-02-08 03:19:32.551253 | orchestrator | osism.services.openstackclient : Include tasks -------------------------- 0.24s
2026-02-08 03:19:35.003467 | orchestrator | 2026-02-08 03:19:35 | INFO  | Task 12cf118d-823d-4205-a701-0de70a77951f (common) was prepared for execution.
2026-02-08 03:19:35.003550 | orchestrator | 2026-02-08 03:19:35 | INFO  | It takes a moment until task 12cf118d-823d-4205-a701-0de70a77951f (common) has been started and output is visible here.
2026-02-08 03:19:47.575406 | orchestrator | 2026-02-08 03:19:47.575549 | orchestrator | PLAY [Apply role common] ******************************************************* 2026-02-08 03:19:47.575576 | orchestrator | 2026-02-08 03:19:47.575596 | orchestrator | TASK [common : include_tasks] ************************************************** 2026-02-08 03:19:47.575614 | orchestrator | Sunday 08 February 2026 03:19:39 +0000 (0:00:00.291) 0:00:00.291 ******* 2026-02-08 03:19:47.575634 | orchestrator | included: /ansible/roles/common/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-02-08 03:19:47.575656 | orchestrator | 2026-02-08 03:19:47.575675 | orchestrator | TASK [common : Ensuring config directories exist] ****************************** 2026-02-08 03:19:47.575694 | orchestrator | Sunday 08 February 2026 03:19:40 +0000 (0:00:01.345) 0:00:01.637 ******* 2026-02-08 03:19:47.575711 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'cron'}, 'cron']) 2026-02-08 03:19:47.575729 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'cron'}, 'cron']) 2026-02-08 03:19:47.575750 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'cron'}, 'cron']) 2026-02-08 03:19:47.575770 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-02-08 03:19:47.575788 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'cron'}, 'cron']) 2026-02-08 03:19:47.575806 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-02-08 03:19:47.575823 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'cron'}, 'cron']) 2026-02-08 03:19:47.575842 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'cron'}, 'cron']) 2026-02-08 03:19:47.575860 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'cron'}, 'cron']) 
2026-02-08 03:19:47.575878 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-02-08 03:19:47.575896 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-02-08 03:19:47.575915 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-02-08 03:19:47.575933 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-02-08 03:19:47.575953 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-02-08 03:19:47.575971 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-02-08 03:19:47.576015 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-02-08 03:19:47.576037 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-02-08 03:19:47.576057 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-02-08 03:19:47.576076 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-02-08 03:19:47.576094 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-02-08 03:19:47.576113 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-02-08 03:19:47.576131 | orchestrator |
2026-02-08 03:19:47.576185 | orchestrator | TASK [common : include_tasks] **************************************************
2026-02-08 03:19:47.576239 | orchestrator | Sunday 08 February 2026 03:19:43 +0000 (0:00:02.696) 0:00:04.334 *******
2026-02-08 03:19:47.576262 | orchestrator | included: /ansible/roles/common/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-02-08 03:19:47.576282 | orchestrator |
2026-02-08 03:19:47.576300 | orchestrator | TASK [service-cert-copy : common | Copying over extra CA certificates] *********
2026-02-08 03:19:47.576319 | orchestrator | Sunday 08 February 2026 03:19:44 +0000 (0:00:01.411) 0:00:05.745 *******
2026-02-08 03:19:47.576350 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-08 03:19:47.576375 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-08 03:19:47.576432 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-08 03:19:47.576454 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-08 03:19:47.576473 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-08 03:19:47.576492 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-08 03:19:47.576508 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-08 03:19:47.576531 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-08 03:19:47.576543 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-08 03:19:47.576577 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-08 03:19:48.600274 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-08 03:19:48.600595 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-08 03:19:48.600638 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-08 03:19:48.600690 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-08 03:19:48.600715 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-08 03:19:48.600752 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-08 03:19:48.600782 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-08 03:19:48.600844 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-08 03:19:48.600868 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-08 03:19:48.600890 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-08 03:19:48.600911 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-08 03:19:48.600945 | orchestrator |
2026-02-08 03:19:48.600968 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS certificate] ***
2026-02-08 03:19:48.600988 | orchestrator | Sunday 08 February 2026 03:19:48 +0000 (0:00:03.465) 0:00:09.210 *******
2026-02-08 03:19:48.601013 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-08 03:19:48.601034 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-08 03:19:48.601055 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-08 03:19:48.601075 | orchestrator | skipping: [testbed-manager]
2026-02-08 03:19:48.601097 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-08 03:19:48.601168 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-08 03:19:49.228703 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-08 03:19:49.228789 | orchestrator | skipping: [testbed-node-0]
2026-02-08 03:19:49.228802 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-08 03:19:49.228865 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-08 03:19:49.228874 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-08 03:19:49.228881 | orchestrator | skipping: [testbed-node-1]
2026-02-08 03:19:49.228888 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-08 03:19:49.228895 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-08 03:19:49.228911 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-08 03:19:49.228918 | orchestrator | skipping: [testbed-node-2]
2026-02-08 03:19:49.228938 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-08 03:19:49.228946 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-08 03:19:49.228958 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-08 03:19:49.228965 | orchestrator | skipping: [testbed-node-3]
2026-02-08 03:19:49.228972 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-08 03:19:49.228979 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-08 03:19:49.228986 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-08 03:19:49.228992 | orchestrator | skipping: [testbed-node-4]
2026-02-08 03:19:49.228999 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-08 03:19:49.229010 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-08 03:19:50.111397 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-08 03:19:50.111541 | orchestrator | skipping: [testbed-node-5]
2026-02-08 03:19:50.111559 | orchestrator |
2026-02-08 03:19:50.111569 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS key] ******
2026-02-08 03:19:50.111580 | orchestrator | Sunday 08 February 2026 03:19:49 +0000 (0:00:00.974) 0:00:10.184 *******
2026-02-08 03:19:50.111590 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-08 03:19:50.111600 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-08 03:19:50.111609 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-08 03:19:50.111618 | orchestrator | skipping: [testbed-manager]
2026-02-08 03:19:50.111627 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-08 03:19:50.111654 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-08 03:19:50.111664 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-08 03:19:50.111673 | orchestrator | skipping: [testbed-node-0]
2026-02-08 03:19:50.111717 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-08 03:19:50.111728 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-08 03:19:50.111737 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-08 03:19:50.111746 | orchestrator | skipping: [testbed-node-1]
2026-02-08 03:19:50.111755 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-08 03:19:50.111765 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-08 03:19:50.111775 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-08 03:19:50.111783 | orchestrator | skipping: [testbed-node-2] 2026-02-08 03:19:50.111797 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-08 03:19:50.111829 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-08 03:19:55.259318 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-08 03:19:55.259452 | orchestrator | skipping: [testbed-node-3] 2026-02-08 03:19:55.259473 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-08 03:19:55.259488 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-08 03:19:55.259501 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-08 03:19:55.259513 | orchestrator | skipping: [testbed-node-4] 2026-02-08 03:19:55.259525 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-08 03:19:55.259538 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-08 03:19:55.259577 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-08 03:19:55.259589 | orchestrator | skipping: [testbed-node-5] 2026-02-08 03:19:55.259600 | orchestrator | 2026-02-08 
03:19:55.259612 | orchestrator | TASK [common : Copying over /run subdirectories conf] **************************
2026-02-08 03:19:55.259624 | orchestrator | Sunday 08 February 2026 03:19:51 +0000 (0:00:01.788) 0:00:11.973 *******
2026-02-08 03:19:55.259635 | orchestrator | skipping: [testbed-manager]
2026-02-08 03:19:55.259646 | orchestrator | skipping: [testbed-node-0]
2026-02-08 03:19:55.259657 | orchestrator | skipping: [testbed-node-1]
2026-02-08 03:19:55.259668 | orchestrator | skipping: [testbed-node-2]
2026-02-08 03:19:55.259698 | orchestrator | skipping: [testbed-node-3]
2026-02-08 03:19:55.259710 | orchestrator | skipping: [testbed-node-4]
2026-02-08 03:19:55.259720 | orchestrator | skipping: [testbed-node-5]
2026-02-08 03:19:55.259731 | orchestrator |
2026-02-08 03:19:55.259742 | orchestrator | TASK [common : Restart systemd-tmpfiles] ***************************************
2026-02-08 03:19:55.259753 | orchestrator | Sunday 08 February 2026 03:19:51 +0000 (0:00:00.702) 0:00:12.676 *******
2026-02-08 03:19:55.259767 | orchestrator | skipping: [testbed-manager]
2026-02-08 03:19:55.259780 | orchestrator | skipping: [testbed-node-0]
2026-02-08 03:19:55.259792 | orchestrator | skipping: [testbed-node-1]
2026-02-08 03:19:55.259804 | orchestrator | skipping: [testbed-node-2]
2026-02-08 03:19:55.259816 | orchestrator | skipping: [testbed-node-3]
2026-02-08 03:19:55.259829 | orchestrator | skipping: [testbed-node-4]
2026-02-08 03:19:55.259842 | orchestrator | skipping: [testbed-node-5]
2026-02-08 03:19:55.259855 | orchestrator |
2026-02-08 03:19:55.259868 | orchestrator | TASK [common : Copying over config.json files for services] ********************
2026-02-08 03:19:55.259882 | orchestrator | Sunday 08 February 2026 03:19:52 +0000 (0:00:00.860) 0:00:13.536 *******
2026-02-08 03:19:55.259902 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image':
'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-08 03:19:55.259923 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-08 03:19:55.259962 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-08 03:19:55.259982 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-08 03:19:55.260034 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-08 03:19:55.260059 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-08 03:19:55.260101 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-08 03:19:57.936292 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-08 03:19:57.936377 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-08 03:19:57.936389 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-08 03:19:57.936417 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-08 03:19:57.936437 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-08 03:19:57.936444 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-08 03:19:57.936473 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-08 03:19:57.936482 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-08 03:19:57.936491 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-08 03:19:57.936498 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-08 03:19:57.936504 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 
'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-08 03:19:57.936516 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-08 03:19:57.936523 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-08 03:19:57.936530 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-08 03:19:57.936537 | orchestrator | 2026-02-08 03:19:57.936546 | orchestrator | TASK [common : Find custom fluentd input config files] ************************* 2026-02-08 03:19:57.936554 | orchestrator | Sunday 08 February 2026 03:19:55 +0000 
(0:00:03.381) 0:00:16.917 *******
2026-02-08 03:19:57.936561 | orchestrator | [WARNING]: Skipped
2026-02-08 03:19:57.936568 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' path due
2026-02-08 03:19:57.936575 | orchestrator | to this access issue:
2026-02-08 03:19:57.936582 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' is not a
2026-02-08 03:19:57.936589 | orchestrator | directory
2026-02-08 03:19:57.936596 | orchestrator | ok: [testbed-manager -> localhost]
2026-02-08 03:19:57.936604 | orchestrator |
2026-02-08 03:19:57.936611 | orchestrator | TASK [common : Find custom fluentd filter config files] ************************
2026-02-08 03:19:57.936618 | orchestrator | Sunday 08 February 2026 03:19:56 +0000 (0:00:01.017) 0:00:17.935 *******
2026-02-08 03:19:57.936624 | orchestrator | [WARNING]: Skipped
2026-02-08 03:19:57.936635 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' path due
2026-02-08 03:20:07.709786 | orchestrator | to this access issue:
2026-02-08 03:20:07.709865 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' is not a
2026-02-08 03:20:07.709872 | orchestrator | directory
2026-02-08 03:20:07.709878 | orchestrator | ok: [testbed-manager -> localhost]
2026-02-08 03:20:07.709883 | orchestrator |
2026-02-08 03:20:07.709888 | orchestrator | TASK [common : Find custom fluentd format config files] ************************
2026-02-08 03:20:07.709893 | orchestrator | Sunday 08 February 2026 03:19:58 +0000 (0:00:01.268) 0:00:19.204 *******
2026-02-08 03:20:07.709897 | orchestrator | [WARNING]: Skipped
2026-02-08 03:20:07.709901 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' path due
2026-02-08 03:20:07.709905 | orchestrator | to this access issue:
2026-02-08 03:20:07.709909 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' is not a
2026-02-08 03:20:07.709913 | orchestrator | directory
2026-02-08 03:20:07.709917 | orchestrator | ok: [testbed-manager -> localhost]
2026-02-08 03:20:07.709920 | orchestrator |
2026-02-08 03:20:07.709924 | orchestrator | TASK [common : Find custom fluentd output config files] ************************
2026-02-08 03:20:07.709944 | orchestrator | Sunday 08 February 2026 03:19:59 +0000 (0:00:00.855) 0:00:20.060 *******
2026-02-08 03:20:07.709948 | orchestrator | [WARNING]: Skipped
2026-02-08 03:20:07.709952 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' path due
2026-02-08 03:20:07.709955 | orchestrator | to this access issue:
2026-02-08 03:20:07.709959 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' is not a
2026-02-08 03:20:07.709963 | orchestrator | directory
2026-02-08 03:20:07.709967 | orchestrator | ok: [testbed-manager -> localhost]
2026-02-08 03:20:07.709971 | orchestrator |
2026-02-08 03:20:07.709974 | orchestrator | TASK [common : Copying over fluentd.conf] **************************************
2026-02-08 03:20:07.709978 | orchestrator | Sunday 08 February 2026 03:19:59 +0000 (0:00:00.893) 0:00:20.954 *******
2026-02-08 03:20:07.709982 | orchestrator | changed: [testbed-manager]
2026-02-08 03:20:07.709986 | orchestrator | changed: [testbed-node-0]
2026-02-08 03:20:07.709990 | orchestrator | changed: [testbed-node-1]
2026-02-08 03:20:07.709994 | orchestrator | changed: [testbed-node-2]
2026-02-08 03:20:07.709997 | orchestrator | changed: [testbed-node-3]
2026-02-08 03:20:07.710001 | orchestrator | changed: [testbed-node-4]
2026-02-08 03:20:07.710005 | orchestrator | changed: [testbed-node-5]
2026-02-08 03:20:07.710009 | orchestrator |
2026-02-08 03:20:07.710049 | orchestrator | TASK [common : Copying over cron logrotate config file] ************************
2026-02-08 03:20:07.710054 | orchestrator | Sunday 08 February 2026 03:20:02 +0000 (0:00:02.495) 0:00:23.449 *******
2026-02-08 03:20:07.710057 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2026-02-08 03:20:07.710070 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2026-02-08 03:20:07.710074 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2026-02-08 03:20:07.710089 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2026-02-08 03:20:07.710094 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2026-02-08 03:20:07.710098 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2026-02-08 03:20:07.710101 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2026-02-08 03:20:07.710105 | orchestrator |
2026-02-08 03:20:07.710109 | orchestrator | TASK [common : Ensure RabbitMQ Erlang cookie exists] ***************************
2026-02-08 03:20:07.710113 | orchestrator | Sunday 08 February 2026 03:20:04 +0000 (0:00:02.001) 0:00:25.450 *******
2026-02-08 03:20:07.710117 | orchestrator | changed: [testbed-manager]
2026-02-08 03:20:07.710121 | orchestrator | changed: [testbed-node-1]
2026-02-08 03:20:07.710125 | orchestrator | changed: [testbed-node-0]
2026-02-08 03:20:07.710129 | orchestrator | changed: [testbed-node-2]
2026-02-08 03:20:07.710152 | orchestrator | changed: [testbed-node-3]
2026-02-08 03:20:07.710156 | orchestrator | changed: [testbed-node-4]
2026-02-08 03:20:07.710160 | orchestrator | changed: [testbed-node-5]
2026-02-08 03:20:07.710163 | orchestrator |
2026-02-08 03:20:07.710170 | orchestrator | TASK [common : Ensuring config directories have correct owner and permission] ***
2026-02-08 03:20:07.710174 | orchestrator | Sunday 08
February 2026 03:20:06 +0000 (0:00:01.945) 0:00:27.396 ******* 2026-02-08 03:20:07.710179 | orchestrator | ok: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-08 03:20:07.710200 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-08 03:20:07.710206 | orchestrator | ok: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-08 03:20:07.710210 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 
'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-08 03:20:07.710214 | orchestrator | ok: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-08 03:20:07.710218 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-08 03:20:07.710224 | orchestrator | ok: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 
'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-08 03:20:07.710228 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-08 03:20:07.710241 | orchestrator | ok: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-08 03:20:07.710250 | orchestrator | ok: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-08 03:20:13.667005 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-08 03:20:13.667117 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-08 03:20:13.667222 | orchestrator | ok: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-08 03:20:13.667258 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-08 03:20:13.667294 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-08 03:20:13.667304 | orchestrator | ok: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-08 03:20:13.667334 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-08 03:20:13.667361 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-08 03:20:13.667370 | orchestrator | ok: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-08 03:20:13.667378 | orchestrator | ok: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-08 03:20:13.667386 | orchestrator | ok: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-08 03:20:13.667395 | orchestrator |
2026-02-08 03:20:13.667404 | orchestrator | TASK [common : Copy rabbitmq-env.conf to kolla toolbox] ************************
2026-02-08 03:20:13.667413 | orchestrator | Sunday 08 February 2026 03:20:07 +0000 (0:00:01.503) 0:00:28.900 *******
2026-02-08 03:20:13.667422 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2026-02-08 03:20:13.667430 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2026-02-08 03:20:13.667438 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2026-02-08 03:20:13.667446 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2026-02-08 03:20:13.667454 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2026-02-08 03:20:13.667461 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2026-02-08 03:20:13.667469 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2026-02-08 03:20:13.667493 | orchestrator |
2026-02-08 03:20:13.667501 | orchestrator | TASK [common : Copy rabbitmq erl_inetrc to kolla toolbox] **********************
2026-02-08 03:20:13.667509 | orchestrator | Sunday 08 February 2026 03:20:09 +0000 (0:00:01.962) 0:00:30.863 *******
2026-02-08 03:20:13.667519 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2026-02-08 03:20:13.667529 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2026-02-08 03:20:13.667538 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2026-02-08 03:20:13.667548 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2026-02-08 03:20:13.667557 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2026-02-08 03:20:13.667566 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2026-02-08 03:20:13.667586 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2026-02-08 03:20:13.667596 | orchestrator |
2026-02-08 03:20:13.667606 | orchestrator | TASK [common : Check common containers] ****************************************
2026-02-08 03:20:13.667615 | orchestrator | Sunday 08 February 2026 03:20:11 +0000 (0:00:01.741) 0:00:32.605 *******
2026-02-08 03:20:13.667633 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-08 03:20:13.667650 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-08 03:20:14.297671 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-08 03:20:14.297764 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-08 03:20:14.297776 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-08 03:20:14.297808 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-08 03:20:14.297831 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-08 03:20:14.297840 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-08 03:20:14.297850 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-08 03:20:14.297875 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-08 03:20:14.297884 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-08 03:20:14.297892 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-08 03:20:14.297908 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-08 03:20:14.297916 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-08 03:20:14.297924 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-08 03:20:14.297933 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-08 03:20:14.297948 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-08 03:21:36.488025 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-08 03:21:36.488174 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-08 03:21:36.488194 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-08 03:21:36.488231 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-08 03:21:36.488243 | orchestrator |
2026-02-08 03:21:36.488255 | orchestrator | TASK [common : Creating log volume] ********************************************
2026-02-08 03:21:36.488266 | orchestrator | Sunday 08 February 2026 03:20:14 +0000 (0:00:02.651) 0:00:35.257 *******
2026-02-08 03:21:36.488276 | orchestrator | changed: [testbed-node-0]
2026-02-08 03:21:36.488287 | orchestrator | changed: [testbed-manager]
2026-02-08 03:21:36.488297 | orchestrator | changed: [testbed-node-1]
2026-02-08 03:21:36.488321 | orchestrator | changed: [testbed-node-2]
2026-02-08 03:21:36.488350 | orchestrator | changed: [testbed-node-3]
2026-02-08 03:21:36.488360 | orchestrator | changed: [testbed-node-4]
2026-02-08 03:21:36.488369 | orchestrator | changed: [testbed-node-5]
2026-02-08 03:21:36.488379 | orchestrator |
2026-02-08 03:21:36.488389 | orchestrator | TASK [common : Link kolla_logs volume to /var/log/kolla] ***********************
2026-02-08 03:21:36.488399 | orchestrator | Sunday 08 February 2026 03:20:15 +0000 (0:00:01.413) 0:00:36.670 *******
2026-02-08 03:21:36.488409 | orchestrator | changed: [testbed-manager]
2026-02-08 03:21:36.488418 | orchestrator | changed: [testbed-node-0]
2026-02-08 03:21:36.488428 | orchestrator | changed: [testbed-node-1]
2026-02-08 03:21:36.488437 | orchestrator | changed: [testbed-node-2]
2026-02-08 03:21:36.488447 | orchestrator | changed: [testbed-node-3]
2026-02-08 03:21:36.488456 | orchestrator | changed: [testbed-node-4]
2026-02-08 03:21:36.488465 | orchestrator | changed: [testbed-node-5]
2026-02-08 03:21:36.488474 | orchestrator |
2026-02-08 03:21:36.488484 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-02-08 03:21:36.488494 | orchestrator | Sunday 08 February 2026 03:20:16 +0000 (0:00:01.019) 0:00:37.690 *******
2026-02-08 03:21:36.488503 | orchestrator |
2026-02-08 03:21:36.488513 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-02-08 03:21:36.488522 | orchestrator | Sunday 08 February 2026 03:20:16 +0000 (0:00:00.111) 0:00:37.801 *******
2026-02-08 03:21:36.488532 | orchestrator |
2026-02-08 03:21:36.488541 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-02-08 03:21:36.488562 | orchestrator | Sunday 08 February 2026 03:20:16 +0000 (0:00:00.077) 0:00:37.879 *******
2026-02-08 03:21:36.488572 | orchestrator |
2026-02-08 03:21:36.488581 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-02-08 03:21:36.488591 | orchestrator | Sunday 08 February 2026 03:20:16 +0000 (0:00:00.066) 0:00:37.946 *******
2026-02-08 03:21:36.488600 | orchestrator |
2026-02-08 03:21:36.488610 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-02-08 03:21:36.488619 | orchestrator | Sunday 08 February 2026 03:20:17 +0000 (0:00:00.235) 0:00:38.181 *******
2026-02-08 03:21:36.488629 | orchestrator |
2026-02-08 03:21:36.488638 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-02-08 03:21:36.488648 | orchestrator | Sunday 08 February 2026 03:20:17 +0000 (0:00:00.066) 0:00:38.248 *******
2026-02-08 03:21:36.488657 | orchestrator |
2026-02-08 03:21:36.488680 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-02-08 03:21:36.488690 | orchestrator | Sunday 08 February 2026 03:20:17 +0000 (0:00:00.080) 0:00:38.328 *******
2026-02-08 03:21:36.488700 | orchestrator |
2026-02-08 03:21:36.488717 | orchestrator | RUNNING HANDLER [common : Restart fluentd container] ***************************
2026-02-08 03:21:36.488727 | orchestrator | Sunday 08 February 2026 03:20:17 +0000 (0:00:00.090) 0:00:38.419 *******
2026-02-08 03:21:36.488736 | orchestrator | changed: [testbed-node-0]
2026-02-08 03:21:36.488746 | orchestrator | changed: [testbed-manager]
2026-02-08 03:21:36.488756 | orchestrator | changed: [testbed-node-1]
2026-02-08 03:21:36.488765 | orchestrator | changed: [testbed-node-4]
2026-02-08 03:21:36.488775 | orchestrator | changed: [testbed-node-3]
2026-02-08 03:21:36.488803 | orchestrator | changed: [testbed-node-2]
2026-02-08 03:21:36.488821 | orchestrator | changed: [testbed-node-5]
2026-02-08 03:21:36.488837 | orchestrator |
2026-02-08 03:21:36.488853 | orchestrator | RUNNING HANDLER [common : Restart kolla-toolbox container] *********************
2026-02-08 03:21:36.488868 | orchestrator | Sunday 08 February 2026 03:20:55 +0000 (0:00:37.557) 0:01:15.976 *******
2026-02-08 03:21:36.488884 | orchestrator | changed: [testbed-node-0]
2026-02-08 03:21:36.488899 | orchestrator | changed: [testbed-node-1]
2026-02-08 03:21:36.488914 | orchestrator | changed: [testbed-manager]
2026-02-08 03:21:36.488929 | orchestrator | changed: [testbed-node-4]
2026-02-08 03:21:36.488944 | orchestrator | changed: [testbed-node-5]
2026-02-08 03:21:36.488959 | orchestrator | changed: [testbed-node-2]
2026-02-08 03:21:36.488975 | orchestrator | changed: [testbed-node-3]
2026-02-08 03:21:36.488990 | orchestrator |
2026-02-08 03:21:36.489006 | orchestrator | RUNNING HANDLER [common : Initializing toolbox container using normal user] ****
2026-02-08 03:21:36.489023 | orchestrator | Sunday 08 February 2026 03:21:25 +0000 (0:00:30.440) 0:01:46.417 *******
2026-02-08 03:21:36.489041 | orchestrator | ok: [testbed-node-0]
2026-02-08 03:21:36.489059 | orchestrator | ok: [testbed-node-1]
2026-02-08 03:21:36.489074 | orchestrator | ok: [testbed-node-2]
2026-02-08 03:21:36.489084 | orchestrator | ok: [testbed-manager]
2026-02-08 03:21:36.489093 | orchestrator | ok: [testbed-node-3]
2026-02-08 03:21:36.489102 | orchestrator | ok: [testbed-node-4]
2026-02-08 03:21:36.489112 | orchestrator | ok: [testbed-node-5]
2026-02-08 03:21:36.489150 | orchestrator |
2026-02-08 03:21:36.489170 | orchestrator | RUNNING HANDLER [common : Restart cron container] ******************************
2026-02-08 03:21:36.489186 | orchestrator | Sunday 08 February 2026 03:21:27 +0000 (0:00:02.095) 0:01:48.512 *******
2026-02-08 03:21:36.489200 | orchestrator | changed: [testbed-manager]
2026-02-08 03:21:36.489209 | orchestrator | changed: [testbed-node-2]
2026-02-08 03:21:36.489219 | orchestrator | changed: [testbed-node-5]
2026-02-08 03:21:36.489229 | orchestrator | changed: [testbed-node-0]
2026-02-08 03:21:36.489238 | orchestrator | changed: [testbed-node-1]
2026-02-08 03:21:36.489247 | orchestrator | changed: [testbed-node-3]
2026-02-08 03:21:36.489257 | orchestrator | changed: [testbed-node-4]
2026-02-08 03:21:36.489266 | orchestrator |
2026-02-08 03:21:36.489276 | orchestrator | PLAY RECAP *********************************************************************
2026-02-08 03:21:36.489287 | orchestrator | testbed-manager : ok=22  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-02-08 03:21:36.489298 | orchestrator | testbed-node-0 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-02-08 03:21:36.489308 | orchestrator | testbed-node-1 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-02-08 03:21:36.489318 | orchestrator | testbed-node-2 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-02-08 03:21:36.489328 | orchestrator | testbed-node-3 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-02-08 03:21:36.489337 | orchestrator | testbed-node-4 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-02-08 03:21:36.489355 | orchestrator | testbed-node-5 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-02-08 03:21:36.489365 | orchestrator |
2026-02-08 03:21:36.489374 | orchestrator |
2026-02-08 03:21:36.489384 | orchestrator | TASKS RECAP ********************************************************************
2026-02-08 03:21:36.489395 | orchestrator | Sunday 08 February 2026 03:21:36 +0000 (0:00:08.902) 0:01:57.415 *******
2026-02-08 03:21:36.489406 | orchestrator | ===============================================================================
2026-02-08 03:21:36.489427 | orchestrator | common : Restart fluentd container ------------------------------------- 37.56s
2026-02-08 03:21:36.489438 | orchestrator | common : Restart kolla-toolbox container ------------------------------- 30.44s
2026-02-08 03:21:36.489449 | orchestrator | common : Restart cron container ----------------------------------------- 8.90s
2026-02-08 03:21:36.489460 | orchestrator | service-cert-copy : common | Copying over extra CA certificates --------- 3.47s
2026-02-08 03:21:36.489471 | orchestrator | common : Copying over config.json files for services -------------------- 3.38s
2026-02-08 03:21:36.489482 | orchestrator | common : Ensuring config directories exist ------------------------------ 2.70s
2026-02-08 03:21:36.489492 | orchestrator | common : Check common containers ---------------------------------------- 2.65s
2026-02-08 03:21:36.489503 | orchestrator | common : Copying over fluentd.conf -------------------------------------- 2.50s
2026-02-08 03:21:36.489514 | orchestrator | common : Initializing toolbox container using normal user --------------- 2.10s
2026-02-08 03:21:36.489524 | orchestrator | common : Copying over cron logrotate config file ------------------------ 2.00s
2026-02-08 03:21:36.489535 | orchestrator | common : Copy rabbitmq-env.conf to kolla toolbox ------------------------ 1.96s
2026-02-08 03:21:36.489546 | orchestrator | common : Ensure RabbitMQ Erlang cookie exists --------------------------- 1.95s
2026-02-08 03:21:36.489557 | orchestrator | service-cert-copy : common | Copying over backend internal TLS key ------ 1.79s
2026-02-08 03:21:36.489567 | orchestrator | common : Copy rabbitmq erl_inetrc to kolla toolbox ---------------------- 1.74s
2026-02-08 03:21:36.489582 | orchestrator | common : Ensuring config directories have correct owner and permission --- 1.50s
2026-02-08 03:21:36.489600 | orchestrator | common : Creating log volume -------------------------------------------- 1.41s
2026-02-08 03:21:36.489629 | orchestrator | common : include_tasks -------------------------------------------------- 1.41s
2026-02-08 03:21:36.922153 | orchestrator | common : include_tasks -------------------------------------------------- 1.35s
2026-02-08 03:21:36.922274 | orchestrator | common : Find custom fluentd filter config files ------------------------ 1.27s
2026-02-08 03:21:36.922300 | orchestrator | common : Link kolla_logs volume to /var/log/kolla ----------------------- 1.02s
2026-02-08 03:21:39.651257 | orchestrator | 2026-02-08 03:21:39 | INFO  | Task 06747b89-4537-4649-a73c-f77bd343de09 (loadbalancer) was prepared for execution.
2026-02-08 03:21:39.651346 | orchestrator | 2026-02-08 03:21:39 | INFO  | It takes a moment until task 06747b89-4537-4649-a73c-f77bd343de09 (loadbalancer) has been started and output is visible here.
2026-02-08 03:21:55.193767 | orchestrator |
2026-02-08 03:21:55.193862 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-02-08 03:21:55.193873 | orchestrator |
2026-02-08 03:21:55.193881 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-02-08 03:21:55.193889 | orchestrator | Sunday 08 February 2026 03:21:44 +0000 (0:00:00.302) 0:00:00.302 *******
2026-02-08 03:21:55.193896 | orchestrator | ok: [testbed-node-0]
2026-02-08 03:21:55.193904 | orchestrator | ok: [testbed-node-1]
2026-02-08 03:21:55.193911 | orchestrator | ok: [testbed-node-2]
2026-02-08 03:21:55.193918 | orchestrator |
2026-02-08 03:21:55.193924 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-02-08 03:21:55.193931 | orchestrator | Sunday 08 February 2026 03:21:44 +0000 (0:00:00.314) 0:00:00.617 *******
2026-02-08 03:21:55.193939 | orchestrator | ok: [testbed-node-0] => (item=enable_loadbalancer_True)
2026-02-08 03:21:55.193945 | orchestrator | ok: [testbed-node-1] => (item=enable_loadbalancer_True)
2026-02-08 03:21:55.193982 | orchestrator | ok: [testbed-node-2] => (item=enable_loadbalancer_True)
2026-02-08 03:21:55.193990 | orchestrator |
2026-02-08 03:21:55.193996 | orchestrator | PLAY [Apply role loadbalancer] *************************************************
2026-02-08 03:21:55.194002 | orchestrator |
2026-02-08 03:21:55.194009 | orchestrator | TASK [loadbalancer : include_tasks] ********************************************
2026-02-08 03:21:55.194062 | orchestrator | Sunday 08 February 2026 03:21:45 +0000 (0:00:00.469) 0:00:01.086 *******
2026-02-08 03:21:55.194071 | orchestrator | included: /ansible/roles/loadbalancer/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-08 03:21:55.194079 | orchestrator |
2026-02-08 03:21:55.194086 | orchestrator | TASK [loadbalancer : Check IPv6 support] ***************************************
2026-02-08 03:21:55.194092 | orchestrator | Sunday 08 February 2026 03:21:45 +0000 (0:00:00.601) 0:00:01.687 *******
2026-02-08 03:21:55.194099 | orchestrator | ok: [testbed-node-0]
2026-02-08 03:21:55.194105 | orchestrator | ok: [testbed-node-1]
2026-02-08 03:21:55.194111 | orchestrator | ok: [testbed-node-2]
2026-02-08 03:21:55.194159 | orchestrator |
2026-02-08 03:21:55.194167 | orchestrator | TASK [Setting sysctl values] ***************************************************
2026-02-08 03:21:55.194187 | orchestrator | Sunday 08 February 2026 03:21:46 +0000 (0:00:00.600) 0:00:02.288 *******
2026-02-08 03:21:55.194193 | orchestrator | included: sysctl for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-08 03:21:55.194200 | orchestrator |
2026-02-08 03:21:55.194206 | orchestrator | TASK [sysctl : Check IPv6 support] *********************************************
2026-02-08 03:21:55.194212 | orchestrator | Sunday 08 February 2026 03:21:47 +0000 (0:00:00.573) 0:00:03.066 *******
2026-02-08 03:21:55.194219 | orchestrator | ok: [testbed-node-0]
2026-02-08 03:21:55.194225 | orchestrator | ok: [testbed-node-1]
2026-02-08 03:21:55.194231 | orchestrator | ok: [testbed-node-2]
2026-02-08 03:21:55.194237 | orchestrator |
2026-02-08 03:21:55.194243 | orchestrator | TASK [sysctl : Setting sysctl values] ******************************************
2026-02-08 03:21:55.194249 | orchestrator | Sunday 08 February 2026 03:21:47 +0000 (0:00:00.778) 0:00:03.640 *******
2026-02-08 03:21:55.194255 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2026-02-08 03:21:55.194261 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2026-02-08 03:21:55.194266 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2026-02-08 03:21:55.194273 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2026-02-08 03:21:55.194279 | orchestrator | ok: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2026-02-08 03:21:55.194287 | orchestrator | ok: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2026-02-08 03:21:55.194293 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2026-02-08 03:21:55.194298 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2026-02-08 03:21:55.194306 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2026-02-08 03:21:55.194313 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2026-02-08 03:21:55.194320 | orchestrator | ok: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2026-02-08 03:21:55.194327 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2026-02-08 03:21:55.194333 | orchestrator |
2026-02-08 03:21:55.194340 | orchestrator | TASK [module-load : Load modules] **********************************************
2026-02-08 03:21:55.194345 | orchestrator | Sunday 08 February 2026 03:21:50 +0000 (0:00:02.999) 0:00:06.639 *******
2026-02-08 03:21:55.194349 | orchestrator | changed: [testbed-node-1] => (item=ip_vs)
2026-02-08 03:21:55.194354 | orchestrator | changed: [testbed-node-0] => (item=ip_vs)
2026-02-08 03:21:55.194365 | orchestrator | changed: [testbed-node-2] => (item=ip_vs)
2026-02-08 03:21:55.194369 | orchestrator |
2026-02-08 03:21:55.194373 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************
2026-02-08 03:21:55.194378 | orchestrator | Sunday 08 February 2026 03:21:51 +0000 (0:00:01.282) 0:00:07.362 *******
2026-02-08 03:21:55.194382 | orchestrator | changed: [testbed-node-1] => (item=ip_vs)
2026-02-08 03:21:55.194387 | orchestrator | changed: [testbed-node-0] => (item=ip_vs)
2026-02-08 03:21:55.194391 | orchestrator | changed: [testbed-node-2] => (item=ip_vs)
2026-02-08 03:21:55.194396 | orchestrator |
2026-02-08 03:21:55.194401 | orchestrator | TASK [module-load : Drop module persistence] ***********************************
2026-02-08 03:21:55.194405 | orchestrator | Sunday 08 February 2026 03:21:52 +0000 (0:00:00.584) 0:00:08.644 *******
2026-02-08 03:21:55.194410 | orchestrator | skipping: [testbed-node-0] => (item=ip_vs)
2026-02-08 03:21:55.194414 | orchestrator | skipping: [testbed-node-0]
2026-02-08 03:21:55.194432 | orchestrator | skipping: [testbed-node-1] => (item=ip_vs)
2026-02-08 03:21:55.194436 | orchestrator | skipping: [testbed-node-1]
2026-02-08 03:21:55.194441 | orchestrator | skipping: [testbed-node-2] => (item=ip_vs)
2026-02-08 03:21:55.194445 | orchestrator | skipping: [testbed-node-2]
2026-02-08 03:21:55.194449 | orchestrator |
2026-02-08 03:21:55.194453 | orchestrator | TASK [loadbalancer : Ensuring config directories exist] ************************
2026-02-08 03:21:55.194456 | orchestrator | Sunday 08 February 2026 03:21:53 +0000 (0:00:00.584) 0:00:09.229 *******
2026-02-08 03:21:55.194462 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-02-08 03:21:55.194473 | orchestrator | changed: [testbed-node-0] =>
(item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-02-08 03:21:55.194478 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-02-08 03:21:55.194481 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-08 
03:21:55.194490 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-08 03:21:55.194498 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-08 03:22:00.485691 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-08 03:22:00.485771 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-08 03:22:00.485785 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-08 03:22:00.485791 | orchestrator | 2026-02-08 03:22:00.485797 | orchestrator | TASK [loadbalancer : Ensuring haproxy service config subdir exists] ************ 2026-02-08 03:22:00.485804 | orchestrator | Sunday 08 February 2026 03:21:55 +0000 (0:00:01.831) 0:00:11.060 ******* 2026-02-08 03:22:00.485809 | orchestrator | changed: [testbed-node-0] 2026-02-08 03:22:00.485816 | orchestrator | changed: [testbed-node-1] 2026-02-08 03:22:00.485821 | orchestrator | changed: [testbed-node-2] 2026-02-08 03:22:00.485826 | orchestrator | 2026-02-08 03:22:00.485831 | orchestrator | TASK [loadbalancer : Ensuring proxysql service config subdirectories exist] **** 2026-02-08 03:22:00.485836 | orchestrator | Sunday 08 February 2026 03:21:56 +0000 (0:00:00.920) 0:00:11.981 ******* 2026-02-08 03:22:00.485841 | orchestrator | changed: [testbed-node-0] => (item=users) 2026-02-08 03:22:00.485847 | orchestrator | changed: [testbed-node-1] => (item=users) 2026-02-08 
03:22:00.485852 | orchestrator | changed: [testbed-node-2] => (item=users) 2026-02-08 03:22:00.485874 | orchestrator | changed: [testbed-node-0] => (item=rules) 2026-02-08 03:22:00.485879 | orchestrator | changed: [testbed-node-1] => (item=rules) 2026-02-08 03:22:00.485884 | orchestrator | changed: [testbed-node-2] => (item=rules) 2026-02-08 03:22:00.485889 | orchestrator | 2026-02-08 03:22:00.485894 | orchestrator | TASK [loadbalancer : Ensuring keepalived checks subdir exists] ***************** 2026-02-08 03:22:00.485899 | orchestrator | Sunday 08 February 2026 03:21:57 +0000 (0:00:01.425) 0:00:13.406 ******* 2026-02-08 03:22:00.485904 | orchestrator | changed: [testbed-node-0] 2026-02-08 03:22:00.485909 | orchestrator | changed: [testbed-node-1] 2026-02-08 03:22:00.485914 | orchestrator | changed: [testbed-node-2] 2026-02-08 03:22:00.485919 | orchestrator | 2026-02-08 03:22:00.485924 | orchestrator | TASK [loadbalancer : Remove mariadb.cfg if proxysql enabled] ******************* 2026-02-08 03:22:00.485929 | orchestrator | Sunday 08 February 2026 03:21:58 +0000 (0:00:00.887) 0:00:14.294 ******* 2026-02-08 03:22:00.485933 | orchestrator | ok: [testbed-node-0] 2026-02-08 03:22:00.485939 | orchestrator | ok: [testbed-node-1] 2026-02-08 03:22:00.485943 | orchestrator | ok: [testbed-node-2] 2026-02-08 03:22:00.485948 | orchestrator | 2026-02-08 03:22:00.485953 | orchestrator | TASK [loadbalancer : Removing checks for services which are disabled] ********** 2026-02-08 03:22:00.485958 | orchestrator | Sunday 08 February 2026 03:21:59 +0000 (0:00:01.446) 0:00:15.741 ******* 2026-02-08 03:22:00.485963 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-02-08 03:22:00.485981 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-08 03:22:00.485986 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-08 03:22:00.485992 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.6.20251130', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'__omit_place_holder__48183c94114ab547499cd9423714856fec5a9e58', '__omit_place_holder__48183c94114ab547499cd9423714856fec5a9e58'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-02-08 03:22:00.485998 | orchestrator | skipping: [testbed-node-0] 2026-02-08 03:22:00.486008 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-02-08 03:22:00.486055 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-08 03:22:00.486062 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 
'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-08 03:22:00.486093 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.6.20251130', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__48183c94114ab547499cd9423714856fec5a9e58', '__omit_place_holder__48183c94114ab547499cd9423714856fec5a9e58'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-02-08 03:22:00.486099 | orchestrator | skipping: [testbed-node-1] 2026-02-08 03:22:00.486109 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-02-08 03:22:03.485790 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-08 03:22:03.485962 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-08 03:22:03.485991 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.6.20251130', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__48183c94114ab547499cd9423714856fec5a9e58', '__omit_place_holder__48183c94114ab547499cd9423714856fec5a9e58'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-02-08 03:22:03.486004 | orchestrator | skipping: [testbed-node-2] 2026-02-08 03:22:03.486061 | orchestrator | 2026-02-08 03:22:03.486075 | orchestrator | TASK [loadbalancer : Copying checks for services 
which are enabled] ************ 2026-02-08 03:22:03.486087 | orchestrator | Sunday 08 February 2026 03:22:00 +0000 (0:00:00.609) 0:00:16.350 ******* 2026-02-08 03:22:03.486097 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-02-08 03:22:03.486109 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-02-08 03:22:03.486178 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-02-08 03:22:03.486211 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-08 03:22:03.486255 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-08 03:22:03.486267 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.6.20251130', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__48183c94114ab547499cd9423714856fec5a9e58', 
'__omit_place_holder__48183c94114ab547499cd9423714856fec5a9e58'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-02-08 03:22:03.486277 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-08 03:22:03.486288 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-08 03:22:03.486298 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.6.20251130', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__48183c94114ab547499cd9423714856fec5a9e58', 
'__omit_place_holder__48183c94114ab547499cd9423714856fec5a9e58'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-02-08 03:22:03.486327 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-08 03:22:11.757679 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-08 03:22:11.757821 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.6.20251130', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__48183c94114ab547499cd9423714856fec5a9e58', 
'__omit_place_holder__48183c94114ab547499cd9423714856fec5a9e58'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-02-08 03:22:11.757849 | orchestrator | 2026-02-08 03:22:11.757871 | orchestrator | TASK [loadbalancer : Copying over config.json files for services] ************** 2026-02-08 03:22:11.757884 | orchestrator | Sunday 08 February 2026 03:22:03 +0000 (0:00:03.005) 0:00:19.355 ******* 2026-02-08 03:22:11.757896 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-02-08 03:22:11.757940 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-02-08 03:22:11.757952 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': 
{'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-02-08 03:22:11.757964 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-08 03:22:11.758147 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-08 03:22:11.758183 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-08 03:22:11.758202 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-08 03:22:11.758223 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-08 03:22:11.758243 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 
'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-08 03:22:11.758266 | orchestrator | 2026-02-08 03:22:11.758283 | orchestrator | TASK [loadbalancer : Copying over haproxy.cfg] ********************************* 2026-02-08 03:22:11.758300 | orchestrator | Sunday 08 February 2026 03:22:06 +0000 (0:00:03.007) 0:00:22.363 ******* 2026-02-08 03:22:11.758320 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2026-02-08 03:22:11.758343 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2026-02-08 03:22:11.758363 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2026-02-08 03:22:11.758382 | orchestrator | 2026-02-08 03:22:11.758399 | orchestrator | TASK [loadbalancer : Copying over proxysql config] ***************************** 2026-02-08 03:22:11.758412 | orchestrator | Sunday 08 February 2026 03:22:08 +0000 (0:00:01.862) 0:00:24.225 ******* 2026-02-08 03:22:11.758436 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2026-02-08 03:22:11.758447 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2026-02-08 03:22:11.758458 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2026-02-08 03:22:11.758469 | orchestrator | 2026-02-08 03:22:11.758479 | orchestrator | TASK [loadbalancer : Copying over haproxy single external frontend config] ***** 2026-02-08 03:22:11.758490 | orchestrator | Sunday 08 February 2026 03:22:11 +0000 
(0:00:02.830) 0:00:27.055 ******* 2026-02-08 03:22:11.758501 | orchestrator | skipping: [testbed-node-0] 2026-02-08 03:22:11.758514 | orchestrator | skipping: [testbed-node-1] 2026-02-08 03:22:11.758525 | orchestrator | skipping: [testbed-node-2] 2026-02-08 03:22:11.758535 | orchestrator | 2026-02-08 03:22:11.758558 | orchestrator | TASK [loadbalancer : Copying over custom haproxy services configuration] ******* 2026-02-08 03:22:23.208707 | orchestrator | Sunday 08 February 2026 03:22:11 +0000 (0:00:00.575) 0:00:27.631 ******* 2026-02-08 03:22:23.208808 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2026-02-08 03:22:23.208837 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2026-02-08 03:22:23.208849 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2026-02-08 03:22:23.208860 | orchestrator | 2026-02-08 03:22:23.208870 | orchestrator | TASK [loadbalancer : Copying over keepalived.conf] ***************************** 2026-02-08 03:22:23.208881 | orchestrator | Sunday 08 February 2026 03:22:13 +0000 (0:00:02.140) 0:00:29.772 ******* 2026-02-08 03:22:23.208891 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2026-02-08 03:22:23.208902 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2026-02-08 03:22:23.208912 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2026-02-08 03:22:23.208921 | orchestrator | 2026-02-08 03:22:23.208937 | orchestrator | TASK [loadbalancer : Copying over haproxy.pem] ********************************* 2026-02-08 03:22:23.208953 | orchestrator | Sunday 08 February 2026 
03:22:16 +0000 (0:00:02.133) 0:00:31.905 ******* 2026-02-08 03:22:23.208982 | orchestrator | changed: [testbed-node-0] => (item=haproxy.pem) 2026-02-08 03:22:23.209000 | orchestrator | changed: [testbed-node-1] => (item=haproxy.pem) 2026-02-08 03:22:23.209016 | orchestrator | changed: [testbed-node-2] => (item=haproxy.pem) 2026-02-08 03:22:23.209033 | orchestrator | 2026-02-08 03:22:23.209050 | orchestrator | TASK [loadbalancer : Copying over haproxy-internal.pem] ************************ 2026-02-08 03:22:23.209067 | orchestrator | Sunday 08 February 2026 03:22:17 +0000 (0:00:01.392) 0:00:33.297 ******* 2026-02-08 03:22:23.209082 | orchestrator | changed: [testbed-node-0] => (item=haproxy-internal.pem) 2026-02-08 03:22:23.209093 | orchestrator | changed: [testbed-node-1] => (item=haproxy-internal.pem) 2026-02-08 03:22:23.209103 | orchestrator | changed: [testbed-node-2] => (item=haproxy-internal.pem) 2026-02-08 03:22:23.209231 | orchestrator | 2026-02-08 03:22:23.209246 | orchestrator | TASK [loadbalancer : include_tasks] ******************************************** 2026-02-08 03:22:23.209258 | orchestrator | Sunday 08 February 2026 03:22:18 +0000 (0:00:01.369) 0:00:34.666 ******* 2026-02-08 03:22:23.209269 | orchestrator | included: /ansible/roles/loadbalancer/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-08 03:22:23.209282 | orchestrator | 2026-02-08 03:22:23.209293 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over extra CA certificates] *** 2026-02-08 03:22:23.209304 | orchestrator | Sunday 08 February 2026 03:22:19 +0000 (0:00:00.566) 0:00:35.233 ******* 2026-02-08 03:22:23.209333 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-02-08 03:22:23.209373 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-02-08 03:22:23.209386 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-02-08 03:22:23.209423 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': 
['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-08 03:22:23.209437 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-08 03:22:23.209449 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-08 03:22:23.209460 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-08 03:22:23.209480 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-08 03:22:23.209492 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-08 03:22:23.209504 | orchestrator | 2026-02-08 03:22:23.209531 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS certificate] *** 2026-02-08 03:22:23.209543 | orchestrator | Sunday 08 February 2026 03:22:22 +0000 (0:00:03.247) 0:00:38.480 ******* 2026-02-08 03:22:23.209574 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-02-08 03:22:24.062476 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-08 03:22:24.062608 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-08 03:22:24.062639 | orchestrator | skipping: [testbed-node-0] 2026-02-08 03:22:24.062664 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-02-08 03:22:24.062704 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-08 03:22:24.062716 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-08 03:22:24.062727 | orchestrator | skipping: [testbed-node-1] 2026-02-08 03:22:24.062739 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-02-08 03:22:24.062778 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-08 03:22:24.062791 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-08 03:22:24.062802 | orchestrator | skipping: [testbed-node-2] 2026-02-08 03:22:24.062813 | orchestrator | 2026-02-08 03:22:24.062827 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS key] *** 2026-02-08 
03:22:24.062839 | orchestrator | Sunday 08 February 2026 03:22:23 +0000 (0:00:00.600) 0:00:39.081 ******* 2026-02-08 03:22:24.062852 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-02-08 03:22:24.062871 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-08 03:22:24.062883 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-08 03:22:24.062894 | orchestrator | skipping: [testbed-node-0] 2026-02-08 03:22:24.062905 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-02-08 03:22:24.062924 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-08 03:22:24.934185 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-08 03:22:24.934288 | orchestrator | skipping: [testbed-node-1] 2026-02-08 03:22:24.934306 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-02-08 03:22:24.934356 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-08 03:22:24.934370 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-08 03:22:24.934381 | orchestrator | skipping: [testbed-node-2] 2026-02-08 03:22:24.934392 | orchestrator | 2026-02-08 03:22:24.934404 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ******** 2026-02-08 03:22:24.934416 | orchestrator | Sunday 08 February 2026 03:22:24 +0000 (0:00:00.851) 0:00:39.932 ******* 2026-02-08 03:22:24.934427 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-02-08 03:22:24.934439 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-08 03:22:24.934476 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-08 03:22:24.934489 | orchestrator | skipping: [testbed-node-0] 2026-02-08 03:22:24.934501 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-02-08 03:22:24.934551 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-08 03:22:24.934566 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-08 03:22:24.934579 | orchestrator | skipping: [testbed-node-1] 2026-02-08 03:22:24.934592 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-02-08 03:22:24.934605 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-08 03:22:24.934618 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-08 03:22:24.934638 | orchestrator | skipping: [testbed-node-2] 2026-02-08 03:22:26.366087 | orchestrator | 2026-02-08 03:22:26.366173 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] *** 2026-02-08 03:22:26.366182 | orchestrator | Sunday 08 February 2026 03:22:24 +0000 (0:00:00.864) 0:00:40.797 ******* 2026-02-08 03:22:26.366204 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-02-08 03:22:26.366230 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-08 03:22:26.366236 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-08 03:22:26.366242 | orchestrator | skipping: [testbed-node-0] 2026-02-08 03:22:26.366248 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-02-08 03:22:26.366253 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-08 03:22:26.366258 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-08 03:22:26.366263 | orchestrator | skipping: [testbed-node-1] 2026-02-08 03:22:26.366281 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-02-08 03:22:26.366291 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-08 03:22:26.366296 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-08 03:22:26.366301 | orchestrator | skipping: [testbed-node-2] 2026-02-08 03:22:26.366305 | orchestrator | 2026-02-08 03:22:26.366310 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] ***** 2026-02-08 03:22:26.366315 | orchestrator | Sunday 08 February 2026 03:22:25 +0000 (0:00:00.595) 0:00:41.392 ******* 2026-02-08 03:22:26.366320 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-02-08 03:22:26.366324 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': 
['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-08 03:22:26.366329 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-08 03:22:26.366334 | orchestrator | skipping: [testbed-node-0] 2026-02-08 03:22:26.366346 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-02-08 03:22:27.579669 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': 
['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-08 03:22:27.579783 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-08 03:22:27.579802 | orchestrator | skipping: [testbed-node-1] 2026-02-08 03:22:27.579819 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-02-08 03:22:27.579834 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': 
['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-08 03:22:27.579848 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-08 03:22:27.579861 | orchestrator | skipping: [testbed-node-2] 2026-02-08 03:22:27.579875 | orchestrator | 2026-02-08 03:22:27.579890 | orchestrator | TASK [service-cert-copy : proxysql | Copying over extra CA certificates] ******* 2026-02-08 03:22:27.579906 | orchestrator | Sunday 08 February 2026 03:22:26 +0000 (0:00:00.848) 0:00:42.241 ******* 2026-02-08 03:22:27.579942 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 
 2026-02-08 03:22:27.579981 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-08 03:22:27.579991 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-08 03:22:27.579999 | orchestrator | skipping: [testbed-node-0] 2026-02-08 03:22:27.580007 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  
2026-02-08 03:22:27.580016 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-08 03:22:27.580024 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-08 03:22:27.580032 | orchestrator | skipping: [testbed-node-1] 2026-02-08 03:22:27.580040 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  
2026-02-08 03:22:27.580063 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-08 03:22:29.006283 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-08 03:22:29.006354 | orchestrator | skipping: [testbed-node-2] 2026-02-08 03:22:29.006362 | orchestrator | 2026-02-08 03:22:29.006367 | orchestrator | TASK [service-cert-copy : proxysql | Copying over backend internal TLS certificate] *** 2026-02-08 03:22:29.006373 | orchestrator | Sunday 08 February 2026 03:22:27 +0000 (0:00:01.204) 0:00:43.446 ******* 2026-02-08 03:22:29.006379 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-02-08 03:22:29.006385 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-08 03:22:29.006389 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-08 03:22:29.006413 | orchestrator | skipping: [testbed-node-0] 2026-02-08 03:22:29.006417 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-02-08 03:22:29.006432 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-08 03:22:29.006447 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-08 03:22:29.006451 | orchestrator | skipping: [testbed-node-1] 2026-02-08 03:22:29.006456 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-02-08 03:22:29.006460 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-08 03:22:29.006464 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-08 03:22:29.006468 | orchestrator | skipping: [testbed-node-2] 2026-02-08 03:22:29.006472 | orchestrator | 2026-02-08 03:22:29.006476 | orchestrator | TASK [service-cert-copy : proxysql | Copying over backend internal TLS key] **** 2026-02-08 03:22:29.006487 | orchestrator | Sunday 08 February 2026 03:22:28 +0000 (0:00:00.595) 0:00:44.041 ******* 2026-02-08 03:22:29.006491 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-02-08 03:22:29.006495 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-08 03:22:29.006504 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-08 03:22:35.431712 | orchestrator | skipping: [testbed-node-0] 2026-02-08 03:22:35.431823 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-02-08 03:22:35.431860 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-08 03:22:35.431873 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-08 03:22:35.431908 | orchestrator | skipping: [testbed-node-1] 2026-02-08 03:22:35.431919 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-02-08 03:22:35.431930 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-08 03:22:35.431945 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-08 03:22:35.431955 | orchestrator | skipping: [testbed-node-2] 2026-02-08 03:22:35.431965 | orchestrator | 2026-02-08 03:22:35.431976 | orchestrator | 
TASK [loadbalancer : Copying over haproxy start script] ************************ 2026-02-08 03:22:35.431987 | orchestrator | Sunday 08 February 2026 03:22:28 +0000 (0:00:00.835) 0:00:44.877 ******* 2026-02-08 03:22:35.431997 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2026-02-08 03:22:35.432024 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2026-02-08 03:22:35.432035 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2026-02-08 03:22:35.432044 | orchestrator | 2026-02-08 03:22:35.432054 | orchestrator | TASK [loadbalancer : Copying over proxysql start script] *********************** 2026-02-08 03:22:35.432064 | orchestrator | Sunday 08 February 2026 03:22:30 +0000 (0:00:01.629) 0:00:46.506 ******* 2026-02-08 03:22:35.432074 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2026-02-08 03:22:35.432084 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2026-02-08 03:22:35.432094 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2026-02-08 03:22:35.432104 | orchestrator | 2026-02-08 03:22:35.432145 | orchestrator | TASK [loadbalancer : Copying files for haproxy-ssh] **************************** 2026-02-08 03:22:35.432163 | orchestrator | Sunday 08 February 2026 03:22:32 +0000 (0:00:01.757) 0:00:48.264 ******* 2026-02-08 03:22:35.432174 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2026-02-08 03:22:35.432184 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2026-02-08 03:22:35.432193 | orchestrator | skipping: [testbed-node-2] => 
(item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2026-02-08 03:22:35.432203 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-02-08 03:22:35.432220 | orchestrator | skipping: [testbed-node-0] 2026-02-08 03:22:35.432233 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-02-08 03:22:35.432244 | orchestrator | skipping: [testbed-node-1] 2026-02-08 03:22:35.432255 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-02-08 03:22:35.432266 | orchestrator | skipping: [testbed-node-2] 2026-02-08 03:22:35.432277 | orchestrator | 2026-02-08 03:22:35.432288 | orchestrator | TASK [loadbalancer : Check loadbalancer containers] **************************** 2026-02-08 03:22:35.432300 | orchestrator | Sunday 08 February 2026 03:22:33 +0000 (0:00:00.793) 0:00:49.057 ******* 2026-02-08 03:22:35.432317 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-02-08 03:22:35.432335 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-02-08 03:22:35.432359 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-02-08 03:22:35.432392 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-08 03:22:39.581683 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': 
['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-08 03:22:39.581778 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-08 03:22:39.581786 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-08 03:22:39.581792 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-08 03:22:39.581797 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-08 03:22:39.581802 | orchestrator | 2026-02-08 03:22:39.581808 | orchestrator | TASK [include_role : aodh] ***************************************************** 2026-02-08 03:22:39.581814 | orchestrator | Sunday 08 February 2026 03:22:35 +0000 (0:00:02.243) 0:00:51.301 ******* 2026-02-08 03:22:39.581819 | orchestrator | included: aodh for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-08 03:22:39.581824 | orchestrator | 2026-02-08 03:22:39.581829 | orchestrator | TASK [haproxy-config : Copying over aodh haproxy config] *********************** 2026-02-08 03:22:39.581833 | orchestrator | Sunday 08 February 2026 03:22:36 +0000 (0:00:00.862) 0:00:52.164 ******* 2026-02-08 03:22:39.581872 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-02-08 03:22:39.581886 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-02-08 03:22:39.581896 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-02-08 03:22:39.581901 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-02-08 03:22:39.581906 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-02-08 03:22:39.581911 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-02-08 03:22:39.581918 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-02-08 03:22:39.581929 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-02-08 03:22:40.224824 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-02-08 03:22:40.224925 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-02-08 03:22:40.224941 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-02-08 03:22:40.224954 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-02-08 03:22:40.224966 | orchestrator | 2026-02-08 03:22:40.224979 | orchestrator | TASK [haproxy-config : Add configuration for aodh when using single external frontend] *** 
2026-02-08 03:22:40.224991 | orchestrator | Sunday 08 February 2026 03:22:39 +0000 (0:00:03.282) 0:00:55.447 ******* 2026-02-08 03:22:40.225020 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-02-08 03:22:40.225074 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-02-08 03:22:40.225088 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-02-08 03:22:40.225100 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-02-08 03:22:40.225197 | orchestrator | skipping: [testbed-node-0] 2026-02-08 03:22:40.225214 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-02-08 03:22:40.225226 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': 
{'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-02-08 03:22:40.225244 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-02-08 03:22:40.225266 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-02-08 03:22:40.225277 | orchestrator | skipping: [testbed-node-1] 2026-02-08 03:22:40.225300 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-02-08 03:22:48.658703 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-02-08 03:22:48.658792 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  
2026-02-08 03:22:48.658801 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-02-08 03:22:48.658808 | orchestrator | skipping: [testbed-node-2] 2026-02-08 03:22:48.658816 | orchestrator | 2026-02-08 03:22:48.658822 | orchestrator | TASK [haproxy-config : Configuring firewall for aodh] ************************** 2026-02-08 03:22:48.658829 | orchestrator | Sunday 08 February 2026 03:22:40 +0000 (0:00:00.651) 0:00:56.098 ******* 2026-02-08 03:22:48.658835 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2026-02-08 03:22:48.658862 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2026-02-08 03:22:48.658870 | orchestrator | skipping: [testbed-node-0] 2026-02-08 03:22:48.658876 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2026-02-08 03:22:48.658881 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2026-02-08 03:22:48.658887 | 
orchestrator | skipping: [testbed-node-1] 2026-02-08 03:22:48.658892 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2026-02-08 03:22:48.658897 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2026-02-08 03:22:48.658903 | orchestrator | skipping: [testbed-node-2] 2026-02-08 03:22:48.658908 | orchestrator | 2026-02-08 03:22:48.658914 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL users config] *************** 2026-02-08 03:22:48.658919 | orchestrator | Sunday 08 February 2026 03:22:41 +0000 (0:00:01.105) 0:00:57.204 ******* 2026-02-08 03:22:48.658924 | orchestrator | changed: [testbed-node-0] 2026-02-08 03:22:48.658930 | orchestrator | changed: [testbed-node-1] 2026-02-08 03:22:48.658935 | orchestrator | changed: [testbed-node-2] 2026-02-08 03:22:48.658940 | orchestrator | 2026-02-08 03:22:48.658946 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL rules config] *************** 2026-02-08 03:22:48.658951 | orchestrator | Sunday 08 February 2026 03:22:42 +0000 (0:00:01.309) 0:00:58.513 ******* 2026-02-08 03:22:48.658957 | orchestrator | changed: [testbed-node-0] 2026-02-08 03:22:48.658962 | orchestrator | changed: [testbed-node-1] 2026-02-08 03:22:48.658968 | orchestrator | changed: [testbed-node-2] 2026-02-08 03:22:48.658973 | orchestrator | 2026-02-08 03:22:48.658991 | orchestrator | TASK [include_role : barbican] ************************************************* 2026-02-08 03:22:48.658996 | orchestrator | Sunday 08 February 2026 03:22:44 +0000 (0:00:02.021) 0:01:00.535 ******* 2026-02-08 03:22:48.659002 | orchestrator | included: barbican for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-08 03:22:48.659007 | 
orchestrator | 2026-02-08 03:22:48.659023 | orchestrator | TASK [haproxy-config : Copying over barbican haproxy config] ******************* 2026-02-08 03:22:48.659029 | orchestrator | Sunday 08 February 2026 03:22:45 +0000 (0:00:00.660) 0:01:01.195 ******* 2026-02-08 03:22:48.659036 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-02-08 03:22:48.659044 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-08 03:22:48.659059 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-08 03:22:48.659065 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-02-08 03:22:48.659071 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-08 03:22:48.659082 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-08 03:22:49.392942 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-02-08 03:22:49.393107 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-08 03:22:49.393232 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-08 03:22:49.393267 | orchestrator | 2026-02-08 03:22:49.393302 | orchestrator | TASK [haproxy-config : Add configuration for barbican when using single external frontend] *** 2026-02-08 03:22:49.393338 | orchestrator | Sunday 08 February 2026 03:22:48 +0000 (0:00:03.334) 0:01:04.529 ******* 2026-02-08 03:22:49.393373 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-02-08 03:22:49.393408 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-08 03:22:49.393468 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-08 03:22:49.393520 | orchestrator | skipping: [testbed-node-0] 2026-02-08 03:22:49.393558 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-02-08 03:22:49.393604 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-08 03:22:49.393641 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-08 03:22:49.393675 | orchestrator | skipping: [testbed-node-1] 2026-02-08 03:22:49.393702 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-02-08 03:22:49.393743 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-08 03:22:59.050797 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-08 03:22:59.201278 | orchestrator | skipping: [testbed-node-2] 2026-02-08 03:22:59.201361 | orchestrator | 2026-02-08 03:22:59.201382 | orchestrator | TASK [haproxy-config : Configuring firewall for barbican] ********************** 2026-02-08 03:22:59.201405 | orchestrator | Sunday 08 February 2026 03:22:49 +0000 (0:00:00.734) 0:01:05.264 ******* 2026-02-08 03:22:59.201425 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-02-08 03:22:59.201450 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-02-08 03:22:59.201474 | orchestrator | skipping: [testbed-node-0] 2026-02-08 03:22:59.201548 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-02-08 03:22:59.201572 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-02-08 03:22:59.201593 | orchestrator | skipping: [testbed-node-1] 2026-02-08 03:22:59.201614 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-02-08 03:22:59.201634 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-02-08 03:22:59.201654 | orchestrator | skipping: [testbed-node-2] 2026-02-08 03:22:59.201672 | orchestrator | 2026-02-08 03:22:59.201693 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL users config] *********** 2026-02-08 03:22:59.201712 | orchestrator | Sunday 08 February 2026 03:22:50 +0000 (0:00:00.873) 0:01:06.137 ******* 2026-02-08 03:22:59.201733 | orchestrator | changed: [testbed-node-0] 2026-02-08 03:22:59.201754 | orchestrator | changed: [testbed-node-1] 2026-02-08 03:22:59.201774 | orchestrator | changed: [testbed-node-2] 2026-02-08 03:22:59.201796 | orchestrator | 2026-02-08 03:22:59.201819 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL rules config] *********** 2026-02-08 03:22:59.201841 | orchestrator | Sunday 08 February 2026 03:22:51 +0000 (0:00:01.534) 0:01:07.672 ******* 2026-02-08 03:22:59.201861 | orchestrator | changed: [testbed-node-0] 2026-02-08 03:22:59.201882 | orchestrator | changed: [testbed-node-1] 2026-02-08 03:22:59.201904 | orchestrator | changed: [testbed-node-2] 2026-02-08 03:22:59.201925 | orchestrator | 2026-02-08 03:22:59.201937 | orchestrator | TASK [include_role : blazar] *************************************************** 2026-02-08 03:22:59.201949 | orchestrator | 
Sunday 08 February 2026 03:22:53 +0000 (0:00:01.983) 0:01:09.655 ******* 2026-02-08 03:22:59.201960 | orchestrator | skipping: [testbed-node-0] 2026-02-08 03:22:59.201971 | orchestrator | skipping: [testbed-node-1] 2026-02-08 03:22:59.201982 | orchestrator | skipping: [testbed-node-2] 2026-02-08 03:22:59.202082 | orchestrator | 2026-02-08 03:22:59.202096 | orchestrator | TASK [include_role : ceph-rgw] ************************************************* 2026-02-08 03:22:59.202132 | orchestrator | Sunday 08 February 2026 03:22:54 +0000 (0:00:00.349) 0:01:10.005 ******* 2026-02-08 03:22:59.202154 | orchestrator | included: ceph-rgw for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-08 03:22:59.202166 | orchestrator | 2026-02-08 03:22:59.202177 | orchestrator | TASK [haproxy-config : Copying over ceph-rgw haproxy config] ******************* 2026-02-08 03:22:59.202188 | orchestrator | Sunday 08 February 2026 03:22:54 +0000 (0:00:00.654) 0:01:10.660 ******* 2026-02-08 03:22:59.202238 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2026-02-08 03:22:59.202253 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': 
{'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2026-02-08 03:22:59.202274 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2026-02-08 03:22:59.202286 | orchestrator | 2026-02-08 03:22:59.202297 | orchestrator | TASK [haproxy-config : Add configuration for ceph-rgw when using single external frontend] *** 2026-02-08 03:22:59.202309 | orchestrator | Sunday 08 February 2026 03:22:57 +0000 (0:00:02.893) 0:01:13.553 ******* 2026-02-08 03:22:59.202320 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 
'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2026-02-08 03:22:59.202342 | orchestrator | skipping: [testbed-node-0] 2026-02-08 03:22:59.202354 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2026-02-08 03:22:59.202365 | orchestrator | skipping: [testbed-node-1] 2026-02-08 03:22:59.202386 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 
'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2026-02-08 03:23:06.775624 | orchestrator | skipping: [testbed-node-2] 2026-02-08 03:23:06.775725 | orchestrator | 2026-02-08 03:23:06.775748 | orchestrator | TASK [haproxy-config : Configuring firewall for ceph-rgw] ********************** 2026-02-08 03:23:06.775765 | orchestrator | Sunday 08 February 2026 03:22:59 +0000 (0:00:01.365) 0:01:14.919 ******* 2026-02-08 03:23:06.775785 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2026-02-08 03:23:06.775839 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2026-02-08 03:23:06.775854 | orchestrator | skipping: [testbed-node-0] 2026-02-08 03:23:06.775864 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2026-02-08 03:23:06.775873 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2026-02-08 03:23:06.775904 | orchestrator | skipping: [testbed-node-1] 2026-02-08 03:23:06.775913 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2026-02-08 03:23:06.775922 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2026-02-08 03:23:06.775931 | orchestrator | skipping: [testbed-node-2] 2026-02-08 03:23:06.775940 | orchestrator | 2026-02-08 03:23:06.775949 
| orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL users config] *********** 2026-02-08 03:23:06.775958 | orchestrator | Sunday 08 February 2026 03:23:00 +0000 (0:00:01.755) 0:01:16.674 ******* 2026-02-08 03:23:06.775973 | orchestrator | skipping: [testbed-node-0] 2026-02-08 03:23:06.775996 | orchestrator | skipping: [testbed-node-1] 2026-02-08 03:23:06.776011 | orchestrator | skipping: [testbed-node-2] 2026-02-08 03:23:06.776025 | orchestrator | 2026-02-08 03:23:06.776039 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL rules config] *********** 2026-02-08 03:23:06.776053 | orchestrator | Sunday 08 February 2026 03:23:01 +0000 (0:00:00.419) 0:01:17.094 ******* 2026-02-08 03:23:06.776067 | orchestrator | skipping: [testbed-node-0] 2026-02-08 03:23:06.776080 | orchestrator | skipping: [testbed-node-1] 2026-02-08 03:23:06.776098 | orchestrator | skipping: [testbed-node-2] 2026-02-08 03:23:06.776205 | orchestrator | 2026-02-08 03:23:06.776218 | orchestrator | TASK [include_role : cinder] *************************************************** 2026-02-08 03:23:06.776229 | orchestrator | Sunday 08 February 2026 03:23:02 +0000 (0:00:01.337) 0:01:18.431 ******* 2026-02-08 03:23:06.776240 | orchestrator | included: cinder for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-08 03:23:06.776251 | orchestrator | 2026-02-08 03:23:06.776262 | orchestrator | TASK [haproxy-config : Copying over cinder haproxy config] ********************* 2026-02-08 03:23:06.776273 | orchestrator | Sunday 08 February 2026 03:23:03 +0000 (0:00:00.964) 0:01:19.395 ******* 2026-02-08 03:23:06.776303 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-02-08 03:23:06.776325 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-02-08 03:23:06.776348 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-02-08 
03:23:06.776360 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-02-08 03:23:06.776372 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-02-08 03:23:06.776390 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-02-08 03:23:07.462718 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-02-08 03:23:07.462845 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 
5672'], 'timeout': '30'}}})  2026-02-08 03:23:07.462892 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-02-08 03:23:07.462916 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-02-08 03:23:07.462937 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-02-08 03:23:07.462981 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-02-08 03:23:07.463002 | orchestrator | 2026-02-08 03:23:07.463023 | orchestrator | TASK [haproxy-config : Add configuration for cinder when using single external frontend] *** 2026-02-08 03:23:07.463043 | orchestrator | Sunday 08 February 2026 03:23:06 +0000 (0:00:03.349) 0:01:22.744 ******* 2026-02-08 03:23:07.463071 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 
'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-02-08 03:23:07.463174 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-02-08 03:23:07.463206 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-02-08 03:23:07.463226 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-02-08 03:23:07.463245 | orchestrator | skipping: [testbed-node-0] 2026-02-08 03:23:07.463259 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-02-08 03:23:07.463289 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-02-08 
03:23:17.054211 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-02-08 03:23:17.054397 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-02-08 03:23:17.054419 | orchestrator | skipping: [testbed-node-1] 2026-02-08 03:23:17.054436 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-02-08 03:23:17.054479 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-02-08 03:23:17.054492 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-02-08 03:23:17.054578 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-02-08 03:23:17.054594 | orchestrator | skipping: [testbed-node-2] 2026-02-08 03:23:17.054607 | orchestrator | 2026-02-08 03:23:17.054621 | orchestrator | TASK [haproxy-config : Configuring firewall for cinder] ************************ 2026-02-08 03:23:17.054637 | orchestrator | Sunday 08 February 2026 03:23:07 +0000 (0:00:00.699) 0:01:23.444 ******* 2026-02-08 03:23:17.054653 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-02-08 03:23:17.054670 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-02-08 03:23:17.054684 | orchestrator | skipping: [testbed-node-0] 2026-02-08 03:23:17.054698 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-02-08 03:23:17.054711 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-02-08 03:23:17.054725 | orchestrator | skipping: [testbed-node-1] 2026-02-08 03:23:17.054738 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-02-08 03:23:17.054751 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-02-08 03:23:17.054765 | orchestrator | skipping: [testbed-node-2] 2026-02-08 03:23:17.054777 | orchestrator | 2026-02-08 03:23:17.054790 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL users config] ************* 2026-02-08 03:23:17.054803 | orchestrator | Sunday 08 February 2026 03:23:08 +0000 (0:00:01.204) 0:01:24.649 ******* 2026-02-08 03:23:17.054816 | orchestrator | changed: [testbed-node-0] 2026-02-08 03:23:17.054829 | orchestrator | changed: [testbed-node-1] 2026-02-08 03:23:17.054842 | orchestrator | changed: [testbed-node-2] 2026-02-08 03:23:17.054855 | orchestrator | 2026-02-08 03:23:17.054867 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL rules config] ************* 2026-02-08 03:23:17.054881 | orchestrator | Sunday 08 February 2026 03:23:10 +0000 (0:00:01.315) 0:01:25.964 ******* 2026-02-08 03:23:17.054894 | orchestrator | changed: [testbed-node-0] 2026-02-08 03:23:17.054906 | orchestrator | changed: [testbed-node-1] 2026-02-08 03:23:17.054919 | orchestrator | changed: [testbed-node-2] 2026-02-08 03:23:17.054933 | orchestrator | 2026-02-08 03:23:17.054943 | orchestrator | TASK [include_role : cloudkitty] *********************************************** 2026-02-08 03:23:17.054964 | orchestrator | Sunday 08 February 2026 03:23:12 +0000 
(0:00:02.064) 0:01:28.028 ******* 2026-02-08 03:23:17.054975 | orchestrator | skipping: [testbed-node-0] 2026-02-08 03:23:17.054986 | orchestrator | skipping: [testbed-node-1] 2026-02-08 03:23:17.054997 | orchestrator | skipping: [testbed-node-2] 2026-02-08 03:23:17.055008 | orchestrator | 2026-02-08 03:23:17.055019 | orchestrator | TASK [include_role : cyborg] *************************************************** 2026-02-08 03:23:17.055030 | orchestrator | Sunday 08 February 2026 03:23:12 +0000 (0:00:00.315) 0:01:28.344 ******* 2026-02-08 03:23:17.055041 | orchestrator | skipping: [testbed-node-0] 2026-02-08 03:23:17.055052 | orchestrator | skipping: [testbed-node-1] 2026-02-08 03:23:17.055063 | orchestrator | skipping: [testbed-node-2] 2026-02-08 03:23:17.055074 | orchestrator | 2026-02-08 03:23:17.055085 | orchestrator | TASK [include_role : designate] ************************************************ 2026-02-08 03:23:17.055096 | orchestrator | Sunday 08 February 2026 03:23:12 +0000 (0:00:00.312) 0:01:28.656 ******* 2026-02-08 03:23:17.055154 | orchestrator | included: designate for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-08 03:23:17.055166 | orchestrator | 2026-02-08 03:23:17.055177 | orchestrator | TASK [haproxy-config : Copying over designate haproxy config] ****************** 2026-02-08 03:23:17.055188 | orchestrator | Sunday 08 February 2026 03:23:13 +0000 (0:00:00.984) 0:01:29.640 ******* 2026-02-08 03:23:17.055216 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-02-08 03:23:17.337483 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-02-08 03:23:17.337648 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-08 03:23:17.337681 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': 
['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-08 03:23:17.337747 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-02-08 03:23:17.337772 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-08 03:23:17.337816 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-02-08 03:23:17.337872 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-02-08 03:23:17.337899 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-08 03:23:17.337920 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-02-08 03:23:17.337955 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-08 03:23:17.337976 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-08 03:23:17.337996 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 
'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-02-08 03:23:17.338154 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-02-08 03:23:17.970477 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 
'listen_port': '9001'}}}}) 2026-02-08 03:23:17.970595 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-02-08 03:23:17.970645 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-08 03:23:17.970666 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': 
'30'}}})  2026-02-08 03:23:17.970680 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-08 03:23:17.970714 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-02-08 03:23:17.970748 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-02-08 03:23:17.970761 | orchestrator | 2026-02-08 
03:23:17.970776 | orchestrator | TASK [haproxy-config : Add configuration for designate when using single external frontend] *** 2026-02-08 03:23:17.970788 | orchestrator | Sunday 08 February 2026 03:23:17 +0000 (0:00:03.569) 0:01:33.210 ******* 2026-02-08 03:23:17.970801 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-02-08 03:23:17.970823 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-02-08 03:23:17.970835 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': 
{'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-08 03:23:17.970847 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-08 03:23:17.970866 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-08 03:23:17.970888 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-02-08 03:23:18.437880 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-02-08 03:23:18.437985 | orchestrator | skipping: [testbed-node-0] 2026-02-08 03:23:18.437997 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-02-08 03:23:18.438004 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-02-08 03:23:18.438012 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-08 03:23:18.438061 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-08 03:23:18.438463 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-08 03:23:18.438494 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-02-08 03:23:18.438509 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  
2026-02-08 03:23:18.438516 | orchestrator | skipping: [testbed-node-1] 2026-02-08 03:23:18.438527 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-02-08 03:23:18.438535 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-02-08 03:23:18.438542 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': 
['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-08 03:23:18.438549 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-08 03:23:18.438568 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-08 03:23:28.787868 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-02-08 03:23:28.787980 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-02-08 03:23:28.787996 | orchestrator | skipping: [testbed-node-2] 2026-02-08 03:23:28.788009 | orchestrator | 2026-02-08 03:23:28.788020 | orchestrator | TASK [haproxy-config : Configuring firewall for designate] ********************* 2026-02-08 03:23:28.788031 | orchestrator | Sunday 08 February 2026 03:23:18 +0000 (0:00:01.100) 0:01:34.310 ******* 2026-02-08 03:23:28.788046 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2026-02-08 03:23:28.788066 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2026-02-08 03:23:28.788084 | orchestrator | skipping: [testbed-node-0] 2026-02-08 03:23:28.788099 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 
'listen_port': '9001'}})  2026-02-08 03:23:28.788146 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2026-02-08 03:23:28.788161 | orchestrator | skipping: [testbed-node-1] 2026-02-08 03:23:28.788177 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2026-02-08 03:23:28.788197 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2026-02-08 03:23:28.788212 | orchestrator | skipping: [testbed-node-2] 2026-02-08 03:23:28.788227 | orchestrator | 2026-02-08 03:23:28.788243 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL users config] ********** 2026-02-08 03:23:28.788259 | orchestrator | Sunday 08 February 2026 03:23:19 +0000 (0:00:01.335) 0:01:35.646 ******* 2026-02-08 03:23:28.788274 | orchestrator | changed: [testbed-node-0] 2026-02-08 03:23:28.788318 | orchestrator | changed: [testbed-node-1] 2026-02-08 03:23:28.788336 | orchestrator | changed: [testbed-node-2] 2026-02-08 03:23:28.788352 | orchestrator | 2026-02-08 03:23:28.788369 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL rules config] ********** 2026-02-08 03:23:28.788385 | orchestrator | Sunday 08 February 2026 03:23:21 +0000 (0:00:01.287) 0:01:36.934 ******* 2026-02-08 03:23:28.788402 | orchestrator | changed: [testbed-node-0] 2026-02-08 03:23:28.788421 | orchestrator | changed: [testbed-node-1] 2026-02-08 03:23:28.788436 | orchestrator | changed: [testbed-node-2] 2026-02-08 03:23:28.788454 | orchestrator | 2026-02-08 03:23:28.788473 | orchestrator | TASK [include_role : 
etcd] ***************************************************** 2026-02-08 03:23:28.788490 | orchestrator | Sunday 08 February 2026 03:23:23 +0000 (0:00:02.011) 0:01:38.945 ******* 2026-02-08 03:23:28.788504 | orchestrator | skipping: [testbed-node-0] 2026-02-08 03:23:28.788516 | orchestrator | skipping: [testbed-node-1] 2026-02-08 03:23:28.788527 | orchestrator | skipping: [testbed-node-2] 2026-02-08 03:23:28.788538 | orchestrator | 2026-02-08 03:23:28.788549 | orchestrator | TASK [include_role : glance] *************************************************** 2026-02-08 03:23:28.788561 | orchestrator | Sunday 08 February 2026 03:23:23 +0000 (0:00:00.327) 0:01:39.273 ******* 2026-02-08 03:23:28.788572 | orchestrator | included: glance for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-08 03:23:28.788584 | orchestrator | 2026-02-08 03:23:28.788595 | orchestrator | TASK [haproxy-config : Copying over glance haproxy config] ********************* 2026-02-08 03:23:28.788606 | orchestrator | Sunday 08 February 2026 03:23:24 +0000 (0:00:01.135) 0:01:40.408 ******* 2026-02-08 03:23:28.788650 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 
'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-02-08 03:23:28.788673 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20251130', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 
192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-02-08 03:23:28.788721 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check 
inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-02-08 03:23:31.869927 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20251130', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 
'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-02-08 03:23:31.870168 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 
192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-02-08 03:23:31.870243 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20251130', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 
'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-02-08 03:23:31.870272 | orchestrator | 2026-02-08 03:23:31.870292 | orchestrator | TASK [haproxy-config : Add configuration for glance when using single external frontend] *** 2026-02-08 03:23:31.870310 | orchestrator | Sunday 08 February 2026 03:23:28 +0000 (0:00:04.375) 0:01:44.784 ******* 2026-02-08 03:23:31.870329 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check 
inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-02-08 03:23:31.870363 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20251130', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 
192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-02-08 03:23:36.389968 | orchestrator | skipping: [testbed-node-0] 2026-02-08 03:23:36.390097 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-02-08 03:23:36.390143 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20251130', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-02-08 03:23:36.390153 | orchestrator | skipping: [testbed-node-1] 2026-02-08 03:23:36.390173 | orchestrator | 
skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-02-08 03:23:36.390200 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20251130', 'volumes': 
['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-02-08 03:23:36.390208 | orchestrator | skipping: [testbed-node-2] 2026-02-08 03:23:36.390215 | orchestrator | 2026-02-08 03:23:36.390222 | orchestrator | TASK [haproxy-config : Configuring firewall for glance] ************************ 2026-02-08 03:23:36.390229 | orchestrator | Sunday 08 February 2026 03:23:31 +0000 (0:00:03.064) 0:01:47.848 ******* 2026-02-08 
03:23:36.390236 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-02-08 03:23:36.390255 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-02-08 03:23:44.700451 | orchestrator | skipping: [testbed-node-0] 2026-02-08 03:23:44.700597 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-02-08 03:23:44.700630 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout 
server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-02-08 03:23:44.700664 | orchestrator | skipping: [testbed-node-1] 2026-02-08 03:23:44.700685 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-02-08 03:23:44.700704 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-02-08 03:23:44.700722 | orchestrator | skipping: [testbed-node-2] 2026-02-08 03:23:44.700740 | orchestrator | 2026-02-08 03:23:44.700761 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL users config] ************* 2026-02-08 03:23:44.700804 | orchestrator | Sunday 08 February 2026 03:23:36 +0000 (0:00:04.413) 0:01:52.262 ******* 2026-02-08 03:23:44.700824 | orchestrator | changed: [testbed-node-0] 2026-02-08 03:23:44.700847 | orchestrator | changed: [testbed-node-1] 2026-02-08 03:23:44.700868 | orchestrator | changed: 
[testbed-node-2] 2026-02-08 03:23:44.700888 | orchestrator | 2026-02-08 03:23:44.700901 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL rules config] ************* 2026-02-08 03:23:44.700913 | orchestrator | Sunday 08 February 2026 03:23:37 +0000 (0:00:01.279) 0:01:53.542 ******* 2026-02-08 03:23:44.700924 | orchestrator | changed: [testbed-node-0] 2026-02-08 03:23:44.700935 | orchestrator | changed: [testbed-node-1] 2026-02-08 03:23:44.700970 | orchestrator | changed: [testbed-node-2] 2026-02-08 03:23:44.700984 | orchestrator | 2026-02-08 03:23:44.700996 | orchestrator | TASK [include_role : gnocchi] ************************************************** 2026-02-08 03:23:44.701009 | orchestrator | Sunday 08 February 2026 03:23:39 +0000 (0:00:02.061) 0:01:55.603 ******* 2026-02-08 03:23:44.701021 | orchestrator | skipping: [testbed-node-0] 2026-02-08 03:23:44.701034 | orchestrator | skipping: [testbed-node-1] 2026-02-08 03:23:44.701046 | orchestrator | skipping: [testbed-node-2] 2026-02-08 03:23:44.701059 | orchestrator | 2026-02-08 03:23:44.701072 | orchestrator | TASK [include_role : grafana] ************************************************** 2026-02-08 03:23:44.701084 | orchestrator | Sunday 08 February 2026 03:23:40 +0000 (0:00:00.337) 0:01:55.940 ******* 2026-02-08 03:23:44.701097 | orchestrator | included: grafana for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-08 03:23:44.701146 | orchestrator | 2026-02-08 03:23:44.701159 | orchestrator | TASK [haproxy-config : Copying over grafana haproxy config] ******************** 2026-02-08 03:23:44.701187 | orchestrator | Sunday 08 February 2026 03:23:41 +0000 (0:00:01.085) 0:01:57.026 ******* 2026-02-08 03:23:44.701236 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': 
['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-02-08 03:23:44.701257 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-02-08 03:23:44.701271 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-02-08 03:23:44.701284 | orchestrator | 2026-02-08 03:23:44.701297 | orchestrator | TASK 
[haproxy-config : Add configuration for grafana when using single external frontend] *** 2026-02-08 03:23:44.701310 | orchestrator | Sunday 08 February 2026 03:23:44 +0000 (0:00:02.943) 0:01:59.969 ******* 2026-02-08 03:23:44.701324 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-02-08 03:23:44.701346 | orchestrator | skipping: [testbed-node-0] 2026-02-08 03:23:44.701359 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-02-08 03:23:44.701370 | orchestrator | skipping: [testbed-node-1] 2026-02-08 03:23:44.701381 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-02-08 03:23:44.701393 | orchestrator | skipping: [testbed-node-2] 2026-02-08 03:23:44.701404 | orchestrator | 2026-02-08 03:23:44.701415 | orchestrator | TASK [haproxy-config : Configuring firewall for grafana] *********************** 2026-02-08 03:23:44.701426 | orchestrator | Sunday 08 February 2026 03:23:44 +0000 (0:00:00.394) 0:02:00.364 ******* 2026-02-08 03:23:44.701438 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2026-02-08 03:23:44.701458 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2026-02-08 03:23:53.518749 | orchestrator | skipping: [testbed-node-0] 2026-02-08 03:23:53.518882 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2026-02-08 03:23:53.518908 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2026-02-08 03:23:53.518925 | orchestrator | skipping: [testbed-node-1] 2026-02-08 
03:23:53.518935 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2026-02-08 03:23:53.519016 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2026-02-08 03:23:53.519032 | orchestrator | skipping: [testbed-node-2] 2026-02-08 03:23:53.519042 | orchestrator | 2026-02-08 03:23:53.519052 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL users config] ************ 2026-02-08 03:23:53.519062 | orchestrator | Sunday 08 February 2026 03:23:45 +0000 (0:00:00.886) 0:02:01.250 ******* 2026-02-08 03:23:53.519071 | orchestrator | changed: [testbed-node-0] 2026-02-08 03:23:53.519080 | orchestrator | changed: [testbed-node-1] 2026-02-08 03:23:53.519088 | orchestrator | changed: [testbed-node-2] 2026-02-08 03:23:53.519097 | orchestrator | 2026-02-08 03:23:53.519177 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL rules config] ************ 2026-02-08 03:23:53.519188 | orchestrator | Sunday 08 February 2026 03:23:46 +0000 (0:00:01.304) 0:02:02.555 ******* 2026-02-08 03:23:53.519197 | orchestrator | changed: [testbed-node-0] 2026-02-08 03:23:53.519206 | orchestrator | changed: [testbed-node-1] 2026-02-08 03:23:53.519214 | orchestrator | changed: [testbed-node-2] 2026-02-08 03:23:53.519223 | orchestrator | 2026-02-08 03:23:53.519232 | orchestrator | TASK [include_role : heat] ***************************************************** 2026-02-08 03:23:53.519241 | orchestrator | Sunday 08 February 2026 03:23:48 +0000 (0:00:02.087) 0:02:04.642 ******* 2026-02-08 03:23:53.519249 | orchestrator | skipping: [testbed-node-0] 2026-02-08 03:23:53.519258 | orchestrator | skipping: [testbed-node-1] 2026-02-08 03:23:53.519267 | orchestrator | 
skipping: [testbed-node-2] 2026-02-08 03:23:53.519276 | orchestrator | 2026-02-08 03:23:53.519285 | orchestrator | TASK [include_role : horizon] ************************************************** 2026-02-08 03:23:53.519294 | orchestrator | Sunday 08 February 2026 03:23:49 +0000 (0:00:00.323) 0:02:04.966 ******* 2026-02-08 03:23:53.519302 | orchestrator | included: horizon for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-08 03:23:53.519311 | orchestrator | 2026-02-08 03:23:53.519320 | orchestrator | TASK [haproxy-config : Copying over horizon haproxy config] ******************** 2026-02-08 03:23:53.519333 | orchestrator | Sunday 08 February 2026 03:23:50 +0000 (0:00:01.116) 0:02:06.083 ******* 2026-02-08 03:23:53.519369 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 
'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-02-08 03:23:53.519392 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 
'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-02-08 03:23:53.519428 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-02-08 03:23:55.225654 | orchestrator | 2026-02-08 03:23:55.225742 | orchestrator | TASK [haproxy-config : Add configuration for horizon when using single external frontend] *** 2026-02-08 03:23:55.225775 | orchestrator | Sunday 08 February 2026 03:23:53 +0000 (0:00:03.309) 0:02:09.393 ******* 2026-02-08 03:23:55.225801 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 
'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-02-08 03:23:55.225811 | orchestrator | skipping: [testbed-node-0] 2026-02-08 03:23:55.225836 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { 
path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-02-08 03:23:55.225851 | orchestrator | skipping: [testbed-node-1] 2026-02-08 03:23:55.225863 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 
'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-02-08 03:23:55.225885 | orchestrator | skipping: [testbed-node-2] 2026-02-08 03:23:55.225892 | orchestrator | 2026-02-08 03:23:55.225900 | orchestrator | TASK [haproxy-config : Configuring firewall for horizon] *********************** 2026-02-08 03:23:55.225914 | orchestrator | Sunday 08 February 2026 03:23:54 +0000 (0:00:00.708) 0:02:10.101 ******* 2026-02-08 03:23:55.225923 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-02-08 03:23:55.225934 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-02-08 03:23:55.225943 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-02-08 03:23:55.225961 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-02-08 03:24:04.497389 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2026-02-08 03:24:04.497502 | orchestrator | skipping: [testbed-node-0] 2026-02-08 03:24:04.497516 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-02-08 03:24:04.497528 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-02-08 03:24:04.497537 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-02-08 03:24:04.497561 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 
'tls_backend': 'no'}})  2026-02-08 03:24:04.497572 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-02-08 03:24:04.497580 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-02-08 03:24:04.497590 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-02-08 03:24:04.497599 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2026-02-08 03:24:04.497606 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-02-08 03:24:04.497611 | orchestrator | skipping: [testbed-node-1] 2026-02-08 03:24:04.497615 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2026-02-08 03:24:04.497638 | orchestrator | skipping: 
[testbed-node-2] 2026-02-08 03:24:04.497643 | orchestrator | 2026-02-08 03:24:04.497648 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL users config] ************ 2026-02-08 03:24:04.497654 | orchestrator | Sunday 08 February 2026 03:23:55 +0000 (0:00:00.996) 0:02:11.098 ******* 2026-02-08 03:24:04.497659 | orchestrator | changed: [testbed-node-1] 2026-02-08 03:24:04.497664 | orchestrator | changed: [testbed-node-0] 2026-02-08 03:24:04.497668 | orchestrator | changed: [testbed-node-2] 2026-02-08 03:24:04.497672 | orchestrator | 2026-02-08 03:24:04.497677 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL rules config] ************ 2026-02-08 03:24:04.497682 | orchestrator | Sunday 08 February 2026 03:23:56 +0000 (0:00:01.564) 0:02:12.663 ******* 2026-02-08 03:24:04.497686 | orchestrator | changed: [testbed-node-0] 2026-02-08 03:24:04.497691 | orchestrator | changed: [testbed-node-1] 2026-02-08 03:24:04.497695 | orchestrator | changed: [testbed-node-2] 2026-02-08 03:24:04.497700 | orchestrator | 2026-02-08 03:24:04.497705 | orchestrator | TASK [include_role : influxdb] ************************************************* 2026-02-08 03:24:04.497710 | orchestrator | Sunday 08 February 2026 03:23:59 +0000 (0:00:02.311) 0:02:14.974 ******* 2026-02-08 03:24:04.497714 | orchestrator | skipping: [testbed-node-0] 2026-02-08 03:24:04.497719 | orchestrator | skipping: [testbed-node-1] 2026-02-08 03:24:04.497734 | orchestrator | skipping: [testbed-node-2] 2026-02-08 03:24:04.497739 | orchestrator | 2026-02-08 03:24:04.497744 | orchestrator | TASK [include_role : ironic] *************************************************** 2026-02-08 03:24:04.497748 | orchestrator | Sunday 08 February 2026 03:23:59 +0000 (0:00:00.312) 0:02:15.287 ******* 2026-02-08 03:24:04.497753 | orchestrator | skipping: [testbed-node-0] 2026-02-08 03:24:04.497757 | orchestrator | skipping: [testbed-node-1] 2026-02-08 03:24:04.497762 | orchestrator | skipping: 
[testbed-node-2] 2026-02-08 03:24:04.497767 | orchestrator | 2026-02-08 03:24:04.497771 | orchestrator | TASK [include_role : keystone] ************************************************* 2026-02-08 03:24:04.497776 | orchestrator | Sunday 08 February 2026 03:23:59 +0000 (0:00:00.325) 0:02:15.612 ******* 2026-02-08 03:24:04.497780 | orchestrator | included: keystone for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-08 03:24:04.497785 | orchestrator | 2026-02-08 03:24:04.497789 | orchestrator | TASK [haproxy-config : Copying over keystone haproxy config] ******************* 2026-02-08 03:24:04.497794 | orchestrator | Sunday 08 February 2026 03:24:01 +0000 (0:00:01.408) 0:02:17.021 ******* 2026-02-08 03:24:04.497805 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-02-08 03:24:04.497813 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-02-08 03:24:04.497823 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-08 03:24:04.497830 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-08 03:24:04.497839 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-02-08 03:24:05.144820 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-02-08 03:24:05.144947 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-02-08 03:24:05.144985 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-08 03:24:05.145012 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-02-08 03:24:05.145031 | 
orchestrator | 2026-02-08 03:24:05.145041 | orchestrator | TASK [haproxy-config : Add configuration for keystone when using single external frontend] *** 2026-02-08 03:24:05.145052 | orchestrator | Sunday 08 February 2026 03:24:04 +0000 (0:00:03.346) 0:02:20.368 ******* 2026-02-08 03:24:05.145078 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-02-08 03:24:05.145089 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  
2026-02-08 03:24:05.145149 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-02-08 03:24:05.145162 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-02-08 03:24:05.145178 | orchestrator | skipping: [testbed-node-0] 2026-02-08 03:24:05.145190 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-08 03:24:05.145199 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-02-08 03:24:05.145208 | orchestrator | skipping: [testbed-node-1] 2026-02-08 03:24:05.145225 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': 
'5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-02-08 03:24:14.341831 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-08 03:24:14.341978 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-02-08 03:24:14.341988 | orchestrator | skipping: [testbed-node-2] 2026-02-08 03:24:14.341995 | orchestrator | 2026-02-08 03:24:14.342001 | orchestrator | TASK [haproxy-config : Configuring firewall for keystone] ********************** 2026-02-08 03:24:14.342007 | orchestrator | Sunday 08 February 2026 03:24:05 +0000 (0:00:00.642) 0:02:21.010 ******* 2026-02-08 03:24:14.342062 | orchestrator | 
skipping: [testbed-node-0] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-02-08 03:24:14.342073 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-02-08 03:24:14.342079 | orchestrator | skipping: [testbed-node-0] 2026-02-08 03:24:14.342084 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-02-08 03:24:14.342088 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-02-08 03:24:14.342093 | orchestrator | skipping: [testbed-node-1] 2026-02-08 03:24:14.342118 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-02-08 03:24:14.342125 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-02-08 03:24:14.342129 | orchestrator | skipping: [testbed-node-2] 2026-02-08 03:24:14.342134 
| orchestrator | 2026-02-08 03:24:14.342138 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL users config] *********** 2026-02-08 03:24:14.342142 | orchestrator | Sunday 08 February 2026 03:24:06 +0000 (0:00:01.060) 0:02:22.071 ******* 2026-02-08 03:24:14.342169 | orchestrator | changed: [testbed-node-0] 2026-02-08 03:24:14.342174 | orchestrator | changed: [testbed-node-1] 2026-02-08 03:24:14.342178 | orchestrator | changed: [testbed-node-2] 2026-02-08 03:24:14.342183 | orchestrator | 2026-02-08 03:24:14.342187 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL rules config] *********** 2026-02-08 03:24:14.342191 | orchestrator | Sunday 08 February 2026 03:24:07 +0000 (0:00:01.292) 0:02:23.363 ******* 2026-02-08 03:24:14.342196 | orchestrator | changed: [testbed-node-0] 2026-02-08 03:24:14.342200 | orchestrator | changed: [testbed-node-1] 2026-02-08 03:24:14.342204 | orchestrator | changed: [testbed-node-2] 2026-02-08 03:24:14.342209 | orchestrator | 2026-02-08 03:24:14.342213 | orchestrator | TASK [include_role : letsencrypt] ********************************************** 2026-02-08 03:24:14.342224 | orchestrator | Sunday 08 February 2026 03:24:09 +0000 (0:00:02.081) 0:02:25.445 ******* 2026-02-08 03:24:14.342229 | orchestrator | skipping: [testbed-node-0] 2026-02-08 03:24:14.342233 | orchestrator | skipping: [testbed-node-1] 2026-02-08 03:24:14.342237 | orchestrator | skipping: [testbed-node-2] 2026-02-08 03:24:14.342242 | orchestrator | 2026-02-08 03:24:14.342246 | orchestrator | TASK [include_role : magnum] *************************************************** 2026-02-08 03:24:14.342269 | orchestrator | Sunday 08 February 2026 03:24:09 +0000 (0:00:00.357) 0:02:25.802 ******* 2026-02-08 03:24:14.342276 | orchestrator | included: magnum for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-08 03:24:14.342283 | orchestrator | 2026-02-08 03:24:14.342290 | orchestrator | TASK [haproxy-config : Copying over magnum 
haproxy config] ********************* 2026-02-08 03:24:14.342297 | orchestrator | Sunday 08 February 2026 03:24:11 +0000 (0:00:01.285) 0:02:27.088 ******* 2026-02-08 03:24:14.342312 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-08 03:24:14.342323 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-08 03:24:14.342332 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': 
{'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-08 03:24:14.342341 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-08 03:24:14.342361 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-08 03:24:19.699570 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-08 03:24:19.699683 | orchestrator | 2026-02-08 03:24:19.699699 | orchestrator | TASK [haproxy-config : Add configuration for magnum when using single external frontend] *** 2026-02-08 03:24:19.699711 | orchestrator | Sunday 08 February 2026 03:24:14 +0000 (0:00:03.123) 0:02:30.211 ******* 2026-02-08 03:24:19.699724 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': 
['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-02-08 03:24:19.699736 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-08 03:24:19.699746 | orchestrator | skipping: [testbed-node-0] 2026-02-08 03:24:19.699758 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-02-08 03:24:19.699854 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-08 03:24:19.699868 | orchestrator | skipping: [testbed-node-1] 2026-02-08 03:24:19.699878 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': 
{'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-02-08 03:24:19.699889 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-08 03:24:19.699899 | orchestrator | skipping: [testbed-node-2] 2026-02-08 03:24:19.699909 | orchestrator | 2026-02-08 03:24:19.699919 | orchestrator | TASK [haproxy-config : Configuring firewall for magnum] ************************ 2026-02-08 03:24:19.699929 | orchestrator | Sunday 08 February 2026 03:24:15 +0000 (0:00:00.696) 0:02:30.908 ******* 2026-02-08 03:24:19.699940 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2026-02-08 03:24:19.699952 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2026-02-08 03:24:19.699972 | orchestrator | skipping: [testbed-node-0] 2026-02-08 03:24:19.699982 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2026-02-08 03:24:19.699992 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2026-02-08 03:24:19.700002 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2026-02-08 03:24:19.700011 | orchestrator | skipping: [testbed-node-1] 2026-02-08 03:24:19.700021 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2026-02-08 03:24:19.700031 | orchestrator | skipping: [testbed-node-2] 2026-02-08 03:24:19.700041 | orchestrator | 2026-02-08 03:24:19.700051 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL users config] ************* 2026-02-08 03:24:19.700061 | orchestrator | Sunday 08 February 2026 03:24:15 +0000 (0:00:00.919) 0:02:31.827 ******* 2026-02-08 03:24:19.700070 | orchestrator | changed: [testbed-node-0] 2026-02-08 03:24:19.700080 | orchestrator | changed: [testbed-node-1] 2026-02-08 03:24:19.700089 | orchestrator | changed: [testbed-node-2] 2026-02-08 03:24:19.700124 | orchestrator | 2026-02-08 03:24:19.700135 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL rules config] ************* 2026-02-08 03:24:19.700144 | orchestrator | Sunday 08 February 2026 03:24:17 +0000 (0:00:01.607) 0:02:33.434 ******* 2026-02-08 03:24:19.700154 | orchestrator | changed: [testbed-node-0] 2026-02-08 03:24:19.700164 | orchestrator | changed: [testbed-node-1] 2026-02-08 03:24:19.700173 | orchestrator | changed: 
[testbed-node-2] 2026-02-08 03:24:19.700183 | orchestrator | 2026-02-08 03:24:19.700197 | orchestrator | TASK [include_role : manila] *************************************************** 2026-02-08 03:24:19.700214 | orchestrator | Sunday 08 February 2026 03:24:19 +0000 (0:00:02.136) 0:02:35.570 ******* 2026-02-08 03:24:24.130704 | orchestrator | included: manila for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-08 03:24:24.130802 | orchestrator | 2026-02-08 03:24:24.130816 | orchestrator | TASK [haproxy-config : Copying over manila haproxy config] ********************* 2026-02-08 03:24:24.130827 | orchestrator | Sunday 08 February 2026 03:24:20 +0000 (0:00:01.121) 0:02:36.692 ******* 2026-02-08 03:24:24.130841 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-02-08 03:24:24.130854 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-02-08 03:24:24.130892 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-02-08 03:24:24.130905 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-02-08 03:24:24.130916 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-02-08 03:24:24.130957 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-02-08 03:24:24.130969 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-02-08 03:24:24.130979 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 
'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-02-08 03:24:24.130996 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-02-08 03:24:24.131007 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 
5672'], 'timeout': '30'}}})  2026-02-08 03:24:24.131017 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-02-08 03:24:24.131039 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-02-08 03:24:25.172947 | orchestrator | 2026-02-08 03:24:25.173056 | orchestrator | TASK [haproxy-config : Add configuration for manila when using single external frontend] *** 2026-02-08 03:24:25.173075 | orchestrator | Sunday 08 February 2026 03:24:24 +0000 (0:00:03.394) 0:02:40.086 ******* 2026-02-08 03:24:25.173091 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': 
['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-02-08 03:24:25.173224 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-02-08 03:24:25.173241 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-02-08 03:24:25.173254 | orchestrator | 
skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-02-08 03:24:25.173266 | orchestrator | skipping: [testbed-node-0] 2026-02-08 03:24:25.173294 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-02-08 03:24:25.173328 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-02-08 03:24:25.173341 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-02-08 03:24:25.173362 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-02-08 03:24:25.173373 | orchestrator | skipping: [testbed-node-1] 2026-02-08 03:24:25.173384 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-02-08 03:24:25.173396 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-02-08 03:24:25.173412 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-02-08 03:24:25.173433 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 
'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-02-08 03:24:36.201360 | orchestrator | skipping: [testbed-node-2] 2026-02-08 03:24:36.201491 | orchestrator | 2026-02-08 03:24:36.201575 | orchestrator | TASK [haproxy-config : Configuring firewall for manila] ************************ 2026-02-08 03:24:36.201661 | orchestrator | Sunday 08 February 2026 03:24:25 +0000 (0:00:01.049) 0:02:41.136 ******* 2026-02-08 03:24:36.201682 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2026-02-08 03:24:36.201703 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2026-02-08 03:24:36.201724 | orchestrator | skipping: [testbed-node-0] 2026-02-08 03:24:36.201742 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2026-02-08 03:24:36.201761 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2026-02-08 03:24:36.201779 | orchestrator | skipping: [testbed-node-1] 2026-02-08 
03:24:36.201797 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2026-02-08 03:24:36.201815 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2026-02-08 03:24:36.201902 | orchestrator | skipping: [testbed-node-2] 2026-02-08 03:24:36.201923 | orchestrator | 2026-02-08 03:24:36.201938 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL users config] ************* 2026-02-08 03:24:36.201951 | orchestrator | Sunday 08 February 2026 03:24:26 +0000 (0:00:00.904) 0:02:42.040 ******* 2026-02-08 03:24:36.201964 | orchestrator | changed: [testbed-node-0] 2026-02-08 03:24:36.201976 | orchestrator | changed: [testbed-node-1] 2026-02-08 03:24:36.201989 | orchestrator | changed: [testbed-node-2] 2026-02-08 03:24:36.202001 | orchestrator | 2026-02-08 03:24:36.202138 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL rules config] ************* 2026-02-08 03:24:36.202161 | orchestrator | Sunday 08 February 2026 03:24:27 +0000 (0:00:01.271) 0:02:43.312 ******* 2026-02-08 03:24:36.202182 | orchestrator | changed: [testbed-node-0] 2026-02-08 03:24:36.202204 | orchestrator | changed: [testbed-node-1] 2026-02-08 03:24:36.202220 | orchestrator | changed: [testbed-node-2] 2026-02-08 03:24:36.202231 | orchestrator | 2026-02-08 03:24:36.202242 | orchestrator | TASK [include_role : mariadb] ************************************************** 2026-02-08 03:24:36.202253 | orchestrator | Sunday 08 February 2026 03:24:29 +0000 (0:00:02.048) 0:02:45.361 ******* 2026-02-08 03:24:36.202263 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-08 03:24:36.202274 | orchestrator | 2026-02-08 03:24:36.202285 | 
orchestrator | TASK [mariadb : Ensure mysql monitor user exist] ******************************* 2026-02-08 03:24:36.202296 | orchestrator | Sunday 08 February 2026 03:24:30 +0000 (0:00:01.414) 0:02:46.775 ******* 2026-02-08 03:24:36.202307 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-02-08 03:24:36.202318 | orchestrator | 2026-02-08 03:24:36.202329 | orchestrator | TASK [haproxy-config : Copying over mariadb haproxy config] ******************** 2026-02-08 03:24:36.202340 | orchestrator | Sunday 08 February 2026 03:24:33 +0000 (0:00:03.008) 0:02:49.784 ******* 2026-02-08 03:24:36.202392 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 
'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-08 03:24:36.202426 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-02-08 03:24:36.202439 | orchestrator | skipping: [testbed-node-0] 2026-02-08 03:24:36.202452 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 
'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-08 03:24:36.202480 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-02-08 03:24:36.202509 | orchestrator | skipping: [testbed-node-1] 2026-02-08 03:24:36.202547 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-08 03:24:38.582585 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.15.20251130', 'volumes': 
['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})
2026-02-08 03:24:38.582670 | orchestrator | skipping: [testbed-node-2]
2026-02-08 03:24:38.582682 | orchestrator |
2026-02-08 03:24:38.582690 | orchestrator | TASK [haproxy-config : Add configuration for mariadb when using single external frontend] ***
2026-02-08 03:24:38.582698 | orchestrator | Sunday 08 February 2026 03:24:36 +0000 (0:00:02.284) 0:02:52.069 *******
2026-02-08 03:24:38.582723 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-02-08 03:24:38.582751 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})
2026-02-08 03:24:38.582759 | orchestrator | skipping: [testbed-node-0]
2026-02-08 03:24:38.582782 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-02-08 03:24:38.582790 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})
2026-02-08 03:24:38.582818 | orchestrator | skipping: [testbed-node-1]
2026-02-08 03:24:38.582836 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-02-08 03:24:38.582856 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})
2026-02-08 03:24:48.487909 | orchestrator | skipping: [testbed-node-2]
2026-02-08 03:24:48.488034 | orchestrator |
2026-02-08 03:24:48.488067 | orchestrator | TASK [haproxy-config : Configuring firewall for mariadb] ***********************
2026-02-08 03:24:48.488196 | orchestrator | Sunday 08 February 2026 03:24:38 +0000 (0:00:02.383) 0:02:54.453 *******
2026-02-08 03:24:48.488216 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})
2026-02-08 03:24:48.488235 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})
2026-02-08 03:24:48.488279 | orchestrator | skipping: [testbed-node-0]
2026-02-08 03:24:48.488311 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})
2026-02-08 03:24:48.488327 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})
2026-02-08 03:24:48.488341 | orchestrator | skipping: [testbed-node-1]
2026-02-08 03:24:48.488356 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})
2026-02-08 03:24:48.488370 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})
2026-02-08 03:24:48.488384 | orchestrator | skipping: [testbed-node-2]
2026-02-08 03:24:48.488398 | orchestrator |
2026-02-08 03:24:48.488413 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL users config] ************
2026-02-08 03:24:48.488428 | orchestrator | Sunday 08 February 2026 03:24:41 +0000 (0:00:02.998) 0:02:57.451 *******
2026-02-08 03:24:48.488443 | orchestrator | changed: [testbed-node-0]
2026-02-08 03:24:48.488477 | orchestrator | changed: [testbed-node-1]
2026-02-08 03:24:48.488493 | orchestrator | changed: [testbed-node-2]
2026-02-08 03:24:48.488508 | orchestrator |
2026-02-08 03:24:48.488523 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL rules config] ************
2026-02-08 03:24:48.488539 | orchestrator | Sunday 08 February 2026 03:24:43 +0000 (0:00:02.042) 0:02:59.494 *******
2026-02-08 03:24:48.488553 | orchestrator | skipping: [testbed-node-0]
2026-02-08 03:24:48.488569 | orchestrator | skipping: [testbed-node-1]
2026-02-08 03:24:48.488584 | orchestrator | skipping: [testbed-node-2]
2026-02-08 03:24:48.488598 | orchestrator |
2026-02-08 03:24:48.488613 | orchestrator | TASK [include_role : masakari] *************************************************
2026-02-08 03:24:48.488637 | orchestrator | Sunday 08 February 2026 03:24:45 +0000 (0:00:01.498) 0:03:00.993 *******
2026-02-08 03:24:48.488653 | orchestrator | skipping: [testbed-node-0]
2026-02-08 03:24:48.488667 | orchestrator | skipping: [testbed-node-1]
2026-02-08 03:24:48.488682 | orchestrator | skipping: [testbed-node-2]
2026-02-08 03:24:48.488697 | orchestrator |
2026-02-08 03:24:48.488712 | orchestrator | TASK [include_role : memcached] ************************************************
2026-02-08 03:24:48.488727 | orchestrator | Sunday 08 February 2026 03:24:45 +0000 (0:00:00.348) 0:03:01.341 *******
2026-02-08 03:24:48.488741 | orchestrator | included: memcached for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-08 03:24:48.488753 | orchestrator |
2026-02-08 03:24:48.488766 | orchestrator | TASK [haproxy-config : Copying over memcached haproxy config] ******************
2026-02-08 03:24:48.488779 | orchestrator | Sunday 08 February 2026 03:24:46 +0000 (0:00:01.385) 0:03:02.726 *******
2026-02-08 03:24:48.488793 | orchestrator | changed: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.24.20251130', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})
2026-02-08 03:24:48.488816 | orchestrator | changed: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.24.20251130', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})
2026-02-08 03:24:48.488830 | orchestrator | changed: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.24.20251130', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})
2026-02-08 03:24:48.488842 | orchestrator |
2026-02-08 03:24:48.488855 | orchestrator | TASK [haproxy-config : Add configuration for memcached when using single external frontend] ***
2026-02-08 03:24:48.488868 | orchestrator | Sunday 08 February 2026 03:24:48 +0000 (0:00:01.433) 0:03:04.160 *******
2026-02-08 03:24:48.488890 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.24.20251130', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})
2026-02-08 03:24:56.924701 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.24.20251130', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})
2026-02-08 03:24:56.924825 | orchestrator | skipping: [testbed-node-0]
2026-02-08 03:24:56.924845 | orchestrator | skipping: [testbed-node-1]
2026-02-08 03:24:56.924858 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.24.20251130', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})
2026-02-08 03:24:56.924870 | orchestrator | skipping: [testbed-node-2]
2026-02-08 03:24:56.924881 | orchestrator |
2026-02-08 03:24:56.924894 | orchestrator | TASK [haproxy-config : Configuring firewall for memcached] *********************
2026-02-08 03:24:56.924907 | orchestrator | Sunday 08 February 2026 03:24:48 +0000 (0:00:00.403) 0:03:04.563 *******
2026-02-08 03:24:56.924919 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})
2026-02-08 03:24:56.924933 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})
2026-02-08 03:24:56.924945 | orchestrator | skipping: [testbed-node-0]
2026-02-08 03:24:56.924956 | orchestrator | skipping: [testbed-node-1]
2026-02-08 03:24:56.924988 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})
2026-02-08 03:24:56.925069 | orchestrator | skipping: [testbed-node-2]
2026-02-08 03:24:56.925082 | orchestrator |
2026-02-08 03:24:56.925125 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL users config] **********
2026-02-08 03:24:56.925139 | orchestrator | Sunday 08 February 2026 03:24:49 +0000 (0:00:00.891) 0:03:05.454 *******
2026-02-08 03:24:56.925150 | orchestrator | skipping: [testbed-node-0]
2026-02-08 03:24:56.925160 | orchestrator | skipping: [testbed-node-1]
2026-02-08 03:24:56.925172 | orchestrator | skipping: [testbed-node-2]
2026-02-08 03:24:56.925183 | orchestrator |
2026-02-08 03:24:56.925195 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL rules config] **********
2026-02-08 03:24:56.925238 | orchestrator | Sunday 08 February 2026 03:24:50 +0000 (0:00:00.482) 0:03:05.936 *******
2026-02-08 03:24:56.925251 | orchestrator | skipping: [testbed-node-0]
2026-02-08 03:24:56.925264 | orchestrator | skipping: [testbed-node-1]
2026-02-08 03:24:56.925272 | orchestrator | skipping: [testbed-node-2]
2026-02-08 03:24:56.925280 | orchestrator |
2026-02-08 03:24:56.925288 | orchestrator | TASK [include_role : mistral] **************************************************
2026-02-08 03:24:56.925297 | orchestrator | Sunday 08 February 2026 03:24:51 +0000 (0:00:01.280) 0:03:07.217 *******
2026-02-08 03:24:56.925304 | orchestrator | skipping: [testbed-node-0]
2026-02-08 03:24:56.925312 | orchestrator | skipping: [testbed-node-1]
2026-02-08 03:24:56.925320 | orchestrator | skipping: [testbed-node-2]
2026-02-08 03:24:56.925330 | orchestrator |
2026-02-08 03:24:56.925342 | orchestrator | TASK [include_role : neutron] **************************************************
2026-02-08 03:24:56.925353 | orchestrator | Sunday 08 February 2026 03:24:51 +0000 (0:00:00.344) 0:03:07.562 *******
2026-02-08 03:24:56.925365 | orchestrator | included: neutron for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-08 03:24:56.925376 | orchestrator |
2026-02-08 03:24:56.925388 | orchestrator | TASK [haproxy-config : Copying over neutron haproxy config] ********************
2026-02-08 03:24:56.925399 | orchestrator | Sunday 08 February 2026 03:24:53 +0000 (0:00:01.470) 0:03:09.033 *******
2026-02-08 03:24:56.925473 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-02-08 03:24:56.925491 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.2.20251130', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})
2026-02-08 03:24:56.925509 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})
2026-02-08 03:24:56.925523 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})
2026-02-08 03:24:56.925544 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})
2026-02-08 03:24:56.925567 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})
2026-02-08 03:24:57.156715 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.2.20251130', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2026-02-08 03:24:57.156802 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2026-02-08 03:24:57.156831 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})
2026-02-08 03:24:57.156841 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-02-08 03:24:57.156869 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})
2026-02-08 03:24:57.156878 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2026-02-08 03:24:57.156902 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2026-02-08 03:24:57.156911 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.2.20251130', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})
2026-02-08 03:24:57.156925 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})
2026-02-08 03:24:57.156942 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})
2026-02-08 03:24:57.156951 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-02-08 03:24:57.156961 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.2.20251130', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})
2026-02-08 03:24:57.156977 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-02-08 03:24:57.308381 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})
2026-02-08 03:24:57.308470 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.2.20251130', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})
2026-02-08 03:24:57.308477 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})
2026-02-08 03:24:57.308482 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})
2026-02-08 03:24:57.308486 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes':
['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-02-08 03:24:57.308504 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-02-08 03:24:57.308509 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-02-08 03:24:57.308518 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-02-08 03:24:57.308522 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-02-08 03:24:57.308527 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.2.20251130', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-02-08 03:24:57.308531 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.2.20251130', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-02-08 03:24:57.308535 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-02-08 03:24:57.308546 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-02-08 03:24:57.416308 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-02-08 03:24:57.416392 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-02-08 03:24:57.416400 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-08 03:24:57.416409 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 
'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-02-08 03:24:57.416415 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-08 03:24:57.416440 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-02-08 03:24:57.416467 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-02-08 03:24:57.416476 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-02-08 03:24:57.416483 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-02-08 03:24:57.416490 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 
'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.2.20251130', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-02-08 03:24:57.416497 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-02-08 03:24:57.416504 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-02-08 03:24:57.416526 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.2.20251130', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-02-08 03:24:58.757956 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-02-08 03:24:58.758076 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})
2026-02-08 03:24:58.758118 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})
2026-02-08 03:24:58.758127 | orchestrator |
2026-02-08 03:24:58.758135 | orchestrator | TASK [haproxy-config : Add configuration for neutron when using single external frontend] ***
2026-02-08 03:24:58.758143 | orchestrator | Sunday 08 February 2026 03:24:57 +0000 (0:00:04.367) 0:03:13.401 *******
2026-02-08 03:24:58.758152 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''],
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-08 03:24:58.758206 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.2.20251130', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-02-08 03:24:58.758217 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-02-08 03:24:58.758225 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-02-08 03:24:58.758232 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-02-08 03:24:58.758240 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': 
False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-02-08 03:24:58.758253 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.2.20251130', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-02-08 03:24:58.758272 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-08 03:24:58.849731 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 
'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-02-08 03:24:58.849824 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.2.20251130', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-02-08 03:24:58.849844 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-02-08 03:24:58.849858 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-02-08 03:24:58.849915 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-08 03:24:58.849953 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-02-08 03:24:58.849970 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-02-08 03:24:58.849984 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-02-08 03:24:58.850000 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': 
{'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-02-08 03:24:58.850067 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-02-08 03:24:58.850177 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-02-08 03:24:58.850199 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.2.20251130', 'privileged': False, 'enabled': False, 'group': 
'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-02-08 03:24:58.850227 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.2.20251130', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-02-08 03:24:58.949330 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-02-08 03:24:58.949452 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-02-08 03:24:58.949479 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-02-08 03:24:58.949536 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-02-08 03:24:58.949578 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 
'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-08 03:24:58.949601 | orchestrator | skipping: [testbed-node-0] 2026-02-08 03:24:58.949649 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-02-08 03:24:58.949669 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-02-08 03:24:58.949689 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-08 03:24:58.949722 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-02-08 03:24:58.949743 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.2.20251130', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-02-08 03:24:58.949764 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-02-08 03:24:58.949795 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.2.20251130', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-02-08 03:24:59.217526 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-02-08 03:24:59.217631 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-02-08 03:24:59.217717 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.2.20251130', 'privileged': 
True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-02-08 03:24:59.217747 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-02-08 03:24:59.217760 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-02-08 03:24:59.217773 | orchestrator | skipping: 
[testbed-node-1] 2026-02-08 03:24:59.217808 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.2.20251130', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-02-08 03:24:59.217823 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-02-08 03:24:59.217843 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-02-08 03:24:59.217856 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-08 03:24:59.217872 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-02-08 03:24:59.217885 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-02-08 03:24:59.217896 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-02-08 03:24:59.217916 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.2.20251130', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-02-08 03:25:10.264602 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 
'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-02-08 03:25:10.264701 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-02-08 03:25:10.264713 | orchestrator | skipping: [testbed-node-2] 2026-02-08 03:25:10.264723 | orchestrator | 2026-02-08 03:25:10.264733 | orchestrator | TASK [haproxy-config : Configuring firewall for neutron] *********************** 2026-02-08 03:25:10.264742 | orchestrator | Sunday 08 February 2026 03:24:59 +0000 (0:00:01.685) 0:03:15.086 ******* 2026-02-08 03:25:10.264751 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2026-02-08 03:25:10.264775 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2026-02-08 03:25:10.264785 | orchestrator | skipping: [testbed-node-0] 2026-02-08 03:25:10.264794 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}}) 
 2026-02-08 03:25:10.264806 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2026-02-08 03:25:10.264818 | orchestrator | skipping: [testbed-node-1] 2026-02-08 03:25:10.264830 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2026-02-08 03:25:10.264842 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2026-02-08 03:25:10.264853 | orchestrator | skipping: [testbed-node-2] 2026-02-08 03:25:10.264864 | orchestrator | 2026-02-08 03:25:10.264876 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL users config] ************ 2026-02-08 03:25:10.264888 | orchestrator | Sunday 08 February 2026 03:25:01 +0000 (0:00:01.975) 0:03:17.062 ******* 2026-02-08 03:25:10.264898 | orchestrator | changed: [testbed-node-0] 2026-02-08 03:25:10.264910 | orchestrator | changed: [testbed-node-1] 2026-02-08 03:25:10.264921 | orchestrator | changed: [testbed-node-2] 2026-02-08 03:25:10.264933 | orchestrator | 2026-02-08 03:25:10.264981 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL rules config] ************ 2026-02-08 03:25:10.265018 | orchestrator | Sunday 08 February 2026 03:25:02 +0000 (0:00:01.322) 0:03:18.384 ******* 2026-02-08 03:25:10.265030 | orchestrator | changed: [testbed-node-0] 2026-02-08 03:25:10.265042 | orchestrator | changed: [testbed-node-1] 2026-02-08 03:25:10.265054 | orchestrator | changed: [testbed-node-2] 2026-02-08 03:25:10.265066 | orchestrator | 2026-02-08 03:25:10.265080 | orchestrator | TASK [include_role : placement] 
************************************************ 2026-02-08 03:25:10.265119 | orchestrator | Sunday 08 February 2026 03:25:04 +0000 (0:00:02.325) 0:03:20.710 ******* 2026-02-08 03:25:10.265131 | orchestrator | included: placement for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-08 03:25:10.265143 | orchestrator | 2026-02-08 03:25:10.265156 | orchestrator | TASK [haproxy-config : Copying over placement haproxy config] ****************** 2026-02-08 03:25:10.265180 | orchestrator | Sunday 08 February 2026 03:25:06 +0000 (0:00:01.356) 0:03:22.067 ******* 2026-02-08 03:25:10.265191 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-02-08 03:25:10.265203 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-02-08 03:25:10.265226 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-02-08 03:25:10.265235 | orchestrator | 2026-02-08 03:25:10.265244 | orchestrator | TASK [haproxy-config : Add configuration for placement when using single external frontend] *** 2026-02-08 03:25:10.265253 | orchestrator | Sunday 08 February 2026 03:25:09 +0000 (0:00:03.490) 0:03:25.557 ******* 2026-02-08 03:25:10.265269 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': 
['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-02-08 03:25:10.265278 | orchestrator | skipping: [testbed-node-0] 2026-02-08 03:25:10.265293 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-02-08 03:25:20.565319 | orchestrator | skipping: [testbed-node-1] 2026-02-08 03:25:20.565512 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 
'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-02-08 03:25:20.565535 | orchestrator | skipping: [testbed-node-2] 2026-02-08 03:25:20.565546 | orchestrator | 2026-02-08 03:25:20.565558 | orchestrator | TASK [haproxy-config : Configuring firewall for placement] ********************* 2026-02-08 03:25:20.565570 | orchestrator | Sunday 08 February 2026 03:25:10 +0000 (0:00:00.576) 0:03:26.134 ******* 2026-02-08 03:25:20.565616 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-02-08 03:25:20.565642 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-02-08 03:25:20.565663 | orchestrator | skipping: [testbed-node-0] 2026-02-08 03:25:20.565682 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-02-08 03:25:20.565735 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-02-08 03:25:20.565759 | orchestrator | skipping: [testbed-node-1] 2026-02-08 03:25:20.565779 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-02-08 03:25:20.565792 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-02-08 03:25:20.565804 | orchestrator | skipping: [testbed-node-2] 2026-02-08 03:25:20.565815 | orchestrator | 2026-02-08 03:25:20.565826 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL users config] ********** 2026-02-08 03:25:20.565838 | orchestrator | Sunday 08 February 2026 03:25:11 +0000 (0:00:00.834) 0:03:26.968 ******* 2026-02-08 03:25:20.565849 | orchestrator | changed: [testbed-node-0] 2026-02-08 03:25:20.565861 | orchestrator | changed: [testbed-node-1] 2026-02-08 03:25:20.565872 | orchestrator | changed: [testbed-node-2] 2026-02-08 03:25:20.565883 | orchestrator | 2026-02-08 03:25:20.565895 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL rules config] ********** 2026-02-08 03:25:20.565907 | orchestrator | Sunday 08 February 2026 03:25:12 +0000 (0:00:01.881) 0:03:28.850 ******* 2026-02-08 03:25:20.565917 | orchestrator | changed: [testbed-node-0] 2026-02-08 03:25:20.565928 | orchestrator | changed: [testbed-node-1] 2026-02-08 03:25:20.565940 | orchestrator | changed: [testbed-node-2] 2026-02-08 03:25:20.565951 | orchestrator | 2026-02-08 03:25:20.565962 | orchestrator | TASK [include_role : nova] 
***************************************************** 2026-02-08 03:25:20.565974 | orchestrator | Sunday 08 February 2026 03:25:14 +0000 (0:00:01.809) 0:03:30.660 ******* 2026-02-08 03:25:20.565984 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-08 03:25:20.565994 | orchestrator | 2026-02-08 03:25:20.566003 | orchestrator | TASK [haproxy-config : Copying over nova haproxy config] *********************** 2026-02-08 03:25:20.566079 | orchestrator | Sunday 08 February 2026 03:25:16 +0000 (0:00:01.583) 0:03:32.244 ******* 2026-02-08 03:25:20.566142 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-02-08 03:25:20.566166 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 
'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-08 03:25:20.566187 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-02-08 03:25:20.566199 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-02-08 03:25:20.566211 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-08 03:25:20.566230 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-02-08 03:25:21.859282 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 
'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-02-08 03:25:21.859404 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-08 03:25:21.859417 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.2.1.20251130', 'volumes': 
['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-02-08 03:25:21.859428 | orchestrator | 2026-02-08 03:25:21.859439 | orchestrator | TASK [haproxy-config : Add configuration for nova when using single external frontend] *** 2026-02-08 03:25:21.859449 | orchestrator | Sunday 08 February 2026 03:25:20 +0000 (0:00:04.190) 0:03:36.434 ******* 2026-02-08 03:25:21.859460 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-02-08 03:25:21.859488 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-08 03:25:21.859504 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-02-08 03:25:21.859520 | orchestrator | skipping: [testbed-node-0] 2026-02-08 03:25:21.859531 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': 
True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-02-08 03:25:21.859540 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-08 03:25:21.859550 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-02-08 03:25:21.859559 | orchestrator | skipping: [testbed-node-1] 2026-02-08 03:25:21.859575 | orchestrator | 
skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-02-08 03:25:34.514856 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-08 03:25:34.514976 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': 
{'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-02-08 03:25:34.514995 | orchestrator | skipping: [testbed-node-2] 2026-02-08 03:25:34.515010 | orchestrator | 2026-02-08 03:25:34.515023 | orchestrator | TASK [haproxy-config : Configuring firewall for nova] ************************** 2026-02-08 03:25:34.515035 | orchestrator | Sunday 08 February 2026 03:25:21 +0000 (0:00:01.288) 0:03:37.722 ******* 2026-02-08 03:25:34.515048 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-02-08 03:25:34.515064 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-02-08 03:25:34.515077 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-02-08 03:25:34.515143 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-02-08 03:25:34.515157 | orchestrator | skipping: [testbed-node-0] 2026-02-08 
03:25:34.515169 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-02-08 03:25:34.515180 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-02-08 03:25:34.515192 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-02-08 03:25:34.515203 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-02-08 03:25:34.515237 | orchestrator | skipping: [testbed-node-1] 2026-02-08 03:25:34.515249 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-02-08 03:25:34.515260 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-02-08 03:25:34.515271 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-02-08 03:25:34.515300 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-02-08 03:25:34.515312 | orchestrator | skipping: [testbed-node-2] 2026-02-08 03:25:34.515411 | orchestrator | 2026-02-08 03:25:34.515425 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL users config] *************** 2026-02-08 03:25:34.515438 | orchestrator | Sunday 08 February 2026 03:25:22 +0000 (0:00:00.946) 0:03:38.669 ******* 2026-02-08 03:25:34.515460 | orchestrator | changed: [testbed-node-0] 2026-02-08 03:25:34.515479 | orchestrator | changed: [testbed-node-1] 2026-02-08 03:25:34.515497 | orchestrator | changed: [testbed-node-2] 2026-02-08 03:25:34.515522 | orchestrator | 2026-02-08 03:25:34.515550 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL rules config] *************** 2026-02-08 03:25:34.515568 | orchestrator | Sunday 08 February 2026 03:25:24 +0000 (0:00:01.414) 0:03:40.083 ******* 2026-02-08 03:25:34.515587 | orchestrator | changed: [testbed-node-0] 2026-02-08 03:25:34.515605 | orchestrator | changed: [testbed-node-1] 2026-02-08 03:25:34.515621 | orchestrator | changed: [testbed-node-2] 2026-02-08 03:25:34.515639 | orchestrator | 2026-02-08 03:25:34.515657 | orchestrator | TASK [include_role : nova-cell] ************************************************ 2026-02-08 03:25:34.515676 | orchestrator | Sunday 08 February 2026 03:25:26 +0000 (0:00:02.210) 0:03:42.294 ******* 2026-02-08 03:25:34.515695 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-08 03:25:34.515745 | orchestrator | 2026-02-08 03:25:34.515764 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-novncproxy] ****************** 2026-02-08 03:25:34.515782 | orchestrator | Sunday 08 February 2026 03:25:28 +0000 (0:00:01.684) 0:03:43.978 ******* 2026-02-08 03:25:34.515800 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, 
testbed-node-2 => (item=nova-novncproxy) 2026-02-08 03:25:34.515820 | orchestrator | 2026-02-08 03:25:34.515838 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config] *** 2026-02-08 03:25:34.515856 | orchestrator | Sunday 08 February 2026 03:25:28 +0000 (0:00:00.897) 0:03:44.876 ******* 2026-02-08 03:25:34.515877 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2026-02-08 03:25:34.515902 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2026-02-08 03:25:34.515929 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2026-02-08 03:25:34.515941 | orchestrator | 
2026-02-08 03:25:34.515953 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-novncproxy when using single external frontend] *** 2026-02-08 03:25:34.515965 | orchestrator | Sunday 08 February 2026 03:25:33 +0000 (0:00:04.086) 0:03:48.962 ******* 2026-02-08 03:25:34.515976 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-02-08 03:25:34.515988 | orchestrator | skipping: [testbed-node-0] 2026-02-08 03:25:34.516014 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-02-08 03:25:52.978576 | orchestrator | skipping: [testbed-node-1] 2026-02-08 03:25:52.978777 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 
'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-02-08 03:25:52.978814 | orchestrator | skipping: [testbed-node-2] 2026-02-08 03:25:52.978833 | orchestrator | 2026-02-08 03:25:52.978851 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-novncproxy] ***** 2026-02-08 03:25:52.978869 | orchestrator | Sunday 08 February 2026 03:25:34 +0000 (0:00:01.419) 0:03:50.381 ******* 2026-02-08 03:25:52.978888 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-02-08 03:25:52.978910 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-02-08 03:25:52.978927 | orchestrator | skipping: [testbed-node-0] 2026-02-08 03:25:52.978944 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-02-08 03:25:52.978962 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-02-08 03:25:52.979015 | orchestrator | skipping: [testbed-node-1] 2026-02-08 03:25:52.979031 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-02-08 03:25:52.979048 | orchestrator | skipping: [testbed-node-2] 
=> (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-02-08 03:25:52.979059 | orchestrator | skipping: [testbed-node-2] 2026-02-08 03:25:52.979071 | orchestrator | 2026-02-08 03:25:52.979118 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2026-02-08 03:25:52.979138 | orchestrator | Sunday 08 February 2026 03:25:36 +0000 (0:00:01.518) 0:03:51.900 ******* 2026-02-08 03:25:52.979154 | orchestrator | changed: [testbed-node-0] 2026-02-08 03:25:52.979169 | orchestrator | changed: [testbed-node-1] 2026-02-08 03:25:52.979185 | orchestrator | changed: [testbed-node-2] 2026-02-08 03:25:52.979200 | orchestrator | 2026-02-08 03:25:52.979216 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2026-02-08 03:25:52.979232 | orchestrator | Sunday 08 February 2026 03:25:38 +0000 (0:00:02.418) 0:03:54.319 ******* 2026-02-08 03:25:52.979248 | orchestrator | changed: [testbed-node-0] 2026-02-08 03:25:52.979264 | orchestrator | changed: [testbed-node-1] 2026-02-08 03:25:52.979281 | orchestrator | changed: [testbed-node-2] 2026-02-08 03:25:52.979298 | orchestrator | 2026-02-08 03:25:52.979315 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-spicehtml5proxy] ************* 2026-02-08 03:25:52.979332 | orchestrator | Sunday 08 February 2026 03:25:41 +0000 (0:00:02.824) 0:03:57.143 ******* 2026-02-08 03:25:52.979350 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-spicehtml5proxy) 2026-02-08 03:25:52.979363 | orchestrator | 2026-02-08 03:25:52.979373 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-spicehtml5proxy haproxy config] *** 2026-02-08 03:25:52.979383 | orchestrator | 
Sunday 08 February 2026 03:25:42 +0000 (0:00:01.072) 0:03:58.216 ******* 2026-02-08 03:25:52.979396 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-02-08 03:25:52.979407 | orchestrator | skipping: [testbed-node-0] 2026-02-08 03:25:52.979452 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-02-08 03:25:52.979466 | orchestrator | skipping: [testbed-node-1] 2026-02-08 03:25:52.979484 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-02-08 03:25:52.979515 | orchestrator | skipping: [testbed-node-2] 2026-02-08 
03:25:52.979531 | orchestrator | 2026-02-08 03:25:52.979547 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-spicehtml5proxy when using single external frontend] *** 2026-02-08 03:25:52.979562 | orchestrator | Sunday 08 February 2026 03:25:43 +0000 (0:00:01.054) 0:03:59.270 ******* 2026-02-08 03:25:52.979577 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-02-08 03:25:52.979593 | orchestrator | skipping: [testbed-node-0] 2026-02-08 03:25:52.979609 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-02-08 03:25:52.979625 | orchestrator | skipping: [testbed-node-1] 2026-02-08 03:25:52.979641 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-02-08 03:25:52.979657 | orchestrator | skipping: [testbed-node-2] 2026-02-08 03:25:52.979672 | orchestrator | 2026-02-08 03:25:52.979686 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-spicehtml5proxy] *** 2026-02-08 03:25:52.979701 | orchestrator | Sunday 08 February 2026 03:25:44 +0000 (0:00:01.369) 0:04:00.639 ******* 2026-02-08 03:25:52.979716 | orchestrator | skipping: [testbed-node-0] 2026-02-08 03:25:52.979730 | orchestrator | skipping: [testbed-node-1] 2026-02-08 03:25:52.979745 | orchestrator | skipping: [testbed-node-2] 2026-02-08 03:25:52.979759 | orchestrator | 2026-02-08 03:25:52.979773 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2026-02-08 03:25:52.979791 | orchestrator | Sunday 08 February 2026 03:25:46 +0000 (0:00:01.573) 0:04:02.213 ******* 2026-02-08 03:25:52.979807 | orchestrator | ok: [testbed-node-0] 2026-02-08 03:25:52.979825 | orchestrator | ok: [testbed-node-1] 2026-02-08 03:25:52.979842 | orchestrator | ok: [testbed-node-2] 2026-02-08 03:25:52.979858 | orchestrator | 2026-02-08 03:25:52.979874 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2026-02-08 03:25:52.979889 | orchestrator | Sunday 08 February 2026 03:25:49 +0000 (0:00:02.827) 0:04:05.040 ******* 2026-02-08 03:25:52.979905 | orchestrator | ok: [testbed-node-0] 2026-02-08 03:25:52.979919 | orchestrator | ok: [testbed-node-1] 2026-02-08 03:25:52.979936 | orchestrator | ok: [testbed-node-2] 2026-02-08 03:25:52.979951 | orchestrator | 2026-02-08 03:25:52.979967 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-serialproxy] ***************** 2026-02-08 03:25:52.979983 | orchestrator | Sunday 08 February 2026 03:25:51 +0000 (0:00:02.608) 0:04:07.649 ******* 2026-02-08 03:25:52.980000 | 
orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-serialproxy) 2026-02-08 03:25:52.980017 | orchestrator | 2026-02-08 03:25:52.980060 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-serialproxy haproxy config] *** 2026-02-08 03:26:08.222788 | orchestrator | Sunday 08 February 2026 03:25:52 +0000 (0:00:01.188) 0:04:08.837 ******* 2026-02-08 03:26:08.222903 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-02-08 03:26:08.222915 | orchestrator | skipping: [testbed-node-0] 2026-02-08 03:26:08.222922 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-02-08 03:26:08.222929 | orchestrator | skipping: [testbed-node-1] 2026-02-08 03:26:08.222935 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': 
'6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-02-08 03:26:08.222941 | orchestrator | skipping: [testbed-node-2] 2026-02-08 03:26:08.222948 | orchestrator | 2026-02-08 03:26:08.222956 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-serialproxy when using single external frontend] *** 2026-02-08 03:26:08.222963 | orchestrator | Sunday 08 February 2026 03:25:54 +0000 (0:00:01.296) 0:04:10.134 ******* 2026-02-08 03:26:08.222970 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-02-08 03:26:08.222976 | orchestrator | skipping: [testbed-node-0] 2026-02-08 03:26:08.222982 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-02-08 03:26:08.222988 | orchestrator | skipping: [testbed-node-1] 2026-02-08 03:26:08.222994 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-02-08 03:26:08.223020 | orchestrator | skipping: [testbed-node-2] 2026-02-08 03:26:08.223028 | orchestrator | 2026-02-08 03:26:08.223035 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-serialproxy] **** 2026-02-08 03:26:08.223041 | orchestrator | Sunday 08 February 2026 03:25:55 +0000 (0:00:01.345) 0:04:11.479 ******* 2026-02-08 03:26:08.223047 | orchestrator | skipping: [testbed-node-0] 2026-02-08 03:26:08.223054 | orchestrator | skipping: [testbed-node-1] 2026-02-08 03:26:08.223061 | orchestrator | skipping: [testbed-node-2] 2026-02-08 03:26:08.223067 | orchestrator | 2026-02-08 03:26:08.223074 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2026-02-08 03:26:08.223123 | orchestrator | Sunday 08 February 2026 03:25:57 +0000 (0:00:01.925) 0:04:13.405 ******* 2026-02-08 03:26:08.223130 | orchestrator | ok: [testbed-node-0] 2026-02-08 03:26:08.223137 | orchestrator | ok: [testbed-node-1] 2026-02-08 03:26:08.223143 | orchestrator | ok: [testbed-node-2] 2026-02-08 03:26:08.223148 | orchestrator | 2026-02-08 03:26:08.223158 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2026-02-08 03:26:08.223164 | orchestrator | Sunday 08 February 2026 03:26:00 +0000 (0:00:02.488) 0:04:15.894 ******* 2026-02-08 03:26:08.223170 | orchestrator | ok: [testbed-node-0] 2026-02-08 03:26:08.223176 | orchestrator | ok: 
[testbed-node-1] 2026-02-08 03:26:08.223182 | orchestrator | ok: [testbed-node-2] 2026-02-08 03:26:08.223188 | orchestrator | 2026-02-08 03:26:08.223193 | orchestrator | TASK [include_role : octavia] ************************************************** 2026-02-08 03:26:08.223200 | orchestrator | Sunday 08 February 2026 03:26:03 +0000 (0:00:03.173) 0:04:19.067 ******* 2026-02-08 03:26:08.223206 | orchestrator | included: octavia for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-08 03:26:08.223212 | orchestrator | 2026-02-08 03:26:08.223218 | orchestrator | TASK [haproxy-config : Copying over octavia haproxy config] ******************** 2026-02-08 03:26:08.223225 | orchestrator | Sunday 08 February 2026 03:26:04 +0000 (0:00:01.678) 0:04:20.746 ******* 2026-02-08 03:26:08.223232 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-08 03:26:08.223239 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-02-08 03:26:08.223247 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-02-08 03:26:08.223261 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-02-08 03:26:08.223273 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': 
['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-02-08 03:26:09.148021 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-08 03:26:09.148217 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-02-08 03:26:09.148235 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': 
{'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-02-08 03:26:09.148248 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-02-08 03:26:09.148291 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-02-08 03:26:09.148326 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 
'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-08 03:26:09.148339 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-02-08 03:26:09.148351 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-02-08 03:26:09.148362 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-02-08 03:26:09.148373 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-02-08 03:26:09.148393 | orchestrator | 2026-02-08 03:26:09.148406 | orchestrator | TASK [haproxy-config : Add configuration for octavia when using single external frontend] *** 2026-02-08 03:26:09.148418 | orchestrator | Sunday 08 February 2026 03:26:08 +0000 (0:00:03.486) 0:04:24.232 ******* 2026-02-08 03:26:09.148434 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-02-08 03:26:09.148510 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-02-08 03:26:09.297113 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': 
['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-02-08 03:26:09.297246 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-02-08 03:26:09.297263 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-02-08 03:26:09.297305 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-02-08 03:26:09.297316 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-02-08 03:26:09.297328 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-02-08 03:26:09.297373 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-02-08 03:26:09.297387 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-02-08 03:26:09.297398 | orchestrator | skipping: [testbed-node-1] 2026-02-08 03:26:09.297410 | orchestrator | skipping: [testbed-node-0] 2026-02-08 03:26:09.297421 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-02-08 03:26:09.297440 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': 
{'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-02-08 03:26:09.297451 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-02-08 03:26:09.297461 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-02-08 03:26:09.297476 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-02-08 03:26:20.964813 | orchestrator | skipping: [testbed-node-2] 2026-02-08 03:26:20.964936 | orchestrator | 2026-02-08 03:26:20.964957 | orchestrator | TASK [haproxy-config : Configuring firewall for octavia] *********************** 2026-02-08 03:26:20.964974 | orchestrator | Sunday 08 February 2026 03:26:09 +0000 (0:00:00.934) 0:04:25.167 ******* 2026-02-08 03:26:20.964989 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-02-08 03:26:20.965006 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-02-08 03:26:20.965021 | orchestrator | skipping: [testbed-node-0] 2026-02-08 03:26:20.965034 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-02-08 03:26:20.965046 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-02-08 03:26:20.965113 | orchestrator | skipping: [testbed-node-1] 2026-02-08 03:26:20.965131 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-02-08 03:26:20.965147 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-02-08 03:26:20.965162 | orchestrator | skipping: [testbed-node-2] 2026-02-08 03:26:20.965177 | orchestrator | 2026-02-08 03:26:20.965191 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL users config] ************ 2026-02-08 03:26:20.965207 | orchestrator | Sunday 08 February 2026 03:26:10 +0000 (0:00:00.966) 0:04:26.134 ******* 2026-02-08 03:26:20.965222 | orchestrator | changed: [testbed-node-0] 2026-02-08 03:26:20.965237 | orchestrator | changed: [testbed-node-1] 2026-02-08 03:26:20.965251 | orchestrator | changed: [testbed-node-2] 2026-02-08 03:26:20.965265 | orchestrator | 2026-02-08 03:26:20.965280 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL rules config] ************ 2026-02-08 03:26:20.965294 | orchestrator | Sunday 08 February 2026 03:26:12 +0000 (0:00:01.841) 0:04:27.975 ******* 2026-02-08 03:26:20.965309 | orchestrator | changed: [testbed-node-0] 2026-02-08 03:26:20.965323 | orchestrator | changed: [testbed-node-1] 2026-02-08 03:26:20.965337 | orchestrator | changed: [testbed-node-2] 2026-02-08 03:26:20.965349 | orchestrator | 2026-02-08 03:26:20.965363 | orchestrator | TASK [include_role : opensearch] *********************************************** 2026-02-08 03:26:20.965377 | orchestrator | Sunday 08 February 2026 03:26:14 +0000 (0:00:02.120) 0:04:30.096 ******* 2026-02-08 03:26:20.965392 | orchestrator | included: opensearch for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-08 03:26:20.965408 | orchestrator | 2026-02-08 03:26:20.965423 | 
orchestrator | TASK [haproxy-config : Copying over opensearch haproxy config] ***************** 2026-02-08 03:26:20.965438 | orchestrator | Sunday 08 February 2026 03:26:15 +0000 (0:00:01.366) 0:04:31.462 ******* 2026-02-08 03:26:20.965457 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-02-08 03:26:20.965510 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-02-08 03:26:20.965539 | 
orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-02-08 03:26:20.965556 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-02-08 03:26:20.965573 | orchestrator 
| changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-02-08 03:26:20.965605 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': 
True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-02-08 03:26:22.977209 | orchestrator | 2026-02-08 03:26:22.977280 | orchestrator | TASK [haproxy-config : Add configuration for opensearch when using single external frontend] *** 2026-02-08 03:26:22.977289 | orchestrator | Sunday 08 February 2026 03:26:20 +0000 (0:00:05.365) 0:04:36.827 ******* 2026-02-08 03:26:22.977297 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-02-08 03:26:22.977306 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-02-08 03:26:22.977313 | orchestrator | skipping: [testbed-node-0] 2026-02-08 03:26:22.977320 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-02-08 03:26:22.977341 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-02-08 03:26:22.977373 | orchestrator | skipping: [testbed-node-1] 2026-02-08 03:26:22.977380 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-02-08 03:26:22.977386 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-02-08 03:26:22.977391 | orchestrator | skipping: [testbed-node-2] 2026-02-08 03:26:22.977396 | orchestrator | 2026-02-08 03:26:22.977402 | orchestrator | TASK [haproxy-config : Configuring firewall for opensearch] ******************** 2026-02-08 03:26:22.977407 | orchestrator | Sunday 08 February 2026 03:26:21 +0000 (0:00:01.054) 0:04:37.881 ******* 2026-02-08 03:26:22.977413 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2026-02-08 03:26:22.977420 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-02-08 03:26:22.977428 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-02-08 03:26:22.977435 | orchestrator | skipping: [testbed-node-0] 2026-02-08 03:26:22.977440 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 
'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2026-02-08 03:26:22.977445 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-02-08 03:26:22.977453 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-02-08 03:26:22.977463 | orchestrator | skipping: [testbed-node-1] 2026-02-08 03:26:22.977469 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2026-02-08 03:26:22.977474 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-02-08 03:26:22.977488 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-02-08 03:26:29.408893 | orchestrator | skipping: [testbed-node-2] 2026-02-08 03:26:29.408992 | orchestrator | 2026-02-08 03:26:29.409011 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL users config] ********* 2026-02-08 03:26:29.409019 | orchestrator | Sunday 08 February 2026 03:26:22 +0000 (0:00:00.965) 0:04:38.847 ******* 2026-02-08 03:26:29.409024 | orchestrator | skipping: [testbed-node-0] 2026-02-08 03:26:29.409030 | orchestrator | 
skipping: [testbed-node-1] 2026-02-08 03:26:29.409035 | orchestrator | skipping: [testbed-node-2] 2026-02-08 03:26:29.409041 | orchestrator | 2026-02-08 03:26:29.409046 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL rules config] ********* 2026-02-08 03:26:29.409052 | orchestrator | Sunday 08 February 2026 03:26:23 +0000 (0:00:00.469) 0:04:39.316 ******* 2026-02-08 03:26:29.409060 | orchestrator | skipping: [testbed-node-0] 2026-02-08 03:26:29.409069 | orchestrator | skipping: [testbed-node-1] 2026-02-08 03:26:29.409077 | orchestrator | skipping: [testbed-node-2] 2026-02-08 03:26:29.409123 | orchestrator | 2026-02-08 03:26:29.409132 | orchestrator | TASK [include_role : prometheus] *********************************************** 2026-02-08 03:26:29.409140 | orchestrator | Sunday 08 February 2026 03:26:24 +0000 (0:00:01.484) 0:04:40.801 ******* 2026-02-08 03:26:29.409149 | orchestrator | included: prometheus for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-08 03:26:29.409158 | orchestrator | 2026-02-08 03:26:29.409166 | orchestrator | TASK [haproxy-config : Copying over prometheus haproxy config] ***************** 2026-02-08 03:26:29.409175 | orchestrator | Sunday 08 February 2026 03:26:26 +0000 (0:00:01.774) 0:04:42.575 ******* 2026-02-08 03:26:29.409187 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-02-08 03:26:29.409199 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-08 03:26:29.409209 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-08 03:26:29.409258 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-08 03:26:29.409268 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-08 03:26:29.409293 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-02-08 03:26:29.409302 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-08 03:26:29.409311 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 
'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-08 03:26:29.409320 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-08 03:26:29.409328 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-08 03:26:29.409349 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-02-08 03:26:29.409358 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-08 03:26:29.409375 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-08 03:26:31.023145 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-08 03:26:31.023244 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-08 03:26:31.023265 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-02-08 03:26:31.023340 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 
'registry.osism.tech/kolla/release/prometheus-openstack-exporter:1.7.0.20251130', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-02-08 03:26:31.023388 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-08 03:26:31.023424 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-08 03:26:31.023469 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-02-08 03:26:31.023487 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-02-08 03:26:31.023512 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:1.7.0.20251130', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 
'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-02-08 03:26:31.023523 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-08 03:26:31.023541 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-08 03:26:31.023552 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-02-08 03:26:31.023574 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 
'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-02-08 03:26:31.716040 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:1.7.0.20251130', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-02-08 03:26:31.716315 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 
'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-08 03:26:31.716351 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-08 03:26:31.716384 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-02-08 03:26:31.716397 | orchestrator | 2026-02-08 03:26:31.716411 | orchestrator | TASK [haproxy-config : Add configuration for prometheus when using single external frontend] *** 2026-02-08 03:26:31.716423 | orchestrator | Sunday 08 February 2026 03:26:31 +0000 (0:00:04.467) 0:04:47.043 ******* 2026-02-08 03:26:31.716436 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2026-02-08 03:26:31.716450 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-08 03:26:31.716482 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-08 03:26:31.716504 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-08 03:26:31.716516 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-08 03:26:31.716536 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2026-02-08 03:26:31.716550 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:1.7.0.20251130', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-02-08 03:26:31.716570 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2026-02-08 03:26:31.860907 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-08 03:26:31.861047 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-08 03:26:31.861066 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-08 03:26:31.861078 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-08 03:26:31.861208 | orchestrator | skipping: [testbed-node-0] 
=> (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-02-08 03:26:31.861222 | orchestrator | skipping: [testbed-node-0] 2026-02-08 03:26:31.861236 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-08 03:26:31.861249 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-08 03:26:31.861283 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2026-02-08 03:26:31.861308 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:1.7.0.20251130', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-02-08 03:26:31.861320 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-08 03:26:31.861337 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2026-02-08 03:26:31.861350 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-08 03:26:31.861362 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 
'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-08 03:26:31.861387 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-02-08 03:26:33.438383 | orchestrator | skipping: [testbed-node-1] 2026-02-08 03:26:33.438512 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-08 03:26:33.438547 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-08 03:26:33.438568 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-08 03:26:33.438602 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2026-02-08 03:26:33.438618 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:1.7.0.20251130', 'volumes': 
['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-02-08 03:26:33.438656 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-08 03:26:33.438692 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-08 03:26:33.438704 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': 
['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-02-08 03:26:33.438716 | orchestrator | skipping: [testbed-node-2] 2026-02-08 03:26:33.438727 | orchestrator | 2026-02-08 03:26:33.438739 | orchestrator | TASK [haproxy-config : Configuring firewall for prometheus] ******************** 2026-02-08 03:26:33.438751 | orchestrator | Sunday 08 February 2026 03:26:32 +0000 (0:00:00.848) 0:04:47.891 ******* 2026-02-08 03:26:33.438763 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2026-02-08 03:26:33.438778 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2026-02-08 03:26:33.438798 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-02-08 03:26:33.438813 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-02-08 03:26:33.438826 | orchestrator | skipping: [testbed-node-0] 2026-02-08 03:26:33.438837 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 
'active_passive': True}})  2026-02-08 03:26:33.438848 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2026-02-08 03:26:33.438861 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-02-08 03:26:33.438883 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-02-08 03:26:33.438896 | orchestrator | skipping: [testbed-node-1] 2026-02-08 03:26:33.438909 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2026-02-08 03:26:33.438922 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2026-02-08 03:26:33.438936 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-02-08 03:26:33.438957 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager_external', 'value': 
{'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-02-08 03:26:41.222929 | orchestrator | skipping: [testbed-node-2] 2026-02-08 03:26:41.223051 | orchestrator | 2026-02-08 03:26:41.223073 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL users config] ********* 2026-02-08 03:26:41.223125 | orchestrator | Sunday 08 February 2026 03:26:33 +0000 (0:00:01.409) 0:04:49.300 ******* 2026-02-08 03:26:41.223146 | orchestrator | skipping: [testbed-node-0] 2026-02-08 03:26:41.223165 | orchestrator | skipping: [testbed-node-1] 2026-02-08 03:26:41.223184 | orchestrator | skipping: [testbed-node-2] 2026-02-08 03:26:41.223203 | orchestrator | 2026-02-08 03:26:41.223222 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL rules config] ********* 2026-02-08 03:26:41.223240 | orchestrator | Sunday 08 February 2026 03:26:33 +0000 (0:00:00.436) 0:04:49.736 ******* 2026-02-08 03:26:41.223258 | orchestrator | skipping: [testbed-node-0] 2026-02-08 03:26:41.223277 | orchestrator | skipping: [testbed-node-1] 2026-02-08 03:26:41.223296 | orchestrator | skipping: [testbed-node-2] 2026-02-08 03:26:41.223314 | orchestrator | 2026-02-08 03:26:41.223334 | orchestrator | TASK [include_role : rabbitmq] ************************************************* 2026-02-08 03:26:41.223352 | orchestrator | Sunday 08 February 2026 03:26:35 +0000 (0:00:01.439) 0:04:51.176 ******* 2026-02-08 03:26:41.223370 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-08 03:26:41.223388 | orchestrator | 2026-02-08 03:26:41.223407 | orchestrator | TASK [haproxy-config : Copying over rabbitmq haproxy config] ******************* 2026-02-08 03:26:41.223427 | orchestrator | Sunday 08 February 2026 03:26:37 +0000 (0:00:01.774) 0:04:52.950 ******* 
2026-02-08 03:26:41.223471 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-08 03:26:41.223529 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 
'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-08 03:26:41.223549 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-08 03:26:41.223568 | orchestrator | 2026-02-08 03:26:41.223586 | orchestrator | TASK [haproxy-config : Add configuration for rabbitmq when using single external frontend] *** 2026-02-08 03:26:41.223625 | orchestrator | Sunday 08 February 2026 03:26:39 +0000 (0:00:02.191) 0:04:55.142 ******* 2026-02-08 03:26:41.223644 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-02-08 03:26:41.223664 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-02-08 03:26:41.223737 | orchestrator | skipping: [testbed-node-0] 2026-02-08 03:26:41.223761 | orchestrator | skipping: [testbed-node-1] 2026-02-08 03:26:41.223783 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-02-08 03:26:41.223807 | orchestrator | skipping: [testbed-node-2] 2026-02-08 03:26:41.223827 | orchestrator | 2026-02-08 03:26:41.223845 | orchestrator | TASK [haproxy-config : Configuring firewall for rabbitmq] ********************** 2026-02-08 03:26:41.223856 | orchestrator | Sunday 08 February 2026 03:26:39 +0000 (0:00:00.419) 0:04:55.561 ******* 2026-02-08 03:26:41.223868 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2026-02-08 03:26:41.223881 | orchestrator | skipping: [testbed-node-0] 2026-02-08 03:26:41.223892 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2026-02-08 03:26:41.223902 | orchestrator | skipping: [testbed-node-1] 2026-02-08 03:26:41.223913 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2026-02-08 03:26:41.223924 | orchestrator | skipping: [testbed-node-2] 2026-02-08 03:26:41.223934 | orchestrator | 2026-02-08 03:26:41.223945 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL users config] *********** 2026-02-08 03:26:41.223956 | orchestrator | Sunday 08 February 
2026 03:26:40 +0000 (0:00:00.981) 0:04:56.542 ******* 2026-02-08 03:26:41.223976 | orchestrator | skipping: [testbed-node-0] 2026-02-08 03:26:51.411813 | orchestrator | skipping: [testbed-node-1] 2026-02-08 03:26:51.411909 | orchestrator | skipping: [testbed-node-2] 2026-02-08 03:26:51.411927 | orchestrator | 2026-02-08 03:26:51.411943 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL rules config] *********** 2026-02-08 03:26:51.411958 | orchestrator | Sunday 08 February 2026 03:26:41 +0000 (0:00:00.556) 0:04:57.099 ******* 2026-02-08 03:26:51.411972 | orchestrator | skipping: [testbed-node-0] 2026-02-08 03:26:51.411984 | orchestrator | skipping: [testbed-node-1] 2026-02-08 03:26:51.411996 | orchestrator | skipping: [testbed-node-2] 2026-02-08 03:26:51.412008 | orchestrator | 2026-02-08 03:26:51.412020 | orchestrator | TASK [include_role : skyline] ************************************************** 2026-02-08 03:26:51.412031 | orchestrator | Sunday 08 February 2026 03:26:42 +0000 (0:00:01.469) 0:04:58.568 ******* 2026-02-08 03:26:51.412044 | orchestrator | included: skyline for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-08 03:26:51.412056 | orchestrator | 2026-02-08 03:26:51.412070 | orchestrator | TASK [haproxy-config : Copying over skyline haproxy config] ******************** 2026-02-08 03:26:51.412136 | orchestrator | Sunday 08 February 2026 03:26:44 +0000 (0:00:01.504) 0:05:00.073 ******* 2026-02-08 03:26:51.412169 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-02-08 03:26:51.412188 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-02-08 03:26:51.412200 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-02-08 03:26:51.412231 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-02-08 03:26:51.412245 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': 
'30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-02-08 03:26:51.412272 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-02-08 03:26:51.412286 | orchestrator | 2026-02-08 03:26:51.412298 | orchestrator | TASK [haproxy-config : Add configuration for skyline when using single external frontend] *** 2026-02-08 03:26:51.412311 | orchestrator | Sunday 08 February 2026 03:26:50 +0000 (0:00:06.490) 0:05:06.564 ******* 2026-02-08 03:26:51.412324 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-02-08 03:26:51.412346 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-02-08 03:26:57.276342 | orchestrator | skipping: [testbed-node-0] 2026-02-08 03:26:57.276479 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': 
['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-02-08 03:26:57.276511 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-02-08 03:26:57.276524 | orchestrator | skipping: [testbed-node-1] 2026-02-08 03:26:57.276536 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-02-08 03:26:57.276546 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-02-08 03:26:57.276556 | orchestrator | skipping: [testbed-node-2] 2026-02-08 03:26:57.276566 | orchestrator | 2026-02-08 03:26:57.276578 | orchestrator | TASK [haproxy-config : Configuring firewall for skyline] *********************** 2026-02-08 
03:26:57.276600 | orchestrator | Sunday 08 February 2026 03:26:51 +0000 (0:00:00.721) 0:05:07.285 ******* 2026-02-08 03:26:57.276630 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-02-08 03:26:57.276645 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-02-08 03:26:57.276657 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-02-08 03:26:57.276669 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-02-08 03:26:57.276680 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-02-08 03:26:57.276692 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-02-08 03:26:57.276709 | orchestrator | skipping: [testbed-node-0] 2026-02-08 03:26:57.276721 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-02-08 03:26:57.276732 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-02-08 03:26:57.276743 | orchestrator | skipping: [testbed-node-1] 2026-02-08 03:26:57.276755 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-02-08 03:26:57.276765 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-02-08 03:26:57.276777 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-02-08 03:26:57.276787 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-02-08 03:26:57.276799 | orchestrator | skipping: [testbed-node-2] 2026-02-08 03:26:57.276809 | orchestrator | 2026-02-08 03:26:57.276820 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL users config] ************ 2026-02-08 03:26:57.276831 | orchestrator | Sunday 08 February 2026 03:26:52 +0000 (0:00:00.961) 0:05:08.247 ******* 2026-02-08 03:26:57.276842 | orchestrator | changed: [testbed-node-0] 2026-02-08 03:26:57.276853 | orchestrator | changed: [testbed-node-1] 2026-02-08 03:26:57.276864 | orchestrator | changed: [testbed-node-2] 2026-02-08 03:26:57.276877 | orchestrator | 2026-02-08 03:26:57.276891 
| orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL rules config] ************ 2026-02-08 03:26:57.276903 | orchestrator | Sunday 08 February 2026 03:26:53 +0000 (0:00:01.295) 0:05:09.542 ******* 2026-02-08 03:26:57.276923 | orchestrator | changed: [testbed-node-0] 2026-02-08 03:26:57.276936 | orchestrator | changed: [testbed-node-1] 2026-02-08 03:26:57.276948 | orchestrator | changed: [testbed-node-2] 2026-02-08 03:26:57.276961 | orchestrator | 2026-02-08 03:26:57.276974 | orchestrator | TASK [include_role : swift] **************************************************** 2026-02-08 03:26:57.276987 | orchestrator | Sunday 08 February 2026 03:26:55 +0000 (0:00:02.243) 0:05:11.786 ******* 2026-02-08 03:26:57.276999 | orchestrator | skipping: [testbed-node-0] 2026-02-08 03:26:57.277012 | orchestrator | skipping: [testbed-node-1] 2026-02-08 03:26:57.277026 | orchestrator | skipping: [testbed-node-2] 2026-02-08 03:26:57.277038 | orchestrator | 2026-02-08 03:26:57.277051 | orchestrator | TASK [include_role : tacker] *************************************************** 2026-02-08 03:26:57.277063 | orchestrator | Sunday 08 February 2026 03:26:56 +0000 (0:00:00.699) 0:05:12.486 ******* 2026-02-08 03:26:57.277076 | orchestrator | skipping: [testbed-node-0] 2026-02-08 03:26:57.277119 | orchestrator | skipping: [testbed-node-1] 2026-02-08 03:26:57.277132 | orchestrator | skipping: [testbed-node-2] 2026-02-08 03:26:57.277144 | orchestrator | 2026-02-08 03:26:57.277157 | orchestrator | TASK [include_role : trove] **************************************************** 2026-02-08 03:26:57.277169 | orchestrator | Sunday 08 February 2026 03:26:56 +0000 (0:00:00.331) 0:05:12.817 ******* 2026-02-08 03:26:57.277181 | orchestrator | skipping: [testbed-node-0] 2026-02-08 03:26:57.277201 | orchestrator | skipping: [testbed-node-1] 2026-02-08 03:27:42.618717 | orchestrator | skipping: [testbed-node-2] 2026-02-08 03:27:42.618851 | orchestrator | 2026-02-08 03:27:42.618877 | 
orchestrator | TASK [include_role : venus] **************************************************** 2026-02-08 03:27:42.618896 | orchestrator | Sunday 08 February 2026 03:26:57 +0000 (0:00:00.336) 0:05:13.153 ******* 2026-02-08 03:27:42.618914 | orchestrator | skipping: [testbed-node-0] 2026-02-08 03:27:42.618931 | orchestrator | skipping: [testbed-node-1] 2026-02-08 03:27:42.618948 | orchestrator | skipping: [testbed-node-2] 2026-02-08 03:27:42.618964 | orchestrator | 2026-02-08 03:27:42.618981 | orchestrator | TASK [include_role : watcher] ************************************************** 2026-02-08 03:27:42.618998 | orchestrator | Sunday 08 February 2026 03:26:57 +0000 (0:00:00.358) 0:05:13.512 ******* 2026-02-08 03:27:42.619014 | orchestrator | skipping: [testbed-node-0] 2026-02-08 03:27:42.619032 | orchestrator | skipping: [testbed-node-1] 2026-02-08 03:27:42.619050 | orchestrator | skipping: [testbed-node-2] 2026-02-08 03:27:42.619068 | orchestrator | 2026-02-08 03:27:42.619086 | orchestrator | TASK [include_role : zun] ****************************************************** 2026-02-08 03:27:42.619103 | orchestrator | Sunday 08 February 2026 03:26:58 +0000 (0:00:00.707) 0:05:14.220 ******* 2026-02-08 03:27:42.619120 | orchestrator | skipping: [testbed-node-0] 2026-02-08 03:27:42.619138 | orchestrator | skipping: [testbed-node-1] 2026-02-08 03:27:42.619155 | orchestrator | skipping: [testbed-node-2] 2026-02-08 03:27:42.619200 | orchestrator | 2026-02-08 03:27:42.619219 | orchestrator | RUNNING HANDLER [loadbalancer : Check IP addresses on the API interface] ******* 2026-02-08 03:27:42.619237 | orchestrator | Sunday 08 February 2026 03:26:58 +0000 (0:00:00.567) 0:05:14.787 ******* 2026-02-08 03:27:42.619255 | orchestrator | ok: [testbed-node-0] 2026-02-08 03:27:42.619273 | orchestrator | ok: [testbed-node-1] 2026-02-08 03:27:42.619291 | orchestrator | ok: [testbed-node-2] 2026-02-08 03:27:42.619308 | orchestrator | 2026-02-08 03:27:42.619325 | orchestrator | 
RUNNING HANDLER [loadbalancer : Group HA nodes by status] ********************** 2026-02-08 03:27:42.619363 | orchestrator | Sunday 08 February 2026 03:26:59 +0000 (0:00:00.662) 0:05:15.450 ******* 2026-02-08 03:27:42.619382 | orchestrator | ok: [testbed-node-0] 2026-02-08 03:27:42.619400 | orchestrator | ok: [testbed-node-1] 2026-02-08 03:27:42.619416 | orchestrator | ok: [testbed-node-2] 2026-02-08 03:27:42.619431 | orchestrator | 2026-02-08 03:27:42.619447 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup keepalived container] ************** 2026-02-08 03:27:42.619463 | orchestrator | Sunday 08 February 2026 03:27:00 +0000 (0:00:00.732) 0:05:16.183 ******* 2026-02-08 03:27:42.619481 | orchestrator | ok: [testbed-node-0] 2026-02-08 03:27:42.619528 | orchestrator | ok: [testbed-node-1] 2026-02-08 03:27:42.619546 | orchestrator | ok: [testbed-node-2] 2026-02-08 03:27:42.619562 | orchestrator | 2026-02-08 03:27:42.619579 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup haproxy container] ***************** 2026-02-08 03:27:42.619596 | orchestrator | Sunday 08 February 2026 03:27:01 +0000 (0:00:00.876) 0:05:17.059 ******* 2026-02-08 03:27:42.619612 | orchestrator | ok: [testbed-node-0] 2026-02-08 03:27:42.619628 | orchestrator | ok: [testbed-node-1] 2026-02-08 03:27:42.619644 | orchestrator | ok: [testbed-node-2] 2026-02-08 03:27:42.619660 | orchestrator | 2026-02-08 03:27:42.619677 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup proxysql container] **************** 2026-02-08 03:27:42.619694 | orchestrator | Sunday 08 February 2026 03:27:02 +0000 (0:00:00.921) 0:05:17.981 ******* 2026-02-08 03:27:42.619710 | orchestrator | ok: [testbed-node-0] 2026-02-08 03:27:42.619727 | orchestrator | ok: [testbed-node-1] 2026-02-08 03:27:42.619743 | orchestrator | ok: [testbed-node-2] 2026-02-08 03:27:42.619759 | orchestrator | 2026-02-08 03:27:42.619775 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup haproxy container] 
****************
2026-02-08 03:27:42.619792 | orchestrator | Sunday 08 February 2026 03:27:02 +0000 (0:00:00.870) 0:05:18.851 *******
2026-02-08 03:27:42.619808 | orchestrator | changed: [testbed-node-2]
2026-02-08 03:27:42.619824 | orchestrator | changed: [testbed-node-0]
2026-02-08 03:27:42.619840 | orchestrator | changed: [testbed-node-1]
2026-02-08 03:27:42.619856 | orchestrator |
2026-02-08 03:27:42.619872 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup haproxy to start] **************
2026-02-08 03:27:42.619888 | orchestrator | Sunday 08 February 2026 03:27:12 +0000 (0:00:09.468) 0:05:28.319 *******
2026-02-08 03:27:42.619904 | orchestrator | ok: [testbed-node-0]
2026-02-08 03:27:42.619921 | orchestrator | ok: [testbed-node-1]
2026-02-08 03:27:42.619938 | orchestrator | ok: [testbed-node-2]
2026-02-08 03:27:42.619954 | orchestrator |
2026-02-08 03:27:42.619970 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup proxysql container] ***************
2026-02-08 03:27:42.619986 | orchestrator | Sunday 08 February 2026 03:27:13 +0000 (0:00:01.224) 0:05:29.544 *******
2026-02-08 03:27:42.620002 | orchestrator | changed: [testbed-node-0]
2026-02-08 03:27:42.620019 | orchestrator | changed: [testbed-node-1]
2026-02-08 03:27:42.620035 | orchestrator | changed: [testbed-node-2]
2026-02-08 03:27:42.620051 | orchestrator |
2026-02-08 03:27:42.620067 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup proxysql to start] *************
2026-02-08 03:27:42.620084 | orchestrator | Sunday 08 February 2026 03:27:24 +0000 (0:00:10.494) 0:05:40.038 *******
2026-02-08 03:27:42.620100 | orchestrator | ok: [testbed-node-0]
2026-02-08 03:27:42.620116 | orchestrator | ok: [testbed-node-1]
2026-02-08 03:27:42.620133 | orchestrator | ok: [testbed-node-2]
2026-02-08 03:27:42.620149 | orchestrator |
2026-02-08 03:27:42.620203 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup keepalived container] *************
2026-02-08 03:27:42.620221 | orchestrator | Sunday 08 February 2026 03:27:28 +0000 (0:00:04.759) 0:05:44.797 *******
2026-02-08 03:27:42.620237 | orchestrator | changed: [testbed-node-0]
2026-02-08 03:27:42.620254 | orchestrator | changed: [testbed-node-1]
2026-02-08 03:27:42.620269 | orchestrator | changed: [testbed-node-2]
2026-02-08 03:27:42.620284 | orchestrator |
2026-02-08 03:27:42.620300 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master haproxy container] *****************
2026-02-08 03:27:42.620316 | orchestrator | Sunday 08 February 2026 03:27:33 +0000 (0:00:04.342) 0:05:49.139 *******
2026-02-08 03:27:42.620330 | orchestrator | skipping: [testbed-node-0]
2026-02-08 03:27:42.620345 | orchestrator | skipping: [testbed-node-1]
2026-02-08 03:27:42.620361 | orchestrator | skipping: [testbed-node-2]
2026-02-08 03:27:42.620377 | orchestrator |
2026-02-08 03:27:42.620393 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master proxysql container] ****************
2026-02-08 03:27:42.620409 | orchestrator | Sunday 08 February 2026 03:27:33 +0000 (0:00:00.737) 0:05:49.877 *******
2026-02-08 03:27:42.620425 | orchestrator | skipping: [testbed-node-0]
2026-02-08 03:27:42.620442 | orchestrator | skipping: [testbed-node-1]
2026-02-08 03:27:42.620458 | orchestrator | skipping: [testbed-node-2]
2026-02-08 03:27:42.620494 | orchestrator |
2026-02-08 03:27:42.620539 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master keepalived container] **************
2026-02-08 03:27:42.620558 | orchestrator | Sunday 08 February 2026 03:27:34 +0000 (0:00:00.392) 0:05:50.270 *******
2026-02-08 03:27:42.620575 | orchestrator | skipping: [testbed-node-0]
2026-02-08 03:27:42.620592 | orchestrator | skipping: [testbed-node-1]
2026-02-08 03:27:42.620607 | orchestrator | skipping: [testbed-node-2]
2026-02-08 03:27:42.620623 | orchestrator |
2026-02-08 03:27:42.620639 | orchestrator | RUNNING HANDLER [loadbalancer : Start master haproxy container] ****************
2026-02-08 03:27:42.620655 | orchestrator | Sunday 08 February 2026 03:27:34 +0000 (0:00:00.346) 0:05:50.616 *******
2026-02-08 03:27:42.620672 | orchestrator | skipping: [testbed-node-0]
2026-02-08 03:27:42.620688 | orchestrator | skipping: [testbed-node-1]
2026-02-08 03:27:42.620705 | orchestrator | skipping: [testbed-node-2]
2026-02-08 03:27:42.620721 | orchestrator |
2026-02-08 03:27:42.620737 | orchestrator | RUNNING HANDLER [loadbalancer : Start master proxysql container] ***************
2026-02-08 03:27:42.620753 | orchestrator | Sunday 08 February 2026 03:27:35 +0000 (0:00:00.340) 0:05:50.957 *******
2026-02-08 03:27:42.620769 | orchestrator | skipping: [testbed-node-0]
2026-02-08 03:27:42.620786 | orchestrator | skipping: [testbed-node-1]
2026-02-08 03:27:42.620802 | orchestrator | skipping: [testbed-node-2]
2026-02-08 03:27:42.620818 | orchestrator |
2026-02-08 03:27:42.620834 | orchestrator | RUNNING HANDLER [loadbalancer : Start master keepalived container] *************
2026-02-08 03:27:42.620852 | orchestrator | Sunday 08 February 2026 03:27:35 +0000 (0:00:00.721) 0:05:51.679 *******
2026-02-08 03:27:42.620868 | orchestrator | skipping: [testbed-node-0]
2026-02-08 03:27:42.620884 | orchestrator | skipping: [testbed-node-1]
2026-02-08 03:27:42.620899 | orchestrator | skipping: [testbed-node-2]
2026-02-08 03:27:42.620908 | orchestrator |
2026-02-08 03:27:42.620918 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for haproxy to listen on VIP] *************
2026-02-08 03:27:42.620927 | orchestrator | Sunday 08 February 2026 03:27:36 +0000 (0:00:00.396) 0:05:52.076 *******
2026-02-08 03:27:42.620937 | orchestrator | ok: [testbed-node-2]
2026-02-08 03:27:42.620957 | orchestrator | ok: [testbed-node-1]
2026-02-08 03:27:42.620970 | orchestrator | ok: [testbed-node-0]
2026-02-08 03:27:42.620991 | orchestrator |
2026-02-08 03:27:42.621014 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for proxysql to listen on VIP] ************
2026-02-08 03:27:42.621030 | orchestrator | Sunday 08 February 2026 03:27:40 +0000 (0:00:04.729) 0:05:56.806 *******
2026-02-08 03:27:42.621045 | orchestrator | ok: [testbed-node-0]
2026-02-08 03:27:42.621061 | orchestrator | ok: [testbed-node-1]
2026-02-08 03:27:42.621076 | orchestrator | ok: [testbed-node-2]
2026-02-08 03:27:42.621093 | orchestrator |
2026-02-08 03:27:42.621110 | orchestrator | PLAY RECAP *********************************************************************
2026-02-08 03:27:42.621128 | orchestrator | testbed-node-0 : ok=123  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0
2026-02-08 03:27:42.621145 | orchestrator | testbed-node-1 : ok=122  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0
2026-02-08 03:27:42.621155 | orchestrator | testbed-node-2 : ok=122  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0
2026-02-08 03:27:42.621218 | orchestrator |
2026-02-08 03:27:42.621230 | orchestrator |
2026-02-08 03:27:42.621240 | orchestrator | TASKS RECAP ********************************************************************
2026-02-08 03:27:42.621250 | orchestrator | Sunday 08 February 2026 03:27:41 +0000 (0:00:00.816) 0:05:57.622 *******
2026-02-08 03:27:42.621260 | orchestrator | ===============================================================================
2026-02-08 03:27:42.621269 | orchestrator | loadbalancer : Start backup proxysql container ------------------------- 10.49s
2026-02-08 03:27:42.621279 | orchestrator | loadbalancer : Start backup haproxy container --------------------------- 9.47s
2026-02-08 03:27:42.621289 | orchestrator | haproxy-config : Copying over skyline haproxy config -------------------- 6.49s
2026-02-08 03:27:42.621310 | orchestrator | haproxy-config : Copying over opensearch haproxy config ----------------- 5.37s
2026-02-08 03:27:42.621319 | orchestrator | loadbalancer : Wait for backup proxysql to start ------------------------ 4.76s
2026-02-08 03:27:42.621329 | orchestrator | loadbalancer : Wait for haproxy to listen on VIP ------------------------ 4.73s
2026-02-08 03:27:42.621338 | orchestrator | haproxy-config : Copying over prometheus haproxy config ----------------- 4.47s
2026-02-08 03:27:42.621348 | orchestrator | haproxy-config : Configuring firewall for glance ------------------------ 4.41s
2026-02-08 03:27:42.621357 | orchestrator | haproxy-config : Copying over glance haproxy config --------------------- 4.38s
2026-02-08 03:27:42.621367 | orchestrator | haproxy-config : Copying over neutron haproxy config -------------------- 4.37s
2026-02-08 03:27:42.621375 | orchestrator | loadbalancer : Start backup keepalived container ------------------------ 4.34s
2026-02-08 03:27:42.621382 | orchestrator | haproxy-config : Copying over nova haproxy config ----------------------- 4.19s
2026-02-08 03:27:42.621390 | orchestrator | haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config --- 4.09s
2026-02-08 03:27:42.621398 | orchestrator | haproxy-config : Copying over designate haproxy config ------------------ 3.57s
2026-02-08 03:27:42.621405 | orchestrator | haproxy-config : Copying over placement haproxy config ------------------ 3.49s
2026-02-08 03:27:42.621413 | orchestrator | haproxy-config : Copying over octavia haproxy config -------------------- 3.49s
2026-02-08 03:27:42.621421 | orchestrator | haproxy-config : Copying over manila haproxy config --------------------- 3.39s
2026-02-08 03:27:42.621429 | orchestrator | haproxy-config : Copying over cinder haproxy config --------------------- 3.35s
2026-02-08 03:27:42.621436 | orchestrator | haproxy-config : Copying over keystone haproxy config ------------------- 3.35s
2026-02-08 03:27:42.621445 | orchestrator | haproxy-config : Copying over barbican haproxy config ------------------- 3.33s
2026-02-08 03:27:45.059622 | orchestrator | 2026-02-08 03:27:45 | INFO  | Task c6fc9b6f-bc9a-4ffc-8336-a5f9b3c5be08 (opensearch) was prepared for execution.
2026-02-08 03:27:45.059703 | orchestrator | 2026-02-08 03:27:45 | INFO  | It takes a moment until task c6fc9b6f-bc9a-4ffc-8336-a5f9b3c5be08 (opensearch) has been started and output is visible here.
2026-02-08 03:27:55.906279 | orchestrator |
2026-02-08 03:27:55.906384 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-02-08 03:27:55.906398 | orchestrator |
2026-02-08 03:27:55.906409 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-02-08 03:27:55.906419 | orchestrator | Sunday 08 February 2026 03:27:49 +0000 (0:00:00.325) 0:00:00.326 *******
2026-02-08 03:27:55.906429 | orchestrator | ok: [testbed-node-0]
2026-02-08 03:27:55.906440 | orchestrator | ok: [testbed-node-1]
2026-02-08 03:27:55.906450 | orchestrator | ok: [testbed-node-2]
2026-02-08 03:27:55.906459 | orchestrator |
2026-02-08 03:27:55.906469 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-02-08 03:27:55.906479 | orchestrator | Sunday 08 February 2026 03:27:49 +0000 (0:00:00.310) 0:00:00.636 *******
2026-02-08 03:27:55.906489 | orchestrator | ok: [testbed-node-0] => (item=enable_opensearch_True)
2026-02-08 03:27:55.906499 | orchestrator | ok: [testbed-node-1] => (item=enable_opensearch_True)
2026-02-08 03:27:55.906508 | orchestrator | ok: [testbed-node-2] => (item=enable_opensearch_True)
2026-02-08 03:27:55.906518 | orchestrator |
2026-02-08 03:27:55.906527 | orchestrator | PLAY [Apply role opensearch] ***************************************************
2026-02-08 03:27:55.906537 | orchestrator |
2026-02-08 03:27:55.906546 | orchestrator | TASK [opensearch : include_tasks] **********************************************
2026-02-08 03:27:55.906556 | orchestrator | Sunday 08 February 2026 03:27:50 +0000 (0:00:00.445) 0:00:01.082 *******
2026-02-08 03:27:55.906566 | orchestrator | included: /ansible/roles/opensearch/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-08 03:27:55.906591 | orchestrator |
2026-02-08 03:27:55.906602 | orchestrator | TASK [opensearch : Setting sysctl values] **************************************
2026-02-08 03:27:55.906611 | orchestrator | Sunday 08 February 2026 03:27:50 +0000 (0:00:00.526) 0:00:01.608 *******
2026-02-08 03:27:55.906644 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-02-08 03:27:55.906654 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-02-08 03:27:55.906664 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-02-08 03:27:55.906673 | orchestrator |
2026-02-08 03:27:55.906683 | orchestrator | TASK [opensearch : Ensuring config directories exist] **************************
2026-02-08 03:27:55.906693 | orchestrator | Sunday 08 February 2026 03:27:51 +0000 (0:00:00.705) 0:00:02.313 *******
2026-02-08 03:27:55.906707 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2026-02-08 03:27:55.906721 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2026-02-08 03:27:55.906748 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2026-02-08 03:27:55.906770 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2026-02-08 03:27:55.906792 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2026-02-08 03:27:55.906806 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2026-02-08 03:27:55.906818 | orchestrator |
2026-02-08 03:27:55.906830 | orchestrator | TASK [opensearch : include_tasks] **********************************************
2026-02-08 03:27:55.906841 | orchestrator | Sunday 08 February 2026 03:27:53 +0000 (0:00:01.684) 0:00:03.998 *******
2026-02-08 03:27:55.906852 | orchestrator | included: /ansible/roles/opensearch/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-08 03:27:55.906863 | orchestrator |
2026-02-08 03:27:55.906875 | orchestrator | TASK [service-cert-copy : opensearch | Copying over extra CA certificates] *****
2026-02-08 03:27:55.906885 | orchestrator | Sunday 08 February 2026 03:27:53 +0000 (0:00:00.525) 0:00:04.523 *******
2026-02-08 03:27:55.906905 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2026-02-08 03:27:56.740618 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2026-02-08 03:27:56.740741 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2026-02-08 03:27:56.740760 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2026-02-08 03:27:56.740773 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2026-02-08 03:27:56.740811 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2026-02-08 03:27:56.740833 | orchestrator |
2026-02-08 03:27:56.740846 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS certificate] ***
2026-02-08 03:27:56.740858 | orchestrator | Sunday 08 February 2026 03:27:55 +0000 (0:00:02.310) 0:00:06.834 *******
2026-02-08 03:27:56.740870 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2026-02-08 03:27:56.740883 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2026-02-08 03:27:56.740895 | orchestrator | skipping: [testbed-node-0]
2026-02-08 03:27:56.740908 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2026-02-08 03:27:56.740933 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2026-02-08 03:27:57.911728 | orchestrator | skipping: [testbed-node-1]
2026-02-08 03:27:57.911862 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2026-02-08 03:27:57.911898 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2026-02-08 03:27:57.911922 | orchestrator | skipping: [testbed-node-2]
2026-02-08 03:27:57.911942 | orchestrator |
2026-02-08 03:27:57.911963 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS key] ***
2026-02-08 03:27:57.911982 | orchestrator | Sunday 08 February 2026 03:27:56 +0000 (0:00:00.844) 0:00:07.678 *******
2026-02-08 03:27:57.912003 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2026-02-08 03:27:57.912082 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2026-02-08 03:27:57.912173 | orchestrator | skipping: [testbed-node-0]
2026-02-08 03:27:57.912198 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2026-02-08 03:27:57.912270 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2026-02-08 03:27:57.912296 | orchestrator | skipping: [testbed-node-1]
2026-02-08 03:27:57.912316 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2026-02-08 03:27:57.912361 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2026-02-08 03:27:57.912382 | orchestrator | skipping: [testbed-node-2]
2026-02-08 03:27:57.912401 | orchestrator |
2026-02-08 03:27:57.912418 | orchestrator | TASK [opensearch : Copying over config.json files for services] ****************
2026-02-08 03:27:57.912452 | orchestrator | Sunday 08 February 2026 03:27:57 +0000 (0:00:01.152) 0:00:08.831 *******
2026-02-08 03:28:06.012678 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2026-02-08 03:28:06.012776 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2026-02-08 03:28:06.012787 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2026-02-08 03:28:06.012819 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130',
'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-02-08 03:28:06.012859 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-02-08 03:28:06.012871 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 
'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-02-08 03:28:06.012883 | orchestrator | 2026-02-08 03:28:06.012893 | orchestrator | TASK [opensearch : Copying over opensearch service config file] **************** 2026-02-08 03:28:06.012903 | orchestrator | Sunday 08 February 2026 03:28:00 +0000 (0:00:02.296) 0:00:11.128 ******* 2026-02-08 03:28:06.012912 | orchestrator | changed: [testbed-node-0] 2026-02-08 03:28:06.012923 | orchestrator | changed: [testbed-node-1] 2026-02-08 03:28:06.012932 | orchestrator | changed: [testbed-node-2] 2026-02-08 03:28:06.012940 | orchestrator | 2026-02-08 03:28:06.012949 | orchestrator | TASK [opensearch : Copying over opensearch-dashboards config file] ************* 2026-02-08 03:28:06.012958 | orchestrator | Sunday 08 February 2026 03:28:02 +0000 (0:00:02.331) 0:00:13.459 ******* 2026-02-08 03:28:06.012976 | orchestrator | changed: [testbed-node-0] 2026-02-08 03:28:06.012984 | orchestrator | changed: [testbed-node-1] 2026-02-08 03:28:06.012993 | orchestrator | changed: [testbed-node-2] 2026-02-08 03:28:06.013002 | 
orchestrator | 2026-02-08 03:28:06.013011 | orchestrator | TASK [opensearch : Check opensearch containers] ******************************** 2026-02-08 03:28:06.013019 | orchestrator | Sunday 08 February 2026 03:28:04 +0000 (0:00:01.809) 0:00:15.269 ******* 2026-02-08 03:28:06.013029 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-02-08 03:28:06.013043 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option 
dontlog-normal']}}}}) 2026-02-08 03:28:06.013059 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-02-08 03:30:42.885601 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 
'password'}}}}) 2026-02-08 03:30:42.885735 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-02-08 03:30:42.885792 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 
'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-02-08 03:30:42.885806 | orchestrator | 2026-02-08 03:30:42.885818 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-02-08 03:30:42.885829 | orchestrator | Sunday 08 February 2026 03:28:06 +0000 (0:00:01.680) 0:00:16.950 ******* 2026-02-08 03:30:42.885839 | orchestrator | skipping: [testbed-node-0] 2026-02-08 03:30:42.885850 | orchestrator | skipping: [testbed-node-1] 2026-02-08 03:30:42.885860 | orchestrator | skipping: [testbed-node-2] 2026-02-08 03:30:42.885869 | orchestrator | 2026-02-08 03:30:42.885879 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2026-02-08 03:30:42.885889 | orchestrator | Sunday 08 February 2026 03:28:06 +0000 (0:00:00.293) 0:00:17.243 ******* 2026-02-08 03:30:42.885898 | orchestrator | 2026-02-08 03:30:42.885908 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2026-02-08 03:30:42.885918 | orchestrator | Sunday 08 February 2026 03:28:06 +0000 (0:00:00.063) 0:00:17.306 ******* 2026-02-08 03:30:42.885928 | orchestrator | 2026-02-08 03:30:42.885937 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2026-02-08 03:30:42.885946 | orchestrator | Sunday 08 February 2026 03:28:06 +0000 (0:00:00.066) 0:00:17.372 ******* 2026-02-08 03:30:42.885957 | orchestrator | 2026-02-08 03:30:42.885976 | orchestrator | RUNNING HANDLER [opensearch : Disable shard allocation] ************************ 2026-02-08 03:30:42.886068 | orchestrator | Sunday 08 February 2026 03:28:06 +0000 (0:00:00.063) 0:00:17.436 ******* 2026-02-08 03:30:42.886090 | orchestrator | skipping: [testbed-node-0] 2026-02-08 03:30:42.886108 | orchestrator | 
2026-02-08 03:30:42.886126 | orchestrator | RUNNING HANDLER [opensearch : Perform a flush] ********************************* 2026-02-08 03:30:42.886145 | orchestrator | Sunday 08 February 2026 03:28:06 +0000 (0:00:00.223) 0:00:17.660 ******* 2026-02-08 03:30:42.886177 | orchestrator | skipping: [testbed-node-0] 2026-02-08 03:30:42.886197 | orchestrator | 2026-02-08 03:30:42.886212 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch container] ******************** 2026-02-08 03:30:42.886223 | orchestrator | Sunday 08 February 2026 03:28:07 +0000 (0:00:00.672) 0:00:18.333 ******* 2026-02-08 03:30:42.886235 | orchestrator | changed: [testbed-node-0] 2026-02-08 03:30:42.886247 | orchestrator | changed: [testbed-node-2] 2026-02-08 03:30:42.886259 | orchestrator | changed: [testbed-node-1] 2026-02-08 03:30:42.886270 | orchestrator | 2026-02-08 03:30:42.886281 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch-dashboards container] ********* 2026-02-08 03:30:42.886292 | orchestrator | Sunday 08 February 2026 03:29:14 +0000 (0:01:07.409) 0:01:25.742 ******* 2026-02-08 03:30:42.886302 | orchestrator | changed: [testbed-node-0] 2026-02-08 03:30:42.886313 | orchestrator | changed: [testbed-node-1] 2026-02-08 03:30:42.886324 | orchestrator | changed: [testbed-node-2] 2026-02-08 03:30:42.886335 | orchestrator | 2026-02-08 03:30:42.886346 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-02-08 03:30:42.886357 | orchestrator | Sunday 08 February 2026 03:30:32 +0000 (0:01:17.404) 0:02:43.147 ******* 2026-02-08 03:30:42.886369 | orchestrator | included: /ansible/roles/opensearch/tasks/post-config.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-08 03:30:42.886380 | orchestrator | 2026-02-08 03:30:42.886392 | orchestrator | TASK [opensearch : Wait for OpenSearch to become ready] ************************ 2026-02-08 03:30:42.886403 | orchestrator | Sunday 08 February 2026 03:30:32 +0000 
(0:00:00.536) 0:02:43.684 ******* 2026-02-08 03:30:42.886414 | orchestrator | ok: [testbed-node-0] 2026-02-08 03:30:42.886425 | orchestrator | 2026-02-08 03:30:42.886436 | orchestrator | TASK [opensearch : Check if a log retention policy exists] ********************* 2026-02-08 03:30:42.886447 | orchestrator | Sunday 08 February 2026 03:30:35 +0000 (0:00:02.808) 0:02:46.492 ******* 2026-02-08 03:30:42.886458 | orchestrator | ok: [testbed-node-0] 2026-02-08 03:30:42.886469 | orchestrator | 2026-02-08 03:30:42.886480 | orchestrator | TASK [opensearch : Create new log retention policy] **************************** 2026-02-08 03:30:42.886491 | orchestrator | Sunday 08 February 2026 03:30:37 +0000 (0:00:02.130) 0:02:48.622 ******* 2026-02-08 03:30:42.886500 | orchestrator | changed: [testbed-node-0] 2026-02-08 03:30:42.886510 | orchestrator | 2026-02-08 03:30:42.886520 | orchestrator | TASK [opensearch : Apply retention policy to existing indices] ***************** 2026-02-08 03:30:42.886529 | orchestrator | Sunday 08 February 2026 03:30:40 +0000 (0:00:02.627) 0:02:51.249 ******* 2026-02-08 03:30:42.886539 | orchestrator | changed: [testbed-node-0] 2026-02-08 03:30:42.886550 | orchestrator | 2026-02-08 03:30:42.886566 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-08 03:30:42.886583 | orchestrator | testbed-node-0 : ok=18  changed=11  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-02-08 03:30:42.886599 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-02-08 03:30:42.886632 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-02-08 03:30:42.886649 | orchestrator | 2026-02-08 03:30:42.886666 | orchestrator | 2026-02-08 03:30:42.886678 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-08 03:30:42.886688 | orchestrator | Sunday 08 
February 2026 03:30:42 +0000 (0:00:02.552) 0:02:53.802 ******* 2026-02-08 03:30:42.886697 | orchestrator | =============================================================================== 2026-02-08 03:30:42.886707 | orchestrator | opensearch : Restart opensearch-dashboards container ------------------- 77.40s 2026-02-08 03:30:42.886717 | orchestrator | opensearch : Restart opensearch container ------------------------------ 67.41s 2026-02-08 03:30:42.886726 | orchestrator | opensearch : Wait for OpenSearch to become ready ------------------------ 2.81s 2026-02-08 03:30:42.886750 | orchestrator | opensearch : Create new log retention policy ---------------------------- 2.63s 2026-02-08 03:30:42.886788 | orchestrator | opensearch : Apply retention policy to existing indices ----------------- 2.55s 2026-02-08 03:30:42.886798 | orchestrator | opensearch : Copying over opensearch service config file ---------------- 2.33s 2026-02-08 03:30:42.886808 | orchestrator | service-cert-copy : opensearch | Copying over extra CA certificates ----- 2.31s 2026-02-08 03:30:42.886818 | orchestrator | opensearch : Copying over config.json files for services ---------------- 2.30s 2026-02-08 03:30:42.886827 | orchestrator | opensearch : Check if a log retention policy exists --------------------- 2.13s 2026-02-08 03:30:42.886837 | orchestrator | opensearch : Copying over opensearch-dashboards config file ------------- 1.81s 2026-02-08 03:30:42.886846 | orchestrator | opensearch : Ensuring config directories exist -------------------------- 1.68s 2026-02-08 03:30:42.886856 | orchestrator | opensearch : Check opensearch containers -------------------------------- 1.68s 2026-02-08 03:30:42.886865 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS key --- 1.15s 2026-02-08 03:30:42.886875 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS certificate --- 0.84s 2026-02-08 03:30:42.886884 | orchestrator | opensearch : Setting 
sysctl values -------------------------------------- 0.71s 2026-02-08 03:30:42.886894 | orchestrator | opensearch : Perform a flush -------------------------------------------- 0.67s 2026-02-08 03:30:42.886913 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.54s 2026-02-08 03:30:43.253595 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.53s 2026-02-08 03:30:43.253695 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.53s 2026-02-08 03:30:43.253710 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.45s 2026-02-08 03:30:45.723809 | orchestrator | 2026-02-08 03:30:45 | INFO  | Task 55f12426-f486-4b52-9746-ca99f7ee44fc (memcached) was prepared for execution. 2026-02-08 03:30:45.723901 | orchestrator | 2026-02-08 03:30:45 | INFO  | It takes a moment until task 55f12426-f486-4b52-9746-ca99f7ee44fc (memcached) has been started and output is visible here. 
2026-02-08 03:30:57.746703 | orchestrator | 2026-02-08 03:30:57.746874 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-08 03:30:57.746900 | orchestrator | 2026-02-08 03:30:57.746915 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-08 03:30:57.746930 | orchestrator | Sunday 08 February 2026 03:30:49 +0000 (0:00:00.262) 0:00:00.262 ******* 2026-02-08 03:30:57.746945 | orchestrator | ok: [testbed-node-0] 2026-02-08 03:30:57.746960 | orchestrator | ok: [testbed-node-1] 2026-02-08 03:30:57.746973 | orchestrator | ok: [testbed-node-2] 2026-02-08 03:30:57.746988 | orchestrator | 2026-02-08 03:30:57.747002 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-08 03:30:57.747016 | orchestrator | Sunday 08 February 2026 03:30:50 +0000 (0:00:00.307) 0:00:00.569 ******* 2026-02-08 03:30:57.747031 | orchestrator | ok: [testbed-node-0] => (item=enable_memcached_True) 2026-02-08 03:30:57.747045 | orchestrator | ok: [testbed-node-1] => (item=enable_memcached_True) 2026-02-08 03:30:57.747058 | orchestrator | ok: [testbed-node-2] => (item=enable_memcached_True) 2026-02-08 03:30:57.747093 | orchestrator | 2026-02-08 03:30:57.747108 | orchestrator | PLAY [Apply role memcached] **************************************************** 2026-02-08 03:30:57.747123 | orchestrator | 2026-02-08 03:30:57.747138 | orchestrator | TASK [memcached : include_tasks] *********************************************** 2026-02-08 03:30:57.747152 | orchestrator | Sunday 08 February 2026 03:30:50 +0000 (0:00:00.441) 0:00:01.011 ******* 2026-02-08 03:30:57.747167 | orchestrator | included: /ansible/roles/memcached/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-08 03:30:57.747183 | orchestrator | 2026-02-08 03:30:57.747198 | orchestrator | TASK [memcached : Ensuring config directories exist] *************************** 
2026-02-08 03:30:57.747213 | orchestrator | Sunday 08 February 2026 03:30:51 +0000 (0:00:00.478) 0:00:01.489 ******* 2026-02-08 03:30:57.747264 | orchestrator | changed: [testbed-node-2] => (item=memcached) 2026-02-08 03:30:57.747281 | orchestrator | changed: [testbed-node-0] => (item=memcached) 2026-02-08 03:30:57.747296 | orchestrator | changed: [testbed-node-1] => (item=memcached) 2026-02-08 03:30:57.747312 | orchestrator | 2026-02-08 03:30:57.747327 | orchestrator | TASK [memcached : Copying over config.json files for services] ***************** 2026-02-08 03:30:57.747344 | orchestrator | Sunday 08 February 2026 03:30:51 +0000 (0:00:00.637) 0:00:02.127 ******* 2026-02-08 03:30:57.747369 | orchestrator | changed: [testbed-node-1] => (item=memcached) 2026-02-08 03:30:57.747397 | orchestrator | changed: [testbed-node-2] => (item=memcached) 2026-02-08 03:30:57.747424 | orchestrator | changed: [testbed-node-0] => (item=memcached) 2026-02-08 03:30:57.747446 | orchestrator | 2026-02-08 03:30:57.747474 | orchestrator | TASK [memcached : Check memcached container] *********************************** 2026-02-08 03:30:57.747500 | orchestrator | Sunday 08 February 2026 03:30:53 +0000 (0:00:01.716) 0:00:03.843 ******* 2026-02-08 03:30:57.747523 | orchestrator | changed: [testbed-node-1] 2026-02-08 03:30:57.747543 | orchestrator | changed: [testbed-node-0] 2026-02-08 03:30:57.747564 | orchestrator | changed: [testbed-node-2] 2026-02-08 03:30:57.747586 | orchestrator | 2026-02-08 03:30:57.747610 | orchestrator | RUNNING HANDLER [memcached : Restart memcached container] ********************** 2026-02-08 03:30:57.747632 | orchestrator | Sunday 08 February 2026 03:30:55 +0000 (0:00:01.597) 0:00:05.441 ******* 2026-02-08 03:30:57.747652 | orchestrator | changed: [testbed-node-0] 2026-02-08 03:30:57.747667 | orchestrator | changed: [testbed-node-1] 2026-02-08 03:30:57.747681 | orchestrator | changed: [testbed-node-2] 2026-02-08 03:30:57.747695 | orchestrator | 2026-02-08 
03:30:57.747708 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-08 03:30:57.747743 | orchestrator | testbed-node-0 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-08 03:30:57.747760 | orchestrator | testbed-node-1 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-08 03:30:57.747773 | orchestrator | testbed-node-2 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-08 03:30:57.747786 | orchestrator | 2026-02-08 03:30:57.747825 | orchestrator | 2026-02-08 03:30:57.747841 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-08 03:30:57.747855 | orchestrator | Sunday 08 February 2026 03:30:57 +0000 (0:00:02.110) 0:00:07.552 ******* 2026-02-08 03:30:57.747868 | orchestrator | =============================================================================== 2026-02-08 03:30:57.747881 | orchestrator | memcached : Restart memcached container --------------------------------- 2.11s 2026-02-08 03:30:57.747895 | orchestrator | memcached : Copying over config.json files for services ----------------- 1.72s 2026-02-08 03:30:57.747908 | orchestrator | memcached : Check memcached container ----------------------------------- 1.60s 2026-02-08 03:30:57.747921 | orchestrator | memcached : Ensuring config directories exist --------------------------- 0.64s 2026-02-08 03:30:57.747934 | orchestrator | memcached : include_tasks ----------------------------------------------- 0.48s 2026-02-08 03:30:57.747948 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.44s 2026-02-08 03:30:57.747962 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.31s 2026-02-08 03:31:00.326309 | orchestrator | 2026-02-08 03:31:00 | INFO  | Task 1f76ba7f-5b58-4b95-9518-2378dd075836 (redis) was prepared for execution. 
2026-02-08 03:31:00.326459 | orchestrator | 2026-02-08 03:31:00 | INFO  | It takes a moment until task 1f76ba7f-5b58-4b95-9518-2378dd075836 (redis) has been started and output is visible here.
2026-02-08 03:31:09.543299 | orchestrator |
2026-02-08 03:31:09.543394 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-02-08 03:31:09.543407 | orchestrator |
2026-02-08 03:31:09.543415 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-02-08 03:31:09.543446 | orchestrator | Sunday 08 February 2026 03:31:04 +0000 (0:00:00.271) 0:00:00.271 *******
2026-02-08 03:31:09.543454 | orchestrator | ok: [testbed-node-0]
2026-02-08 03:31:09.543463 | orchestrator | ok: [testbed-node-1]
2026-02-08 03:31:09.543470 | orchestrator | ok: [testbed-node-2]
2026-02-08 03:31:09.543478 | orchestrator |
2026-02-08 03:31:09.543485 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-02-08 03:31:09.543492 | orchestrator | Sunday 08 February 2026 03:31:04 +0000 (0:00:00.326) 0:00:00.598 *******
2026-02-08 03:31:09.543499 | orchestrator | ok: [testbed-node-0] => (item=enable_redis_True)
2026-02-08 03:31:09.543506 | orchestrator | ok: [testbed-node-1] => (item=enable_redis_True)
2026-02-08 03:31:09.543512 | orchestrator | ok: [testbed-node-2] => (item=enable_redis_True)
2026-02-08 03:31:09.543519 | orchestrator |
2026-02-08 03:31:09.543526 | orchestrator | PLAY [Apply role redis] ********************************************************
2026-02-08 03:31:09.543532 | orchestrator |
2026-02-08 03:31:09.543539 | orchestrator | TASK [redis : include_tasks] ***************************************************
2026-02-08 03:31:09.543546 | orchestrator | Sunday 08 February 2026 03:31:05 +0000 (0:00:00.459) 0:00:01.058 *******
2026-02-08 03:31:09.543553 | orchestrator | included: /ansible/roles/redis/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-08 03:31:09.543561 | orchestrator |
2026-02-08 03:31:09.543567 | orchestrator | TASK [redis : Ensuring config directories exist] *******************************
2026-02-08 03:31:09.543573 | orchestrator | Sunday 08 February 2026 03:31:05 +0000 (0:00:00.481) 0:00:01.540 *******
2026-02-08 03:31:09.543583 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-02-08 03:31:09.543594 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-02-08 03:31:09.543614 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-02-08 03:31:09.543623 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-02-08 03:31:09.543652 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-02-08 03:31:09.543661 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-02-08 03:31:09.543668 | orchestrator |
2026-02-08 03:31:09.543676 | orchestrator | TASK [redis : Copying over default config.json files] **************************
2026-02-08 03:31:09.543682 | orchestrator | Sunday 08 February 2026 03:31:06 +0000 (0:00:01.117) 0:00:02.658 *******
2026-02-08 03:31:09.543690 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-02-08 03:31:09.543697 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-02-08 03:31:09.543705 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-02-08 03:31:09.543712 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-02-08 03:31:09.543802 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-02-08 03:31:13.567387 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-02-08 03:31:13.567498 | orchestrator |
2026-02-08 03:31:13.567515 | orchestrator | TASK [redis : Copying over redis config files] *********************************
2026-02-08 03:31:13.567529 | orchestrator | Sunday 08 February 2026 03:31:09 +0000 (0:00:02.558) 0:00:05.216 *******
2026-02-08 03:31:13.567541 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-02-08 03:31:13.567555 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-02-08 03:31:13.567584 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-02-08 03:31:13.567596 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-02-08 03:31:13.567633 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-02-08 03:31:13.567662 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-02-08 03:31:13.567675 | orchestrator |
2026-02-08 03:31:13.567686 | orchestrator | TASK [redis : Check redis containers] ******************************************
2026-02-08 03:31:13.567697 | orchestrator | Sunday 08 February 2026 03:31:11 +0000 (0:00:02.358) 0:00:07.575 *******
2026-02-08 03:31:13.567709 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-02-08 03:31:13.567720 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-02-08 03:31:13.567732 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-02-08 03:31:13.567749 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-02-08 03:31:13.567772 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-02-08 03:31:13.567792 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-02-08 03:31:28.644684 | orchestrator |
2026-02-08 03:31:28.644786 | orchestrator | TASK [redis : Flush handlers] **************************************************
2026-02-08 03:31:28.644801 | orchestrator | Sunday 08 February 2026 03:31:13 +0000 (0:00:01.467) 0:00:09.042 *******
2026-02-08 03:31:28.644811 | orchestrator |
2026-02-08 03:31:28.644821 | orchestrator | TASK [redis : Flush handlers] **************************************************
2026-02-08 03:31:28.644832 | orchestrator | Sunday 08 February 2026 03:31:13 +0000 (0:00:00.064) 0:00:09.107 *******
2026-02-08 03:31:28.644842 | orchestrator |
2026-02-08 03:31:28.644852 | orchestrator | TASK [redis : Flush handlers] **************************************************
2026-02-08 03:31:28.644863 | orchestrator | Sunday 08 February 2026 03:31:13 +0000 (0:00:00.066) 0:00:09.173 *******
2026-02-08 03:31:28.644873 | orchestrator |
2026-02-08 03:31:28.644955 | orchestrator | RUNNING HANDLER [redis : Restart redis container] ******************************
2026-02-08 03:31:28.644966 | orchestrator | Sunday 08 February 2026 03:31:13 +0000 (0:00:00.065) 0:00:09.239 *******
2026-02-08 03:31:28.644977 | orchestrator | changed: [testbed-node-1]
2026-02-08 03:31:28.644988 | orchestrator | changed: [testbed-node-2]
2026-02-08 03:31:28.644999 | orchestrator | changed: [testbed-node-0]
2026-02-08 03:31:28.645009 | orchestrator |
2026-02-08 03:31:28.645031 | orchestrator | RUNNING HANDLER [redis : Restart redis-sentinel container] *********************
2026-02-08 03:31:28.645064 | orchestrator | Sunday 08 February 2026 03:31:20 +0000 (0:00:06.582) 0:00:15.821 *******
2026-02-08 03:31:28.645075 | orchestrator | changed: [testbed-node-0]
2026-02-08 03:31:28.645085 | orchestrator | changed: [testbed-node-1]
2026-02-08 03:31:28.645095 | orchestrator | changed: [testbed-node-2]
2026-02-08 03:31:28.645106 | orchestrator |
2026-02-08 03:31:28.645116 | orchestrator | PLAY RECAP *********************************************************************
2026-02-08 03:31:28.645126 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-08 03:31:28.645139 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-08 03:31:28.645179 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-08 03:31:28.645190 | orchestrator |
2026-02-08 03:31:28.645200 | orchestrator |
2026-02-08 03:31:28.645211 | orchestrator | TASKS RECAP ********************************************************************
2026-02-08 03:31:28.645221 | orchestrator | Sunday 08 February 2026 03:31:28 +0000 (0:00:08.096) 0:00:23.917 *******
2026-02-08 03:31:28.645231 | orchestrator | ===============================================================================
2026-02-08 03:31:28.645242 | orchestrator | redis : Restart redis-sentinel container -------------------------------- 8.10s
2026-02-08 03:31:28.645252 | orchestrator | redis : Restart redis container ----------------------------------------- 6.58s
2026-02-08 03:31:28.645278 | orchestrator | redis : Copying over default config.json files -------------------------- 2.56s
2026-02-08 03:31:28.645289 | orchestrator | redis : Copying over redis config files --------------------------------- 2.36s
2026-02-08 03:31:28.645299 | orchestrator | redis : Check redis containers ------------------------------------------ 1.47s
2026-02-08 03:31:28.645309 | orchestrator | redis : Ensuring config directories exist ------------------------------- 1.12s
2026-02-08 03:31:28.645319 | orchestrator | redis : include_tasks --------------------------------------------------- 0.48s
2026-02-08 03:31:28.645330 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.46s
2026-02-08 03:31:28.645340 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.33s
2026-02-08 03:31:28.645350 | orchestrator | redis : Flush handlers -------------------------------------------------- 0.20s
2026-02-08 03:31:31.147326 | orchestrator | 2026-02-08 03:31:31 | INFO  | Task b2d6cb65-10c7-45ea-8ce0-612763830974 (mariadb) was prepared for execution.
2026-02-08 03:31:31.147406 | orchestrator | 2026-02-08 03:31:31 | INFO  | It takes a moment until task b2d6cb65-10c7-45ea-8ce0-612763830974 (mariadb) has been started and output is visible here.
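The PLAY RECAP lines above (ok=9 changed=6 unreachable=0 failed=0 on every node) are what a CI wrapper would typically key on when deciding whether the redis step succeeded. A minimal sketch of parsing such recap lines during log post-processing; the regex and `recap_failed` helper are illustrative and not part of OSISM or Zuul tooling:

```python
import re

# Matches an Ansible PLAY RECAP host line, e.g.
# "testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0"
RECAP_RE = re.compile(
    r"(?P<host>\S+)\s*:\s*ok=(?P<ok>\d+)\s+changed=(?P<changed>\d+)\s+"
    r"unreachable=(?P<unreachable>\d+)\s+failed=(?P<failed>\d+)"
)

def recap_failed(line: str) -> bool:
    """Return True if a recap line reports failed or unreachable tasks."""
    m = RECAP_RE.search(line)
    if m is None:
        raise ValueError(f"not a recap line: {line!r}")
    return int(m.group("failed")) > 0 or int(m.group("unreachable")) > 0

line = "testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0"
print(recap_failed(line))  # False for the healthy recap above
```

The counters are whitespace-separated in the order Ansible prints them, so a single regex per host line is sufficient; anything past `failed=` can be ignored for a pass/fail decision.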
2026-02-08 03:31:45.597169 | orchestrator |
2026-02-08 03:31:45.597251 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-02-08 03:31:45.597263 | orchestrator |
2026-02-08 03:31:45.597272 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-02-08 03:31:45.597279 | orchestrator | Sunday 08 February 2026 03:31:35 +0000 (0:00:00.229) 0:00:00.229 *******
2026-02-08 03:31:45.597285 | orchestrator | ok: [testbed-node-0]
2026-02-08 03:31:45.597292 | orchestrator | ok: [testbed-node-1]
2026-02-08 03:31:45.597298 | orchestrator | ok: [testbed-node-2]
2026-02-08 03:31:45.597304 | orchestrator |
2026-02-08 03:31:45.597310 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-02-08 03:31:45.597316 | orchestrator | Sunday 08 February 2026 03:31:36 +0000 (0:00:00.332) 0:00:00.562 *******
2026-02-08 03:31:45.597323 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True)
2026-02-08 03:31:45.597329 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True)
2026-02-08 03:31:45.597335 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True)
2026-02-08 03:31:45.597342 | orchestrator |
2026-02-08 03:31:45.597348 | orchestrator | PLAY [Apply role mariadb] ******************************************************
2026-02-08 03:31:45.597354 | orchestrator |
2026-02-08 03:31:45.597361 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] ***************************
2026-02-08 03:31:45.597367 | orchestrator | Sunday 08 February 2026 03:31:36 +0000 (0:00:00.660) 0:00:01.222 *******
2026-02-08 03:31:45.597374 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-02-08 03:31:45.597381 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1)
2026-02-08 03:31:45.597387 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2)
2026-02-08 03:31:45.597394 | orchestrator |
2026-02-08 03:31:45.597400 | orchestrator | TASK [mariadb : include_tasks] *************************************************
2026-02-08 03:31:45.597407 | orchestrator | Sunday 08 February 2026 03:31:37 +0000 (0:00:00.528) 0:00:01.751 *******
2026-02-08 03:31:45.597436 | orchestrator | included: /ansible/roles/mariadb/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-08 03:31:45.597444 | orchestrator |
2026-02-08 03:31:45.597450 | orchestrator | TASK [mariadb : Ensuring config directories exist] *****************************
2026-02-08 03:31:45.597457 | orchestrator | Sunday 08 February 2026 03:31:37 +0000 (0:00:00.558) 0:00:02.310 *******
2026-02-08 03:31:45.597481 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-02-08 03:31:45.597502 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-02-08 03:31:45.597511 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-02-08 03:31:45.597516 | orchestrator |
2026-02-08 03:31:45.597519 | orchestrator | TASK [mariadb : Ensuring database backup config directory exists] **************
2026-02-08 03:31:45.597523 | orchestrator | Sunday 08 February 2026 03:31:40 +0000 (0:00:02.788) 0:00:05.098 *******
2026-02-08 03:31:45.597527 | orchestrator | skipping: [testbed-node-1]
2026-02-08 03:31:45.597535 | orchestrator | changed: [testbed-node-0]
2026-02-08 03:31:45.597539 | orchestrator | skipping: [testbed-node-2]
2026-02-08 03:31:45.597542 | orchestrator |
2026-02-08 03:31:45.597546 | orchestrator | TASK [mariadb : Copying over my.cnf for mariabackup] ***************************
2026-02-08 03:31:45.597550 | orchestrator | Sunday 08 February 2026 03:31:41 +0000 (0:00:00.689) 0:00:05.787 *******
2026-02-08 03:31:45.597554 | orchestrator | skipping: [testbed-node-1]
2026-02-08 03:31:45.597557 | orchestrator | skipping: [testbed-node-2]
2026-02-08 03:31:45.597561 | orchestrator | changed: [testbed-node-0]
2026-02-08 03:31:45.597565 | orchestrator |
2026-02-08 03:31:45.597569 | orchestrator | TASK [mariadb : Copying over config.json files for services] *******************
2026-02-08 03:31:45.597572 | orchestrator | Sunday 08 February 2026 03:31:42 +0000 (0:00:01.355) 0:00:07.143 *******
2026-02-08 03:31:45.597581 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-02-08 03:31:53.206141 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-02-08 03:31:53.206254 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-02-08 03:31:53.206299 | orchestrator |
2026-02-08 03:31:53.206314 | orchestrator | TASK [mariadb : Copying over config.json files for mariabackup] ****************
2026-02-08 03:31:53.206327 | orchestrator | Sunday 08 February 2026 03:31:45 +0000 (0:00:02.980) 0:00:10.123 *******
2026-02-08 03:31:53.206338 | orchestrator | skipping: [testbed-node-1]
2026-02-08 03:31:53.206350 | orchestrator | skipping: [testbed-node-2]
2026-02-08 03:31:53.206361 | orchestrator | changed: [testbed-node-0]
2026-02-08 03:31:53.206372 | orchestrator |
2026-02-08 03:31:53.206383 | orchestrator | TASK [mariadb : Copying over galera.cnf] ***************************************
2026-02-08 03:31:53.206410 | orchestrator | Sunday 08 February 2026 03:31:46 +0000 (0:00:01.078) 0:00:11.202 *******
2026-02-08 03:31:53.206422 | orchestrator | changed: [testbed-node-0]
2026-02-08 03:31:53.206433 | orchestrator | changed: [testbed-node-1]
2026-02-08 03:31:53.206444 | orchestrator | changed: [testbed-node-2]
2026-02-08 03:31:53.206455 | orchestrator |
2026-02-08 03:31:53.206465 | orchestrator | TASK [mariadb : include_tasks] *************************************************
2026-02-08 03:31:53.206476 | orchestrator | Sunday 08 February 2026 03:31:50 +0000 (0:00:03.732) 0:00:14.934 *******
2026-02-08 03:31:53.206488 | orchestrator | included: /ansible/roles/mariadb/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-08 03:31:53.206499 | orchestrator |
2026-02-08 03:31:53.206510 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ********
2026-02-08 03:31:53.206521 | orchestrator | Sunday 08 February 2026 03:31:50 +0000 (0:00:00.556) 0:00:15.491 *******
2026-02-08 03:31:53.206539 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306
check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-08 03:31:53.206551 | orchestrator | skipping: [testbed-node-0] 2026-02-08 03:31:53.206572 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server 
testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-08 03:31:58.143855 | orchestrator | skipping: [testbed-node-1] 2026-02-08 03:31:58.144002 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 
2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-08 03:31:58.144020 | orchestrator | skipping: [testbed-node-2] 2026-02-08 03:31:58.144029 | orchestrator | 2026-02-08 03:31:58.144038 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] *** 2026-02-08 03:31:58.144046 | orchestrator | Sunday 08 February 2026 03:31:53 +0000 (0:00:02.243) 0:00:17.735 ******* 2026-02-08 03:31:58.144055 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 
'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-08 03:31:58.144083 | orchestrator | skipping: [testbed-node-0] 2026-02-08 03:31:58.144112 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 
3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-08 03:31:58.144121 | orchestrator | skipping: [testbed-node-1] 2026-02-08 03:31:58.144129 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 
192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-08 03:31:58.144144 | orchestrator | skipping: [testbed-node-2] 2026-02-08 03:31:58.144151 | orchestrator | 2026-02-08 03:31:58.144158 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] ***** 2026-02-08 03:31:58.144166 | orchestrator | Sunday 08 February 2026 03:31:55 +0000 (0:00:02.588) 0:00:20.324 ******* 2026-02-08 03:31:58.144180 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 
'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-08 03:32:01.118715 | orchestrator | skipping: [testbed-node-1] 2026-02-08 03:32:01.118827 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 
'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-08 03:32:01.118863 | orchestrator | skipping: [testbed-node-0] 2026-02-08 03:32:01.118871 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 
3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-08 03:32:01.118878 | orchestrator | skipping: [testbed-node-2] 2026-02-08 03:32:01.118886 | orchestrator | 2026-02-08 03:32:01.118893 | orchestrator | TASK [mariadb : Check mariadb containers] ************************************** 2026-02-08 03:32:01.118900 | orchestrator | Sunday 08 February 2026 03:31:58 +0000 (0:00:02.350) 0:00:22.675 ******* 2026-02-08 03:32:01.118924 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 
'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-02-08 03:32:01.118938 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': 
True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-02-08 03:32:01.118956 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 
'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-02-08 03:34:14.464847 | orchestrator |
2026-02-08 03:34:14.464962 | orchestrator | TASK [mariadb : Create MariaDB volume] *****************************************
2026-02-08 03:34:14.464990 | orchestrator | Sunday 08 February 2026 03:32:01 +0000 (0:00:02.974) 0:00:25.649 *******
2026-02-08 03:34:14.465009 | orchestrator | changed: [testbed-node-0]
2026-02-08 03:34:14.465030 | orchestrator | changed: [testbed-node-1]
2026-02-08 03:34:14.465048 | orchestrator | changed: [testbed-node-2]
2026-02-08 03:34:14.465066 | orchestrator |
2026-02-08 03:34:14.465084 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB volume availability] *************
2026-02-08 03:34:14.465104 | orchestrator | Sunday 08 February 2026 03:32:01 +0000 (0:00:00.811) 0:00:26.461 *******
2026-02-08 03:34:14.465123 | orchestrator | ok: [testbed-node-0]
2026-02-08 03:34:14.465142 | orchestrator | ok: [testbed-node-1]
2026-02-08 03:34:14.465162 | orchestrator | ok: [testbed-node-2]
2026-02-08 03:34:14.465181 | orchestrator |
2026-02-08 03:34:14.465201 | orchestrator | TASK [mariadb : Establish whether the cluster has already existed] *************
2026-02-08 03:34:14.465221 | orchestrator | Sunday 08 February 2026 03:32:02 +0000 (0:00:00.540) 0:00:27.001 *******
2026-02-08 03:34:14.465239 | orchestrator | ok: [testbed-node-0]
2026-02-08 03:34:14.465308 | orchestrator | ok: [testbed-node-1]
2026-02-08 03:34:14.465330 | orchestrator | ok: [testbed-node-2]
2026-02-08 03:34:14.465348 | orchestrator |
2026-02-08 03:34:14.465367 | orchestrator | TASK [mariadb : Check MariaDB service port liveness] ***************************
2026-02-08 03:34:14.465386 | orchestrator | Sunday 08 February 2026 03:32:02 +0000 (0:00:00.339) 0:00:27.340 *******
2026-02-08 03:34:14.465406 | orchestrator | fatal: [testbed-node-0]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.10:3306"}
2026-02-08 03:34:14.465425 | orchestrator | ...ignoring
2026-02-08 03:34:14.465444 | orchestrator | fatal: [testbed-node-1]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.11:3306"}
2026-02-08 03:34:14.465462 | orchestrator | ...ignoring
2026-02-08 03:34:14.465481 | orchestrator | fatal: [testbed-node-2]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.12:3306"}
2026-02-08 03:34:14.465498 | orchestrator | ...ignoring
2026-02-08 03:34:14.465515 | orchestrator |
2026-02-08 03:34:14.465532 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service port liveness] ***********
2026-02-08 03:34:14.465552 | orchestrator | Sunday 08 February 2026 03:32:13 +0000 (0:00:10.855) 0:00:38.196 *******
2026-02-08 03:34:14.465569 | orchestrator | ok: [testbed-node-0]
2026-02-08 03:34:14.465587 | orchestrator | ok: [testbed-node-1]
2026-02-08 03:34:14.465604 | orchestrator | ok: [testbed-node-2]
2026-02-08 03:34:14.465621 | orchestrator |
2026-02-08 03:34:14.465639 | orchestrator | TASK [mariadb : Fail on existing but stopped cluster] **************************
2026-02-08 03:34:14.465658 | orchestrator | Sunday 08 February 2026 03:32:14 +0000 (0:00:00.677) 0:00:38.610 *******
2026-02-08 03:34:14.465709 | orchestrator | skipping: [testbed-node-0]
2026-02-08 03:34:14.465730 | orchestrator | skipping: [testbed-node-1]
2026-02-08 03:34:14.465748 | orchestrator | skipping: [testbed-node-2]
2026-02-08 03:34:14.465766 | orchestrator |
2026-02-08 03:34:14.465777 | orchestrator | TASK [mariadb : Check MariaDB service WSREP sync status] ***********************
2026-02-08 03:34:14.465788 | orchestrator | Sunday 08 February 2026 03:32:14 +0000 (0:00:00.430) 0:00:39.287 *******
2026-02-08 03:34:14.465799 | orchestrator | skipping: [testbed-node-0]
2026-02-08 03:34:14.465810 | orchestrator | skipping: [testbed-node-1]
2026-02-08 03:34:14.465838 | orchestrator | skipping: [testbed-node-2]
2026-02-08 03:34:14.465849 | orchestrator |
2026-02-08 03:34:14.465860 | orchestrator | TASK [mariadb : Extract MariaDB service WSREP sync status] *********************
2026-02-08 03:34:14.465881 | orchestrator | Sunday 08 February 2026 03:32:15 +0000 (0:00:00.426) 0:00:39.718 *******
2026-02-08 03:34:14.465892 | orchestrator | skipping: [testbed-node-0]
2026-02-08 03:34:14.465903 | orchestrator | skipping: [testbed-node-1]
2026-02-08 03:34:14.465914 | orchestrator | skipping: [testbed-node-2]
2026-02-08 03:34:14.465925 | orchestrator |
2026-02-08 03:34:14.465936 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service WSREP sync status] *******
2026-02-08 03:34:14.465948 | orchestrator | Sunday 08 February 2026 03:32:15 +0000 (0:00:00.432) 0:00:40.144 *******
2026-02-08 03:34:14.465958 | orchestrator | ok: [testbed-node-0]
2026-02-08 03:34:14.465969 | orchestrator | ok: [testbed-node-1]
2026-02-08 03:34:14.465980 | orchestrator | ok: [testbed-node-2]
2026-02-08 03:34:14.465990 | orchestrator |
2026-02-08 03:34:14.466061 | orchestrator | TASK [mariadb : Fail when MariaDB services are not synced across the whole cluster] ***
2026-02-08 03:34:14.466075 | orchestrator | Sunday 08 February 2026 03:32:16 +0000 (0:00:00.432) 0:00:40.576 *******
2026-02-08 03:34:14.466087 | orchestrator | skipping: [testbed-node-0]
2026-02-08 03:34:14.466098 | orchestrator | skipping: [testbed-node-1]
2026-02-08 03:34:14.466109 | orchestrator | skipping: [testbed-node-2]
2026-02-08 03:34:14.466120 | orchestrator |
2026-02-08 03:34:14.466131 | orchestrator | TASK [mariadb : include_tasks] *************************************************
2026-02-08 03:34:14.466142 | orchestrator | Sunday 08 February 2026 03:32:17 +0000 (0:00:00.968) 0:00:41.545 *******
2026-02-08 03:34:14.466152 | orchestrator | skipping: [testbed-node-1]
2026-02-08 03:34:14.466163 | orchestrator | skipping: [testbed-node-2]
2026-02-08 03:34:14.466175 | orchestrator | included: /ansible/roles/mariadb/tasks/bootstrap_cluster.yml for testbed-node-0
2026-02-08 03:34:14.466185 | orchestrator |
2026-02-08 03:34:14.466196 | orchestrator | TASK [mariadb : Running MariaDB bootstrap container] ***************************
2026-02-08 03:34:14.466207 | orchestrator | Sunday 08 February 2026 03:32:17 +0000 (0:00:00.403) 0:00:41.948 *******
2026-02-08 03:34:14.466218 | orchestrator | changed: [testbed-node-0]
2026-02-08 03:34:14.466229 | orchestrator |
2026-02-08 03:34:14.466240 | orchestrator | TASK [mariadb : Store bootstrap host name into facts] **************************
2026-02-08 03:34:14.466251 | orchestrator | Sunday 08 February 2026 03:32:27 +0000 (0:00:09.771) 0:00:51.720 *******
2026-02-08 03:34:14.466285 | orchestrator | ok: [testbed-node-0]
2026-02-08 03:34:14.466305 | orchestrator |
2026-02-08 03:34:14.466316 | orchestrator | TASK [mariadb : include_tasks] *************************************************
2026-02-08 03:34:14.466327 | orchestrator | Sunday 08 February 2026 03:32:27 +0000 (0:00:00.126) 0:00:51.846 *******
2026-02-08 03:34:14.466338 | orchestrator | skipping: [testbed-node-0]
2026-02-08 03:34:14.466368 | orchestrator | skipping: [testbed-node-1]
2026-02-08 03:34:14.466380 | orchestrator | skipping: [testbed-node-2]
2026-02-08 03:34:14.466392 | orchestrator |
2026-02-08 03:34:14.466402 | orchestrator | RUNNING HANDLER [mariadb : Starting first MariaDB container] *******************
2026-02-08 03:34:14.466413 | orchestrator | Sunday 08 February 2026 03:32:28 +0000 (0:00:01.014) 0:00:52.861 *******
2026-02-08 03:34:14.466424 | orchestrator | changed: [testbed-node-0]
2026-02-08 03:34:14.466435 | orchestrator |
2026-02-08 03:34:14.466446 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service port liveness] *******
2026-02-08 03:34:14.466457 | orchestrator | Sunday 08 February 2026 03:32:35 +0000 (0:00:07.646) 0:01:00.507 *******
2026-02-08 03:34:14.466477 | orchestrator | ok: [testbed-node-0]
2026-02-08 03:34:14.466488 | orchestrator |
2026-02-08 03:34:14.466498 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service to sync WSREP] *******
2026-02-08 03:34:14.466509 | orchestrator | Sunday 08 February 2026 03:32:37 +0000 (0:00:01.552) 0:01:02.059 *******
2026-02-08 03:34:14.466520 | orchestrator | ok: [testbed-node-0]
2026-02-08 03:34:14.466531 | orchestrator |
2026-02-08 03:34:14.466541 | orchestrator | RUNNING HANDLER [mariadb : Ensure MariaDB is running normally on bootstrap host] ***
2026-02-08 03:34:14.466552 | orchestrator | Sunday 08 February 2026 03:32:39 +0000 (0:00:02.438) 0:01:04.498 *******
2026-02-08 03:34:14.466563 | orchestrator | changed: [testbed-node-0]
2026-02-08 03:34:14.466573 | orchestrator |
2026-02-08 03:34:14.466584 | orchestrator | RUNNING HANDLER [mariadb : Restart MariaDB on existing cluster members] ********
2026-02-08 03:34:14.466595 | orchestrator | Sunday 08 February 2026 03:32:40 +0000 (0:00:00.124) 0:01:04.622 *******
2026-02-08 03:34:14.466605 | orchestrator | skipping: [testbed-node-0]
2026-02-08 03:34:14.466616 | orchestrator | skipping: [testbed-node-1]
2026-02-08 03:34:14.466627 | orchestrator | skipping: [testbed-node-2]
2026-02-08 03:34:14.466638 | orchestrator |
2026-02-08 03:34:14.466649 | orchestrator | RUNNING HANDLER [mariadb : Start MariaDB on new nodes] *************************
2026-02-08 03:34:14.466660 | orchestrator | Sunday 08 February 2026 03:32:40 +0000 (0:00:00.332) 0:01:04.954 *******
2026-02-08 03:34:14.466670 | orchestrator | skipping: [testbed-node-0]
2026-02-08 03:34:14.466681 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_restart
2026-02-08 03:34:14.466692 | orchestrator | changed: [testbed-node-1]
2026-02-08 03:34:14.466703 | orchestrator | changed: [testbed-node-2]
2026-02-08 03:34:14.466714 | orchestrator |
2026-02-08 03:34:14.466725 | orchestrator | PLAY [Restart mariadb services] ************************************************
2026-02-08 03:34:14.466735 | orchestrator | skipping: no hosts matched
2026-02-08 03:34:14.466746 | orchestrator |
2026-02-08 03:34:14.466757 | orchestrator | PLAY [Start mariadb services] **************************************************
2026-02-08 03:34:14.466768 | orchestrator |
2026-02-08 03:34:14.466779 | orchestrator | TASK [mariadb : Restart MariaDB container]
************************************* 2026-02-08 03:34:14.466790 | orchestrator | Sunday 08 February 2026 03:32:40 +0000 (0:00:00.578) 0:01:05.533 ******* 2026-02-08 03:34:14.466801 | orchestrator | changed: [testbed-node-1] 2026-02-08 03:34:14.466811 | orchestrator | 2026-02-08 03:34:14.466822 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2026-02-08 03:34:14.466833 | orchestrator | Sunday 08 February 2026 03:32:58 +0000 (0:00:17.560) 0:01:23.094 ******* 2026-02-08 03:34:14.466844 | orchestrator | ok: [testbed-node-1] 2026-02-08 03:34:14.466855 | orchestrator | 2026-02-08 03:34:14.466866 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2026-02-08 03:34:14.466877 | orchestrator | Sunday 08 February 2026 03:33:14 +0000 (0:00:15.571) 0:01:38.665 ******* 2026-02-08 03:34:14.466887 | orchestrator | ok: [testbed-node-1] 2026-02-08 03:34:14.466898 | orchestrator | 2026-02-08 03:34:14.466909 | orchestrator | PLAY [Start mariadb services] ************************************************** 2026-02-08 03:34:14.466920 | orchestrator | 2026-02-08 03:34:14.466930 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2026-02-08 03:34:14.466941 | orchestrator | Sunday 08 February 2026 03:33:16 +0000 (0:00:02.387) 0:01:41.053 ******* 2026-02-08 03:34:14.466956 | orchestrator | changed: [testbed-node-2] 2026-02-08 03:34:14.466968 | orchestrator | 2026-02-08 03:34:14.466980 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2026-02-08 03:34:14.466999 | orchestrator | Sunday 08 February 2026 03:33:34 +0000 (0:00:17.822) 0:01:58.875 ******* 2026-02-08 03:34:14.467018 | orchestrator | ok: [testbed-node-2] 2026-02-08 03:34:14.467035 | orchestrator | 2026-02-08 03:34:14.467054 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2026-02-08 03:34:14.467072 
| orchestrator | Sunday 08 February 2026 03:33:50 +0000 (0:00:16.572) 0:02:15.447 ******* 2026-02-08 03:34:14.467099 | orchestrator | ok: [testbed-node-2] 2026-02-08 03:34:14.467130 | orchestrator | 2026-02-08 03:34:14.467149 | orchestrator | PLAY [Restart bootstrap mariadb service] *************************************** 2026-02-08 03:34:14.467164 | orchestrator | 2026-02-08 03:34:14.467175 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2026-02-08 03:34:14.467186 | orchestrator | Sunday 08 February 2026 03:33:53 +0000 (0:00:02.462) 0:02:17.910 ******* 2026-02-08 03:34:14.467197 | orchestrator | changed: [testbed-node-0] 2026-02-08 03:34:14.467208 | orchestrator | 2026-02-08 03:34:14.467219 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2026-02-08 03:34:14.467229 | orchestrator | Sunday 08 February 2026 03:34:10 +0000 (0:00:17.268) 0:02:35.178 ******* 2026-02-08 03:34:14.467240 | orchestrator | ok: [testbed-node-0] 2026-02-08 03:34:14.467251 | orchestrator | 2026-02-08 03:34:14.467282 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2026-02-08 03:34:14.467294 | orchestrator | Sunday 08 February 2026 03:34:11 +0000 (0:00:00.541) 0:02:35.720 ******* 2026-02-08 03:34:14.467305 | orchestrator | ok: [testbed-node-0] 2026-02-08 03:34:14.467316 | orchestrator | 2026-02-08 03:34:14.467327 | orchestrator | PLAY [Apply mariadb post-configuration] **************************************** 2026-02-08 03:34:14.467338 | orchestrator | 2026-02-08 03:34:14.467348 | orchestrator | TASK [Include mariadb post-deploy.yml] ***************************************** 2026-02-08 03:34:14.467359 | orchestrator | Sunday 08 February 2026 03:34:13 +0000 (0:00:02.772) 0:02:38.493 ******* 2026-02-08 03:34:14.467370 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-08 03:34:14.467381 | orchestrator | 
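The repeated "Wait for MariaDB service port liveness" tasks above poll TCP port 3306 until it accepts connections; the ignored timeout at the top of this excerpt is the same style of check failing before the cluster was bootstrapped. A minimal poll-loop sketch, assuming plain TCP reachability is the success criterion (the actual Ansible `wait_for` task additionally matches a "MariaDB" banner string, which this sketch omits):

```python
import socket
import time

def wait_for_port(host, port, timeout=10.0, interval=0.5):
    """Poll until a TCP connect to (host, port) succeeds or the
    timeout elapses. Returns True on success, False on timeout.
    Sketch of a liveness wait, not the wait_for module itself."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            with socket.create_connection((host, port), timeout=interval):
                return True
        except OSError:
            # Port not listening yet (refused/unreachable); retry.
            time.sleep(interval)
    return False
```

Polling with a bounded deadline rather than a single blocking connect is what lets the play distinguish "still starting" from "never came up", as in the 10-second timeout logged earlier.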
2026-02-08 03:34:14.467392 | orchestrator | TASK [mariadb : Creating shard root mysql user] ******************************** 2026-02-08 03:34:14.467411 | orchestrator | Sunday 08 February 2026 03:34:14 +0000 (0:00:00.501) 0:02:38.994 ******* 2026-02-08 03:34:26.915784 | orchestrator | skipping: [testbed-node-1] 2026-02-08 03:34:26.915912 | orchestrator | skipping: [testbed-node-2] 2026-02-08 03:34:26.915937 | orchestrator | changed: [testbed-node-0] 2026-02-08 03:34:26.915954 | orchestrator | 2026-02-08 03:34:26.915970 | orchestrator | TASK [mariadb : Creating mysql monitor user] *********************************** 2026-02-08 03:34:26.915987 | orchestrator | Sunday 08 February 2026 03:34:16 +0000 (0:00:02.145) 0:02:41.139 ******* 2026-02-08 03:34:26.916002 | orchestrator | skipping: [testbed-node-1] 2026-02-08 03:34:26.916017 | orchestrator | skipping: [testbed-node-2] 2026-02-08 03:34:26.916032 | orchestrator | changed: [testbed-node-0] 2026-02-08 03:34:26.916046 | orchestrator | 2026-02-08 03:34:26.916061 | orchestrator | TASK [mariadb : Creating database backup user and setting permissions] ********* 2026-02-08 03:34:26.916076 | orchestrator | Sunday 08 February 2026 03:34:18 +0000 (0:00:01.991) 0:02:43.131 ******* 2026-02-08 03:34:26.916091 | orchestrator | skipping: [testbed-node-1] 2026-02-08 03:34:26.916106 | orchestrator | skipping: [testbed-node-2] 2026-02-08 03:34:26.916121 | orchestrator | changed: [testbed-node-0] 2026-02-08 03:34:26.916137 | orchestrator | 2026-02-08 03:34:26.916152 | orchestrator | TASK [mariadb : Granting permissions on Mariabackup database to backup user] *** 2026-02-08 03:34:26.916166 | orchestrator | Sunday 08 February 2026 03:34:20 +0000 (0:00:02.335) 0:02:45.466 ******* 2026-02-08 03:34:26.916182 | orchestrator | skipping: [testbed-node-1] 2026-02-08 03:34:26.916198 | orchestrator | skipping: [testbed-node-2] 2026-02-08 03:34:26.916243 | orchestrator | changed: [testbed-node-0] 2026-02-08 03:34:26.916258 | orchestrator | 
2026-02-08 03:34:26.916272 | orchestrator | TASK [mariadb : Wait for MariaDB service to be ready through VIP] ************** 2026-02-08 03:34:26.916313 | orchestrator | Sunday 08 February 2026 03:34:23 +0000 (0:00:02.165) 0:02:47.632 ******* 2026-02-08 03:34:26.916329 | orchestrator | ok: [testbed-node-0] 2026-02-08 03:34:26.916348 | orchestrator | ok: [testbed-node-1] 2026-02-08 03:34:26.916363 | orchestrator | ok: [testbed-node-2] 2026-02-08 03:34:26.916378 | orchestrator | 2026-02-08 03:34:26.916393 | orchestrator | TASK [Include mariadb post-upgrade.yml] **************************************** 2026-02-08 03:34:26.916409 | orchestrator | Sunday 08 February 2026 03:34:26 +0000 (0:00:02.987) 0:02:50.619 ******* 2026-02-08 03:34:26.916460 | orchestrator | skipping: [testbed-node-0] 2026-02-08 03:34:26.916477 | orchestrator | skipping: [testbed-node-1] 2026-02-08 03:34:26.916493 | orchestrator | skipping: [testbed-node-2] 2026-02-08 03:34:26.916507 | orchestrator | 2026-02-08 03:34:26.916522 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-08 03:34:26.916539 | orchestrator | testbed-node-0 : ok=34  changed=16  unreachable=0 failed=0 skipped=11  rescued=0 ignored=1  2026-02-08 03:34:26.916557 | orchestrator | testbed-node-1 : ok=20  changed=7  unreachable=0 failed=0 skipped=18  rescued=0 ignored=1  2026-02-08 03:34:26.916573 | orchestrator | testbed-node-2 : ok=20  changed=7  unreachable=0 failed=0 skipped=18  rescued=0 ignored=1  2026-02-08 03:34:26.916586 | orchestrator | 2026-02-08 03:34:26.916601 | orchestrator | 2026-02-08 03:34:26.916617 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-08 03:34:26.916633 | orchestrator | Sunday 08 February 2026 03:34:26 +0000 (0:00:00.445) 0:02:51.065 ******* 2026-02-08 03:34:26.916647 | orchestrator | =============================================================================== 2026-02-08 03:34:26.916661 | 
orchestrator | mariadb : Restart MariaDB container ------------------------------------ 35.38s 2026-02-08 03:34:26.916675 | orchestrator | mariadb : Wait for MariaDB service port liveness ----------------------- 32.14s 2026-02-08 03:34:26.916689 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 17.27s 2026-02-08 03:34:26.916703 | orchestrator | mariadb : Check MariaDB service port liveness -------------------------- 10.86s 2026-02-08 03:34:26.916718 | orchestrator | mariadb : Running MariaDB bootstrap container --------------------------- 9.77s 2026-02-08 03:34:26.916734 | orchestrator | mariadb : Starting first MariaDB container ------------------------------ 7.65s 2026-02-08 03:34:26.916748 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 4.85s 2026-02-08 03:34:26.916783 | orchestrator | mariadb : Copying over galera.cnf --------------------------------------- 3.73s 2026-02-08 03:34:26.916801 | orchestrator | mariadb : Wait for MariaDB service to be ready through VIP -------------- 2.99s 2026-02-08 03:34:26.916814 | orchestrator | mariadb : Copying over config.json files for services ------------------- 2.98s 2026-02-08 03:34:26.916829 | orchestrator | mariadb : Check mariadb containers -------------------------------------- 2.97s 2026-02-08 03:34:26.916843 | orchestrator | mariadb : Ensuring config directories exist ----------------------------- 2.79s 2026-02-08 03:34:26.916857 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 2.77s 2026-02-08 03:34:26.916871 | orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS certificate --- 2.59s 2026-02-08 03:34:26.916887 | orchestrator | mariadb : Wait for first MariaDB service to sync WSREP ------------------ 2.44s 2026-02-08 03:34:26.916902 | orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS key ----- 2.35s 2026-02-08 03:34:26.916916 | 
orchestrator | mariadb : Creating database backup user and setting permissions --------- 2.34s 2026-02-08 03:34:26.916930 | orchestrator | service-cert-copy : mariadb | Copying over extra CA certificates -------- 2.24s 2026-02-08 03:34:26.916946 | orchestrator | mariadb : Granting permissions on Mariabackup database to backup user --- 2.17s 2026-02-08 03:34:26.916961 | orchestrator | mariadb : Creating shard root mysql user -------------------------------- 2.15s 2026-02-08 03:34:29.444652 | orchestrator | 2026-02-08 03:34:29 | INFO  | Task 3644fc17-6e3c-40b0-9cb1-c2c710bb66f9 (rabbitmq) was prepared for execution. 2026-02-08 03:34:29.444725 | orchestrator | 2026-02-08 03:34:29 | INFO  | It takes a moment until task 3644fc17-6e3c-40b0-9cb1-c2c710bb66f9 (rabbitmq) has been started and output is visible here. 2026-02-08 03:34:42.629835 | orchestrator | 2026-02-08 03:34:42.629935 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-08 03:34:42.629949 | orchestrator | 2026-02-08 03:34:42.629959 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-08 03:34:42.629988 | orchestrator | Sunday 08 February 2026 03:34:33 +0000 (0:00:00.156) 0:00:00.156 ******* 2026-02-08 03:34:42.629997 | orchestrator | ok: [testbed-node-0] 2026-02-08 03:34:42.630006 | orchestrator | ok: [testbed-node-1] 2026-02-08 03:34:42.630014 | orchestrator | ok: [testbed-node-2] 2026-02-08 03:34:42.630061 | orchestrator | 2026-02-08 03:34:42.630069 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-08 03:34:42.630077 | orchestrator | Sunday 08 February 2026 03:34:33 +0000 (0:00:00.289) 0:00:00.445 ******* 2026-02-08 03:34:42.630085 | orchestrator | ok: [testbed-node-0] => (item=enable_rabbitmq_True) 2026-02-08 03:34:42.630094 | orchestrator | ok: [testbed-node-1] => (item=enable_rabbitmq_True) 2026-02-08 03:34:42.630102 | orchestrator | ok: 
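The TASKS RECAP above lists the slowest tasks padded with dashes and a trailing duration in seconds. A small sketch for extracting `(task, seconds)` pairs from such lines, assuming the exact layout shown in this log:

```python
import re

# Matches e.g. "mariadb : Restart MariaDB container ----- 35.38s"
RECAP_RE = re.compile(r"^(?P<task>.+?)\s-+\s(?P<secs>\d+\.\d+)s$")

def parse_recap_line(line):
    """Return (task_name, duration_seconds) for a recap line,
    or None if the line does not match the recap format."""
    m = RECAP_RE.match(line.strip())
    if not m:
        return None
    return m.group("task").rstrip(), float(m.group("secs"))
```

Requiring whitespace around the dash run keeps hyphenated role names like `service-cert-copy` inside the task field.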
[testbed-node-2] => (item=enable_rabbitmq_True) 2026-02-08 03:34:42.630110 | orchestrator | 2026-02-08 03:34:42.630118 | orchestrator | PLAY [Apply role rabbitmq] ***************************************************** 2026-02-08 03:34:42.630126 | orchestrator | 2026-02-08 03:34:42.630134 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2026-02-08 03:34:42.630142 | orchestrator | Sunday 08 February 2026 03:34:34 +0000 (0:00:00.454) 0:00:00.899 ******* 2026-02-08 03:34:42.630150 | orchestrator | included: /ansible/roles/rabbitmq/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-08 03:34:42.630160 | orchestrator | 2026-02-08 03:34:42.630176 | orchestrator | TASK [rabbitmq : Get container facts] ****************************************** 2026-02-08 03:34:42.630184 | orchestrator | Sunday 08 February 2026 03:34:34 +0000 (0:00:00.506) 0:00:01.406 ******* 2026-02-08 03:34:42.630192 | orchestrator | ok: [testbed-node-0] 2026-02-08 03:34:42.630200 | orchestrator | 2026-02-08 03:34:42.630208 | orchestrator | TASK [rabbitmq : Get current RabbitMQ version] ********************************* 2026-02-08 03:34:42.630216 | orchestrator | Sunday 08 February 2026 03:34:35 +0000 (0:00:01.036) 0:00:02.442 ******* 2026-02-08 03:34:42.630223 | orchestrator | skipping: [testbed-node-0] 2026-02-08 03:34:42.630232 | orchestrator | 2026-02-08 03:34:42.630240 | orchestrator | TASK [rabbitmq : Get new RabbitMQ version] ************************************* 2026-02-08 03:34:42.630248 | orchestrator | Sunday 08 February 2026 03:34:36 +0000 (0:00:00.426) 0:00:02.869 ******* 2026-02-08 03:34:42.630256 | orchestrator | skipping: [testbed-node-0] 2026-02-08 03:34:42.630263 | orchestrator | 2026-02-08 03:34:42.630271 | orchestrator | TASK [rabbitmq : Check if running RabbitMQ is at most one version behind] ****** 2026-02-08 03:34:42.630279 | orchestrator | Sunday 08 February 2026 03:34:36 +0000 (0:00:00.376) 0:00:03.245 ******* 
2026-02-08 03:34:42.630286 | orchestrator | skipping: [testbed-node-0] 2026-02-08 03:34:42.630294 | orchestrator | 2026-02-08 03:34:42.630302 | orchestrator | TASK [rabbitmq : Catch when RabbitMQ is being downgraded] ********************** 2026-02-08 03:34:42.630310 | orchestrator | Sunday 08 February 2026 03:34:36 +0000 (0:00:00.381) 0:00:03.627 ******* 2026-02-08 03:34:42.630370 | orchestrator | skipping: [testbed-node-0] 2026-02-08 03:34:42.630380 | orchestrator | 2026-02-08 03:34:42.630390 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2026-02-08 03:34:42.630399 | orchestrator | Sunday 08 February 2026 03:34:37 +0000 (0:00:00.624) 0:00:04.252 ******* 2026-02-08 03:34:42.630408 | orchestrator | included: /ansible/roles/rabbitmq/tasks/remove-ha-all-policy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-08 03:34:42.630418 | orchestrator | 2026-02-08 03:34:42.630427 | orchestrator | TASK [rabbitmq : Get container facts] ****************************************** 2026-02-08 03:34:42.630436 | orchestrator | Sunday 08 February 2026 03:34:38 +0000 (0:00:00.899) 0:00:05.151 ******* 2026-02-08 03:34:42.630445 | orchestrator | ok: [testbed-node-0] 2026-02-08 03:34:42.630454 | orchestrator | 2026-02-08 03:34:42.630463 | orchestrator | TASK [rabbitmq : List RabbitMQ policies] *************************************** 2026-02-08 03:34:42.630472 | orchestrator | Sunday 08 February 2026 03:34:39 +0000 (0:00:00.852) 0:00:06.004 ******* 2026-02-08 03:34:42.630484 | orchestrator | skipping: [testbed-node-0] 2026-02-08 03:34:42.630498 | orchestrator | 2026-02-08 03:34:42.630544 | orchestrator | TASK [rabbitmq : Remove ha-all policy from RabbitMQ] *************************** 2026-02-08 03:34:42.630563 | orchestrator | Sunday 08 February 2026 03:34:39 +0000 (0:00:00.401) 0:00:06.405 ******* 2026-02-08 03:34:42.630576 | orchestrator | skipping: [testbed-node-0] 2026-02-08 03:34:42.630590 | orchestrator | 2026-02-08 
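The "Check if running RabbitMQ is at most one version behind" task guards upgrades against skipping releases (skipped in this run because no RabbitMQ container exists yet). A hypothetical sketch of such a skew policy; `version_skew_ok` and its exact rules are assumptions for illustration, not the rabbitmq role's actual logic:

```python
def version_skew_ok(running, target):
    """Accept an upgrade only if the target is the same version or at
    most one minor version ahead within the same major series.
    Hypothetical policy sketch, not the kolla-ansible check itself."""
    r_major, r_minor = running[0], running[1]
    t_major, t_minor = target[0], target[1]
    if (t_major, t_minor) < (r_major, r_minor):
        return False  # downgrade, caught by the separate "Catch ..." task
    if t_major != r_major:
        return False  # major-version jumps out of scope for this sketch
    return t_minor - r_minor <= 1
```

Under this sketch, 3.12 to 3.13 passes, 3.11 to 3.13 fails, and 3.13 to 3.12 fails as a downgrade.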
03:34:42.630605 | orchestrator | TASK [rabbitmq : Ensuring config directories exist] **************************** 2026-02-08 03:34:42.630618 | orchestrator | Sunday 08 February 2026 03:34:40 +0000 (0:00:00.379) 0:00:06.785 ******* 2026-02-08 03:34:42.630660 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-08 03:34:42.630680 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': 
['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-08 03:34:42.630696 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-08 03:34:42.630710 | orchestrator | 2026-02-08 03:34:42.630725 | orchestrator | TASK [rabbitmq : Copying over config.json files for services] ****************** 2026-02-08 03:34:42.630739 | orchestrator | Sunday 08 February 2026 03:34:40 +0000 (0:00:00.883) 0:00:07.668 ******* 2026-02-08 03:34:42.630761 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 
'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-08 03:34:42.630787 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 
'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-08 03:35:00.762155 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-08 03:35:00.762265 | orchestrator | 2026-02-08 03:35:00.762280 | orchestrator | TASK [rabbitmq : Copying over rabbitmq-env.conf] ******************************* 2026-02-08 03:35:00.762291 | orchestrator | Sunday 08 February 2026 03:34:42 +0000 (0:00:01.730) 0:00:09.399 ******* 2026-02-08 03:35:00.762300 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2026-02-08 03:35:00.762311 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2026-02-08 03:35:00.762320 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2026-02-08 03:35:00.762329 | orchestrator | 2026-02-08 03:35:00.762338 | orchestrator | TASK [rabbitmq : Copying over rabbitmq.conf] 
*********************************** 2026-02-08 03:35:00.762370 | orchestrator | Sunday 08 February 2026 03:34:44 +0000 (0:00:01.519) 0:00:10.918 ******* 2026-02-08 03:35:00.762407 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2026-02-08 03:35:00.762417 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2026-02-08 03:35:00.762426 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2026-02-08 03:35:00.762434 | orchestrator | 2026-02-08 03:35:00.762443 | orchestrator | TASK [rabbitmq : Copying over erl_inetrc] ************************************** 2026-02-08 03:35:00.762452 | orchestrator | Sunday 08 February 2026 03:34:45 +0000 (0:00:01.798) 0:00:12.716 ******* 2026-02-08 03:35:00.762460 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2026-02-08 03:35:00.762481 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2026-02-08 03:35:00.762490 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2026-02-08 03:35:00.762499 | orchestrator | 2026-02-08 03:35:00.762507 | orchestrator | TASK [rabbitmq : Copying over advanced.config] ********************************* 2026-02-08 03:35:00.762516 | orchestrator | Sunday 08 February 2026 03:34:47 +0000 (0:00:01.327) 0:00:14.044 ******* 2026-02-08 03:35:00.762524 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2026-02-08 03:35:00.762533 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2026-02-08 03:35:00.762543 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2026-02-08 03:35:00.762557 | orchestrator | 2026-02-08 03:35:00.762571 | orchestrator | TASK 
[rabbitmq : Copying over definitions.json] ********************************
2026-02-08 03:35:00.762586 | orchestrator | Sunday 08 February 2026 03:34:48 +0000 (0:00:01.607) 0:00:15.651 *******
2026-02-08 03:35:00.762600 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2)
2026-02-08 03:35:00.762615 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2)
2026-02-08 03:35:00.762630 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2)
2026-02-08 03:35:00.762644 | orchestrator |
2026-02-08 03:35:00.762658 | orchestrator | TASK [rabbitmq : Copying over enabled_plugins] *********************************
2026-02-08 03:35:00.762672 | orchestrator | Sunday 08 February 2026 03:34:50 +0000 (0:00:01.348) 0:00:17.000 *******
2026-02-08 03:35:00.762687 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2)
2026-02-08 03:35:00.762700 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2)
2026-02-08 03:35:00.762714 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2)
2026-02-08 03:35:00.762730 | orchestrator |
2026-02-08 03:35:00.762744 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************
2026-02-08 03:35:00.762759 | orchestrator | Sunday 08 February 2026 03:34:51 +0000 (0:00:01.348) 0:00:18.348 *******
2026-02-08 03:35:00.762774 | orchestrator | skipping: [testbed-node-0]
2026-02-08 03:35:00.762792 | orchestrator | skipping: [testbed-node-1]
2026-02-08 03:35:00.762827 | orchestrator | skipping: [testbed-node-2]
2026-02-08 03:35:00.762843 | orchestrator |
2026-02-08 03:35:00.762858 | orchestrator | TASK [rabbitmq : Check rabbitmq containers] ************************************
2026-02-08 03:35:00.762874 | orchestrator | Sunday 08 February 2026 03:34:52 +0000 (0:00:00.446) 0:00:18.795 *******
2026-02-08 03:35:00.762893 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-02-08 03:35:00.762927 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-02-08 03:35:00.762954 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-02-08 03:35:00.762971 | orchestrator |
2026-02-08 03:35:00.762985 | orchestrator | TASK [rabbitmq : Creating rabbitmq volume] *************************************
2026-02-08 03:35:00.762998 | orchestrator | Sunday 08 February 2026 03:34:53 +0000 (0:00:01.228) 0:00:20.023 *******
2026-02-08 03:35:00.763013 | orchestrator | changed: [testbed-node-0]
2026-02-08 03:35:00.763029 | orchestrator | changed: [testbed-node-1]
2026-02-08 03:35:00.763042 | orchestrator | changed: [testbed-node-2]
2026-02-08 03:35:00.763057 | orchestrator |
2026-02-08 03:35:00.763072 | orchestrator | TASK [rabbitmq : Running RabbitMQ bootstrap container] *************************
2026-02-08 03:35:00.763086 | orchestrator | Sunday 08 February 2026 03:34:54 +0000 (0:00:00.794) 0:00:20.817 *******
2026-02-08 03:35:00.763100 | orchestrator | changed: [testbed-node-0]
2026-02-08 03:35:00.763114 | orchestrator | changed: [testbed-node-1]
2026-02-08 03:35:00.763128 | orchestrator | changed: [testbed-node-2]
2026-02-08 03:35:00.763142 | orchestrator |
2026-02-08 03:35:00.763156 | orchestrator | RUNNING HANDLER [rabbitmq : Restart rabbitmq container] ************************
2026-02-08 03:35:00.763183 | orchestrator | Sunday 08 February 2026 03:35:00 +0000 (0:00:06.711) 0:00:27.529 *******
2026-02-08 03:36:32.552894 | orchestrator | changed: [testbed-node-0]
2026-02-08 03:36:32.553058 | orchestrator | changed: [testbed-node-1]
2026-02-08 03:36:32.553081 | orchestrator | changed: [testbed-node-2]
2026-02-08 03:36:32.553097 | orchestrator |
2026-02-08 03:36:32.553115 | orchestrator | PLAY [Restart rabbitmq services] ***********************************************
2026-02-08 03:36:32.553132 | orchestrator |
2026-02-08 03:36:32.553147 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] *******************************
2026-02-08 03:36:32.553162 | orchestrator | Sunday 08 February 2026 03:35:01 +0000 (0:00:00.584) 0:00:28.113 *******
2026-02-08 03:36:32.553178 | orchestrator | ok: [testbed-node-0]
2026-02-08 03:36:32.553194 | orchestrator |
2026-02-08 03:36:32.553211 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] **********************
2026-02-08 03:36:32.553228 | orchestrator | Sunday 08 February 2026 03:35:01 +0000 (0:00:00.583) 0:00:28.696 *******
2026-02-08 03:36:32.553246 | orchestrator | skipping: [testbed-node-0]
2026-02-08 03:36:32.553263 | orchestrator |
2026-02-08 03:36:32.553279 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] ***********************************
2026-02-08 03:36:32.553295 | orchestrator | Sunday 08 February 2026 03:35:02 +0000 (0:00:00.239) 0:00:28.936 *******
2026-02-08 03:36:32.553309 | orchestrator | changed: [testbed-node-0]
2026-02-08 03:36:32.553325 | orchestrator |
2026-02-08 03:36:32.553341 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ********************************
2026-02-08 03:36:32.553356 | orchestrator | Sunday 08 February 2026 03:35:03 +0000 (0:00:01.566) 0:00:30.502 *******
2026-02-08 03:36:32.553371 | orchestrator | changed: [testbed-node-0]
2026-02-08 03:36:32.553387 | orchestrator |
2026-02-08 03:36:32.553403 | orchestrator | PLAY [Restart rabbitmq services] ***********************************************
2026-02-08 03:36:32.553419 | orchestrator |
2026-02-08 03:36:32.553435 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] *******************************
2026-02-08 03:36:32.553451 | orchestrator | Sunday 08 February 2026 03:35:57 +0000 (0:00:53.373) 0:01:23.876 *******
2026-02-08 03:36:32.553468 | orchestrator | ok: [testbed-node-1]
2026-02-08 03:36:32.553485 | orchestrator |
2026-02-08 03:36:32.553535 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] **********************
2026-02-08 03:36:32.553556 | orchestrator | Sunday 08 February 2026 03:35:57 +0000 (0:00:00.600) 0:01:24.476 *******
2026-02-08 03:36:32.553573 | orchestrator | skipping: [testbed-node-1]
2026-02-08 03:36:32.553591 | orchestrator |
2026-02-08 03:36:32.553610 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] ***********************************
2026-02-08 03:36:32.553628 | orchestrator | Sunday 08 February 2026 03:35:57 +0000 (0:00:00.233) 0:01:24.710 *******
2026-02-08 03:36:32.553648 | orchestrator | changed: [testbed-node-1]
2026-02-08 03:36:32.553668 | orchestrator |
2026-02-08 03:36:32.553688 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ********************************
2026-02-08 03:36:32.553708 | orchestrator | Sunday 08 February 2026 03:35:59 +0000 (0:00:01.547) 0:01:26.258 *******
2026-02-08 03:36:32.553728 | orchestrator | changed: [testbed-node-1]
2026-02-08 03:36:32.553747 | orchestrator |
2026-02-08 03:36:32.553765 | orchestrator | PLAY [Restart rabbitmq services] ***********************************************
2026-02-08 03:36:32.553783 | orchestrator |
2026-02-08 03:36:32.553801 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] *******************************
2026-02-08 03:36:32.553818 | orchestrator | Sunday 08 February 2026 03:36:12 +0000 (0:00:13.468) 0:01:39.726 *******
2026-02-08 03:36:32.553836 | orchestrator | ok: [testbed-node-2]
2026-02-08 03:36:32.553854 | orchestrator |
2026-02-08 03:36:32.553872 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] **********************
2026-02-08 03:36:32.553889 | orchestrator | Sunday 08 February 2026 03:36:13 +0000 (0:00:00.847) 0:01:40.574 *******
2026-02-08 03:36:32.553926 | orchestrator | skipping: [testbed-node-2]
2026-02-08 03:36:32.553945 | orchestrator |
2026-02-08 03:36:32.553962 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] ***********************************
2026-02-08 03:36:32.553980 | orchestrator | Sunday 08 February 2026 03:36:14 +0000 (0:00:00.280) 0:01:40.855 *******
2026-02-08 03:36:32.553997 | orchestrator | changed: [testbed-node-2]
2026-02-08 03:36:32.554080 | orchestrator |
2026-02-08 03:36:32.554101 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ********************************
2026-02-08 03:36:32.554137 | orchestrator | Sunday 08 February 2026 03:36:20 +0000 (0:00:06.575) 0:01:47.431 *******
2026-02-08 03:36:32.554156 | orchestrator | changed: [testbed-node-2]
2026-02-08 03:36:32.554174 | orchestrator |
2026-02-08 03:36:32.554193 | orchestrator | PLAY [Apply rabbitmq post-configuration] ***************************************
2026-02-08 03:36:32.554209 | orchestrator |
2026-02-08 03:36:32.554226 | orchestrator | TASK [Include rabbitmq post-deploy.yml] ****************************************
2026-02-08 03:36:32.554242 | orchestrator | Sunday 08 February 2026 03:36:29 +0000 (0:00:08.832) 0:01:56.263 *******
2026-02-08 03:36:32.554258 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-08 03:36:32.554274 | orchestrator |
2026-02-08 03:36:32.554292 | orchestrator | TASK [rabbitmq : Enable all stable feature flags] ******************************
2026-02-08 03:36:32.554310 | orchestrator | Sunday 08 February 2026 03:36:30 +0000 (0:00:00.525) 0:01:56.789 *******
2026-02-08 03:36:32.554327 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring:
2026-02-08 03:36:32.554343 | orchestrator | enable_outward_rabbitmq_True
2026-02-08 03:36:32.554360 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring:
2026-02-08 03:36:32.554376 | orchestrator | outward_rabbitmq_restart
2026-02-08 03:36:32.554418 | orchestrator | ok: [testbed-node-1]
2026-02-08 03:36:32.554435 | orchestrator | ok: [testbed-node-0]
2026-02-08 03:36:32.554450 | orchestrator | ok: [testbed-node-2]
2026-02-08 03:36:32.554465 | orchestrator |
2026-02-08 03:36:32.554481 | orchestrator | PLAY [Apply role rabbitmq (outward)] *******************************************
2026-02-08 03:36:32.554495 | orchestrator | skipping: no hosts matched
2026-02-08 03:36:32.554539 | orchestrator |
2026-02-08 03:36:32.554585 | orchestrator | PLAY [Restart rabbitmq (outward) services] *************************************
2026-02-08 03:36:32.554602 | orchestrator | skipping: no hosts matched
2026-02-08 03:36:32.554618 | orchestrator |
2026-02-08 03:36:32.554636 | orchestrator | PLAY [Apply rabbitmq (outward) post-configuration] *****************************
2026-02-08 03:36:32.554651 | orchestrator | skipping: no hosts matched
2026-02-08 03:36:32.554667 | orchestrator |
2026-02-08 03:36:32.554685 | orchestrator | PLAY RECAP *********************************************************************
2026-02-08 03:36:32.554730 | orchestrator | testbed-node-0 : ok=23  changed=14  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0
2026-02-08 03:36:32.554749 | orchestrator | testbed-node-1 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-08 03:36:32.554764 | orchestrator | testbed-node-2 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-08 03:36:32.554779 | orchestrator |
2026-02-08 03:36:32.554795 | orchestrator |
2026-02-08 03:36:32.554809 | orchestrator | TASKS RECAP ********************************************************************
2026-02-08 03:36:32.554824 | orchestrator | Sunday 08 February 2026 03:36:32 +0000 (0:00:02.160) 0:01:58.950 *******
2026-02-08 03:36:32.554840 | orchestrator | ===============================================================================
2026-02-08 03:36:32.554857 | orchestrator | rabbitmq : Waiting for rabbitmq to start ------------------------------- 75.67s
2026-02-08 03:36:32.554872 | orchestrator | rabbitmq : Restart rabbitmq container ----------------------------------- 9.69s
2026-02-08 03:36:32.554886 | orchestrator | rabbitmq : Running RabbitMQ bootstrap container ------------------------- 6.71s
2026-02-08 03:36:32.554900 | orchestrator | rabbitmq : Enable all stable feature flags ------------------------------ 2.16s
2026-02-08 03:36:32.554915 | orchestrator | rabbitmq : Get info on RabbitMQ container ------------------------------- 2.03s
2026-02-08 03:36:32.554931 | orchestrator | rabbitmq : Copying over rabbitmq.conf ----------------------------------- 1.80s
2026-02-08 03:36:32.554947 | orchestrator | rabbitmq : Copying over config.json files for services ------------------ 1.73s
2026-02-08 03:36:32.554964 | orchestrator | rabbitmq : Copying over advanced.config --------------------------------- 1.61s
2026-02-08 03:36:32.554998 | orchestrator | rabbitmq : Copying over rabbitmq-env.conf ------------------------------- 1.52s
2026-02-08 03:36:32.555015 | orchestrator | rabbitmq : Copying over enabled_plugins --------------------------------- 1.35s
2026-02-08 03:36:32.555032 | orchestrator | rabbitmq : Copying over definitions.json -------------------------------- 1.35s
2026-02-08 03:36:32.555048 | orchestrator | rabbitmq : Copying over erl_inetrc -------------------------------------- 1.33s
2026-02-08 03:36:32.555066 | orchestrator | rabbitmq : Check rabbitmq containers ------------------------------------ 1.23s
2026-02-08 03:36:32.555084 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 1.04s
2026-02-08 03:36:32.555099 | orchestrator | rabbitmq : include_tasks ------------------------------------------------ 0.90s
2026-02-08 03:36:32.555114 | orchestrator | rabbitmq : Ensuring config directories exist ---------------------------- 0.88s
2026-02-08 03:36:32.555129 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 0.85s
2026-02-08 03:36:32.555145 | orchestrator | rabbitmq : Creating rabbitmq volume ------------------------------------- 0.79s
2026-02-08 03:36:32.555161 | orchestrator | rabbitmq : Put RabbitMQ node into maintenance mode ---------------------- 0.75s
2026-02-08 03:36:32.555178 | orchestrator | rabbitmq : Catch when RabbitMQ is being downgraded ---------------------- 0.62s
2026-02-08 03:36:35.106877 | orchestrator | 2026-02-08 03:36:35 | INFO  | Task 8495122d-03ee-40bb-b3dc-3d92b9e337eb (openvswitch) was prepared for execution.
2026-02-08 03:36:35.106967 | orchestrator | 2026-02-08 03:36:35 | INFO  | It takes a moment until task 8495122d-03ee-40bb-b3dc-3d92b9e337eb (openvswitch) has been started and output is visible here.
2026-02-08 03:36:48.209158 | orchestrator |
2026-02-08 03:36:48.209306 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-02-08 03:36:48.209335 | orchestrator |
2026-02-08 03:36:48.209357 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-02-08 03:36:48.209378 | orchestrator | Sunday 08 February 2026 03:36:39 +0000 (0:00:00.285) 0:00:00.285 *******
2026-02-08 03:36:48.209396 | orchestrator | ok: [testbed-node-0]
2026-02-08 03:36:48.209415 | orchestrator | ok: [testbed-node-1]
2026-02-08 03:36:48.209432 | orchestrator | ok: [testbed-node-2]
2026-02-08 03:36:48.209450 | orchestrator | ok: [testbed-node-3]
2026-02-08 03:36:48.209468 | orchestrator | ok: [testbed-node-4]
2026-02-08 03:36:48.209484 | orchestrator | ok: [testbed-node-5]
2026-02-08 03:36:48.209502 | orchestrator |
2026-02-08 03:36:48.209520 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-02-08 03:36:48.209599 | orchestrator | Sunday 08 February 2026 03:36:40 +0000 (0:00:00.769) 0:00:01.054 *******
2026-02-08 03:36:48.209619 | orchestrator | ok: [testbed-node-0] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2026-02-08 03:36:48.209638 | orchestrator | ok: [testbed-node-1] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2026-02-08 03:36:48.209656 | orchestrator | ok: [testbed-node-2] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2026-02-08 03:36:48.209674 | orchestrator | ok: [testbed-node-3] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2026-02-08 03:36:48.209692 | orchestrator | ok: [testbed-node-4] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2026-02-08 03:36:48.209711 | orchestrator | ok: [testbed-node-5] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2026-02-08 03:36:48.209730 | orchestrator |
2026-02-08 03:36:48.209752 | orchestrator | PLAY [Apply role openvswitch] **************************************************
2026-02-08 03:36:48.209772 | orchestrator |
2026-02-08 03:36:48.209792 | orchestrator | TASK [openvswitch : include_tasks] *********************************************
2026-02-08 03:36:48.209810 | orchestrator | Sunday 08 February 2026 03:36:40 +0000 (0:00:00.629) 0:00:01.683 *******
2026-02-08 03:36:48.209832 | orchestrator | included: /ansible/roles/openvswitch/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-02-08 03:36:48.209853 | orchestrator |
2026-02-08 03:36:48.209875 | orchestrator | TASK [module-load : Load modules] **********************************************
2026-02-08 03:36:48.209929 | orchestrator | Sunday 08 February 2026 03:36:42 +0000 (0:00:01.226) 0:00:02.910 *******
2026-02-08 03:36:48.209950 | orchestrator | changed: [testbed-node-1] => (item=openvswitch)
2026-02-08 03:36:48.209970 | orchestrator | changed: [testbed-node-0] => (item=openvswitch)
2026-02-08 03:36:48.209992 | orchestrator | changed: [testbed-node-2] => (item=openvswitch)
2026-02-08 03:36:48.210012 | orchestrator | changed: [testbed-node-3] => (item=openvswitch)
2026-02-08 03:36:48.210107 | orchestrator | changed: [testbed-node-4] => (item=openvswitch)
2026-02-08 03:36:48.210127 | orchestrator | changed: [testbed-node-5] => (item=openvswitch)
2026-02-08 03:36:48.210147 | orchestrator |
2026-02-08 03:36:48.210167 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************
2026-02-08 03:36:48.210188 | orchestrator | Sunday 08 February 2026 03:36:43 +0000 (0:00:01.265) 0:00:04.176 *******
2026-02-08 03:36:48.210209 | orchestrator | changed: [testbed-node-2] => (item=openvswitch)
2026-02-08 03:36:48.210230 | orchestrator | changed: [testbed-node-1] => (item=openvswitch)
2026-02-08 03:36:48.210251 | orchestrator | changed: [testbed-node-3] => (item=openvswitch)
2026-02-08 03:36:48.210272 | orchestrator | changed: [testbed-node-0] => (item=openvswitch)
2026-02-08 03:36:48.210292 | orchestrator | changed: [testbed-node-4] => (item=openvswitch)
2026-02-08 03:36:48.210312 | orchestrator | changed: [testbed-node-5] => (item=openvswitch)
2026-02-08 03:36:48.210330 | orchestrator |
2026-02-08 03:36:48.210349 | orchestrator | TASK [module-load : Drop module persistence] ***********************************
2026-02-08 03:36:48.210370 | orchestrator | Sunday 08 February 2026 03:36:44 +0000 (0:00:01.510) 0:00:05.686 *******
2026-02-08 03:36:48.210391 | orchestrator | skipping: [testbed-node-0] => (item=openvswitch)
2026-02-08 03:36:48.210412 | orchestrator | skipping: [testbed-node-0]
2026-02-08 03:36:48.210433 | orchestrator | skipping: [testbed-node-1] => (item=openvswitch)
2026-02-08 03:36:48.210453 | orchestrator | skipping: [testbed-node-1]
2026-02-08 03:36:48.210503 | orchestrator | skipping: [testbed-node-2] => (item=openvswitch)
2026-02-08 03:36:48.210526 | orchestrator | skipping: [testbed-node-2]
2026-02-08 03:36:48.210570 | orchestrator | skipping: [testbed-node-3] => (item=openvswitch)
2026-02-08 03:36:48.210588 | orchestrator | skipping: [testbed-node-3]
2026-02-08 03:36:48.210607 | orchestrator | skipping: [testbed-node-4] => (item=openvswitch)
2026-02-08 03:36:48.210625 | orchestrator | skipping: [testbed-node-4]
2026-02-08 03:36:48.210641 | orchestrator | skipping: [testbed-node-5] => (item=openvswitch)
2026-02-08 03:36:48.210660 | orchestrator | skipping: [testbed-node-5]
2026-02-08 03:36:48.210677 | orchestrator |
2026-02-08 03:36:48.210695 | orchestrator | TASK [openvswitch : Create /run/openvswitch directory on host] *****************
2026-02-08 03:36:48.210713 | orchestrator | Sunday 08 February 2026 03:36:46 +0000 (0:00:01.241) 0:00:06.927 *******
2026-02-08 03:36:48.210733 | orchestrator | skipping: [testbed-node-0]
2026-02-08 03:36:48.210751 | orchestrator | skipping: [testbed-node-1]
2026-02-08 03:36:48.210769 | orchestrator | skipping: [testbed-node-2]
2026-02-08 03:36:48.210788 | orchestrator | skipping: [testbed-node-3]
2026-02-08 03:36:48.210806 | orchestrator | skipping: [testbed-node-4]
2026-02-08 03:36:48.210824 | orchestrator | skipping: [testbed-node-5]
2026-02-08 03:36:48.210841 | orchestrator |
2026-02-08 03:36:48.210861 | orchestrator | TASK [openvswitch : Ensuring config directories exist] *************************
2026-02-08 03:36:48.210900 | orchestrator | Sunday 08 February 2026 03:36:46 +0000 (0:00:00.741) 0:00:07.668 *******
2026-02-08 03:36:48.210955 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-02-08 03:36:48.210999 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-02-08 03:36:48.211018 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-02-08 03:36:48.211036 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-02-08 03:36:48.211055 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-02-08 03:36:48.211086 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-02-08 03:36:50.535509 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-02-08 03:36:50.535630 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-02-08 03:36:50.535644 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-02-08 03:36:50.535653 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-02-08 03:36:50.535660 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-02-08 03:36:50.535688 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-02-08 03:36:50.535703 | orchestrator |
2026-02-08 03:36:50.535713 | orchestrator | TASK [openvswitch : Copying over config.json files for services] ***************
2026-02-08 03:36:50.535722 | orchestrator | Sunday 08 February 2026 03:36:48 +0000 (0:00:01.422) 0:00:09.090 *******
2026-02-08 03:36:50.535730 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-02-08 03:36:50.535739 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-02-08 03:36:50.535747 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-02-08 03:36:50.535754 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-02-08 03:36:50.535765 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-02-08 03:36:50.535785 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-02-08 03:36:53.168130 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-02-08 03:36:53.168241 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-02-08 03:36:53.168257 | orchestrator | changed: [testbed-node-2] => 
(item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-08 03:36:53.168269 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-08 03:36:53.168299 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-08 03:36:53.168358 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-08 03:36:53.168371 | orchestrator | 2026-02-08 03:36:53.168385 | orchestrator | TASK [openvswitch : Copying over ovs-vsctl wrapper] **************************** 2026-02-08 03:36:53.168397 | orchestrator | Sunday 08 February 2026 03:36:50 +0000 (0:00:02.301) 0:00:11.392 ******* 2026-02-08 03:36:53.168408 | orchestrator | skipping: [testbed-node-0] 2026-02-08 03:36:53.168420 | orchestrator | skipping: [testbed-node-1] 2026-02-08 03:36:53.168431 | orchestrator | skipping: [testbed-node-2] 2026-02-08 03:36:53.168442 | orchestrator | skipping: [testbed-node-3] 2026-02-08 03:36:53.168452 | orchestrator | skipping: [testbed-node-4] 2026-02-08 03:36:53.168463 | orchestrator | skipping: [testbed-node-5] 2026-02-08 03:36:53.168474 | orchestrator | 2026-02-08 03:36:53.168494 | orchestrator | TASK [openvswitch : Check openvswitch containers] ****************************** 2026-02-08 03:36:53.168511 | orchestrator | Sunday 08 February 2026 03:36:51 +0000 (0:00:00.973) 0:00:12.366 ******* 2026-02-08 03:36:53.168532 | orchestrator | changed: [testbed-node-4] => (item={'key': 
'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-08 03:36:53.168588 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-08 03:36:53.168609 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 
'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-08 03:36:53.168652 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-08 03:36:53.168690 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-08 03:37:17.854742 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 
'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-08 03:37:17.854822 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-08 03:37:17.854830 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-08 
03:37:17.854836 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-08 03:37:17.854857 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-08 03:37:17.854874 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-08 03:37:17.854879 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-08 03:37:17.854884 | orchestrator | 2026-02-08 03:37:17.854890 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-02-08 03:37:17.854896 | orchestrator | Sunday 08 February 2026 03:36:53 +0000 (0:00:01.667) 0:00:14.033 ******* 2026-02-08 03:37:17.854901 | orchestrator | 2026-02-08 03:37:17.854906 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-02-08 03:37:17.854911 | orchestrator | Sunday 08 February 2026 03:36:53 +0000 (0:00:00.321) 0:00:14.355 ******* 2026-02-08 03:37:17.854915 | orchestrator | 2026-02-08 03:37:17.854920 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-02-08 03:37:17.854925 | orchestrator | Sunday 08 February 2026 03:36:53 +0000 (0:00:00.136) 0:00:14.491 ******* 2026-02-08 03:37:17.854930 | orchestrator | 2026-02-08 03:37:17.854934 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 
2026-02-08 03:37:17.854939 | orchestrator | Sunday 08 February 2026 03:36:53 +0000 (0:00:00.135) 0:00:14.627 ******* 2026-02-08 03:37:17.854944 | orchestrator | 2026-02-08 03:37:17.854948 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-02-08 03:37:17.854953 | orchestrator | Sunday 08 February 2026 03:36:53 +0000 (0:00:00.142) 0:00:14.769 ******* 2026-02-08 03:37:17.854963 | orchestrator | 2026-02-08 03:37:17.854968 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-02-08 03:37:17.854973 | orchestrator | Sunday 08 February 2026 03:36:54 +0000 (0:00:00.138) 0:00:14.907 ******* 2026-02-08 03:37:17.854977 | orchestrator | 2026-02-08 03:37:17.854982 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-db-server container] ******** 2026-02-08 03:37:17.854987 | orchestrator | Sunday 08 February 2026 03:36:54 +0000 (0:00:00.132) 0:00:15.040 ******* 2026-02-08 03:37:17.854992 | orchestrator | changed: [testbed-node-0] 2026-02-08 03:37:17.854998 | orchestrator | changed: [testbed-node-4] 2026-02-08 03:37:17.855002 | orchestrator | changed: [testbed-node-1] 2026-02-08 03:37:17.855007 | orchestrator | changed: [testbed-node-2] 2026-02-08 03:37:17.855012 | orchestrator | changed: [testbed-node-3] 2026-02-08 03:37:17.855016 | orchestrator | changed: [testbed-node-5] 2026-02-08 03:37:17.855021 | orchestrator | 2026-02-08 03:37:17.855026 | orchestrator | RUNNING HANDLER [openvswitch : Waiting for openvswitch_db service to be ready] *** 2026-02-08 03:37:17.855031 | orchestrator | Sunday 08 February 2026 03:37:03 +0000 (0:00:08.824) 0:00:23.865 ******* 2026-02-08 03:37:17.855036 | orchestrator | ok: [testbed-node-1] 2026-02-08 03:37:17.855042 | orchestrator | ok: [testbed-node-0] 2026-02-08 03:37:17.855047 | orchestrator | ok: [testbed-node-2] 2026-02-08 03:37:17.855060 | orchestrator | ok: [testbed-node-3] 2026-02-08 03:37:17.855068 | orchestrator | ok: 
[testbed-node-4] 2026-02-08 03:37:17.855075 | orchestrator | ok: [testbed-node-5] 2026-02-08 03:37:17.855082 | orchestrator | 2026-02-08 03:37:17.855089 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] ********* 2026-02-08 03:37:17.855096 | orchestrator | Sunday 08 February 2026 03:37:04 +0000 (0:00:01.127) 0:00:24.992 ******* 2026-02-08 03:37:17.855103 | orchestrator | changed: [testbed-node-0] 2026-02-08 03:37:17.855110 | orchestrator | changed: [testbed-node-3] 2026-02-08 03:37:17.855117 | orchestrator | changed: [testbed-node-4] 2026-02-08 03:37:17.855124 | orchestrator | changed: [testbed-node-5] 2026-02-08 03:37:17.855131 | orchestrator | changed: [testbed-node-2] 2026-02-08 03:37:17.855141 | orchestrator | changed: [testbed-node-1] 2026-02-08 03:37:17.855149 | orchestrator | 2026-02-08 03:37:17.855156 | orchestrator | TASK [openvswitch : Set system-id, hostname and hw-offload] ******************** 2026-02-08 03:37:17.855163 | orchestrator | Sunday 08 February 2026 03:37:11 +0000 (0:00:07.628) 0:00:32.621 ******* 2026-02-08 03:37:17.855169 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-0'}) 2026-02-08 03:37:17.855177 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-2'}) 2026-02-08 03:37:17.855185 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-3'}) 2026-02-08 03:37:17.855193 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-1'}) 2026-02-08 03:37:17.855202 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-4'}) 2026-02-08 03:37:17.855210 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-5'}) 2026-02-08 
03:37:17.855218 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-3'}) 2026-02-08 03:37:17.855231 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-1'}) 2026-02-08 03:37:30.787739 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-2'}) 2026-02-08 03:37:30.787857 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-4'}) 2026-02-08 03:37:30.787883 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-0'}) 2026-02-08 03:37:30.787939 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-5'}) 2026-02-08 03:37:30.787960 | orchestrator | ok: [testbed-node-3] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-02-08 03:37:30.787977 | orchestrator | ok: [testbed-node-1] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-02-08 03:37:30.787997 | orchestrator | ok: [testbed-node-2] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-02-08 03:37:30.788015 | orchestrator | ok: [testbed-node-4] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-02-08 03:37:30.788034 | orchestrator | ok: [testbed-node-5] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-02-08 03:37:30.788053 | orchestrator | ok: [testbed-node-0] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-02-08 03:37:30.788073 | orchestrator | 2026-02-08 03:37:30.788126 | orchestrator | TASK [openvswitch : Ensuring OVS bridge is properly setup] ********************* 
2026-02-08 03:37:30.788140 | orchestrator | Sunday 08 February 2026 03:37:17 +0000 (0:00:05.997) 0:00:38.618 ******* 2026-02-08 03:37:30.788164 | orchestrator | skipping: [testbed-node-3] => (item=br-ex)  2026-02-08 03:37:30.788177 | orchestrator | skipping: [testbed-node-3] 2026-02-08 03:37:30.788192 | orchestrator | skipping: [testbed-node-4] => (item=br-ex)  2026-02-08 03:37:30.788204 | orchestrator | skipping: [testbed-node-4] 2026-02-08 03:37:30.788218 | orchestrator | skipping: [testbed-node-5] => (item=br-ex)  2026-02-08 03:37:30.788230 | orchestrator | skipping: [testbed-node-5] 2026-02-08 03:37:30.788242 | orchestrator | changed: [testbed-node-1] => (item=br-ex) 2026-02-08 03:37:30.788269 | orchestrator | changed: [testbed-node-0] => (item=br-ex) 2026-02-08 03:37:30.788288 | orchestrator | changed: [testbed-node-2] => (item=br-ex) 2026-02-08 03:37:30.788308 | orchestrator | 2026-02-08 03:37:30.788326 | orchestrator | TASK [openvswitch : Ensuring OVS ports are properly setup] ********************* 2026-02-08 03:37:30.788345 | orchestrator | Sunday 08 February 2026 03:37:20 +0000 (0:00:02.295) 0:00:40.914 ******* 2026-02-08 03:37:30.788363 | orchestrator | skipping: [testbed-node-3] => (item=['br-ex', 'vxlan0'])  2026-02-08 03:37:30.788381 | orchestrator | skipping: [testbed-node-3] 2026-02-08 03:37:30.788398 | orchestrator | skipping: [testbed-node-4] => (item=['br-ex', 'vxlan0'])  2026-02-08 03:37:30.788415 | orchestrator | skipping: [testbed-node-4] 2026-02-08 03:37:30.788432 | orchestrator | skipping: [testbed-node-5] => (item=['br-ex', 'vxlan0'])  2026-02-08 03:37:30.788447 | orchestrator | skipping: [testbed-node-5] 2026-02-08 03:37:30.788464 | orchestrator | changed: [testbed-node-0] => (item=['br-ex', 'vxlan0']) 2026-02-08 03:37:30.788479 | orchestrator | changed: [testbed-node-1] => (item=['br-ex', 'vxlan0']) 2026-02-08 03:37:30.788498 | orchestrator | changed: [testbed-node-2] => (item=['br-ex', 'vxlan0']) 2026-02-08 03:37:30.788518 | orchestrator 
| 2026-02-08 03:37:30.788539 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] ********* 2026-02-08 03:37:30.788560 | orchestrator | Sunday 08 February 2026 03:37:23 +0000 (0:00:02.971) 0:00:43.886 ******* 2026-02-08 03:37:30.788577 | orchestrator | changed: [testbed-node-0] 2026-02-08 03:37:30.788624 | orchestrator | changed: [testbed-node-1] 2026-02-08 03:37:30.788644 | orchestrator | changed: [testbed-node-4] 2026-02-08 03:37:30.788664 | orchestrator | changed: [testbed-node-2] 2026-02-08 03:37:30.788681 | orchestrator | changed: [testbed-node-5] 2026-02-08 03:37:30.788699 | orchestrator | changed: [testbed-node-3] 2026-02-08 03:37:30.788710 | orchestrator | 2026-02-08 03:37:30.788723 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-08 03:37:30.788764 | orchestrator | testbed-node-0 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-02-08 03:37:30.788785 | orchestrator | testbed-node-1 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-02-08 03:37:30.788822 | orchestrator | testbed-node-2 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-02-08 03:37:30.788843 | orchestrator | testbed-node-3 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-02-08 03:37:30.788880 | orchestrator | testbed-node-4 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-02-08 03:37:30.788893 | orchestrator | testbed-node-5 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-02-08 03:37:30.788904 | orchestrator | 2026-02-08 03:37:30.788914 | orchestrator | 2026-02-08 03:37:30.788937 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-08 03:37:30.788949 | orchestrator | Sunday 08 February 2026 03:37:30 +0000 (0:00:07.231) 0:00:51.117 ******* 2026-02-08 03:37:30.788982 | 
orchestrator | =============================================================================== 2026-02-08 03:37:30.788993 | orchestrator | openvswitch : Restart openvswitch-vswitchd container ------------------- 14.86s 2026-02-08 03:37:30.789004 | orchestrator | openvswitch : Restart openvswitch-db-server container ------------------- 8.82s 2026-02-08 03:37:30.789015 | orchestrator | openvswitch : Set system-id, hostname and hw-offload -------------------- 6.00s 2026-02-08 03:37:30.789026 | orchestrator | openvswitch : Ensuring OVS ports are properly setup --------------------- 2.97s 2026-02-08 03:37:30.789037 | orchestrator | openvswitch : Copying over config.json files for services --------------- 2.30s 2026-02-08 03:37:30.789048 | orchestrator | openvswitch : Ensuring OVS bridge is properly setup --------------------- 2.30s 2026-02-08 03:37:30.789058 | orchestrator | openvswitch : Check openvswitch containers ------------------------------ 1.67s 2026-02-08 03:37:30.789069 | orchestrator | module-load : Persist modules via modules-load.d ------------------------ 1.51s 2026-02-08 03:37:30.789080 | orchestrator | openvswitch : Ensuring config directories exist ------------------------- 1.42s 2026-02-08 03:37:30.789091 | orchestrator | module-load : Load modules ---------------------------------------------- 1.27s 2026-02-08 03:37:30.789101 | orchestrator | module-load : Drop module persistence ----------------------------------- 1.24s 2026-02-08 03:37:30.789112 | orchestrator | openvswitch : include_tasks --------------------------------------------- 1.23s 2026-02-08 03:37:30.789123 | orchestrator | openvswitch : Waiting for openvswitch_db service to be ready ------------ 1.13s 2026-02-08 03:37:30.789134 | orchestrator | openvswitch : Flush Handlers -------------------------------------------- 1.01s 2026-02-08 03:37:30.789145 | orchestrator | openvswitch : Copying over ovs-vsctl wrapper ---------------------------- 0.97s 2026-02-08 03:37:30.789156 | orchestrator | 
Group hosts based on Kolla action --------------------------------------- 0.77s 2026-02-08 03:37:30.789175 | orchestrator | openvswitch : Create /run/openvswitch directory on host ----------------- 0.74s 2026-02-08 03:37:30.789189 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.63s 2026-02-08 03:37:33.182867 | orchestrator | 2026-02-08 03:37:33 | INFO  | Task 660f24b7-7105-4e70-b3a2-a7999ef3402b (ovn) was prepared for execution. 2026-02-08 03:37:33.182987 | orchestrator | 2026-02-08 03:37:33 | INFO  | It takes a moment until task 660f24b7-7105-4e70-b3a2-a7999ef3402b (ovn) has been started and output is visible here. 2026-02-08 03:37:44.073900 | orchestrator | 2026-02-08 03:37:44.074104 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-08 03:37:44.074134 | orchestrator | 2026-02-08 03:37:44.074153 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-08 03:37:44.074172 | orchestrator | Sunday 08 February 2026 03:37:37 +0000 (0:00:00.170) 0:00:00.170 ******* 2026-02-08 03:37:44.074190 | orchestrator | ok: [testbed-node-3] 2026-02-08 03:37:44.074208 | orchestrator | ok: [testbed-node-4] 2026-02-08 03:37:44.074256 | orchestrator | ok: [testbed-node-5] 2026-02-08 03:37:44.074274 | orchestrator | ok: [testbed-node-0] 2026-02-08 03:37:44.074289 | orchestrator | ok: [testbed-node-1] 2026-02-08 03:37:44.074307 | orchestrator | ok: [testbed-node-2] 2026-02-08 03:37:44.074323 | orchestrator | 2026-02-08 03:37:44.074340 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-08 03:37:44.074357 | orchestrator | Sunday 08 February 2026 03:37:38 +0000 (0:00:00.710) 0:00:00.880 ******* 2026-02-08 03:37:44.074459 | orchestrator | ok: [testbed-node-3] => (item=enable_ovn_True) 2026-02-08 03:37:44.074479 | orchestrator | ok: [testbed-node-4] => (item=enable_ovn_True) 2026-02-08 
03:37:44.074499 | orchestrator | ok: [testbed-node-5] => (item=enable_ovn_True)
2026-02-08 03:37:44.074517 | orchestrator | ok: [testbed-node-0] => (item=enable_ovn_True)
2026-02-08 03:37:44.074537 | orchestrator | ok: [testbed-node-1] => (item=enable_ovn_True)
2026-02-08 03:37:44.074555 | orchestrator | ok: [testbed-node-2] => (item=enable_ovn_True)
2026-02-08 03:37:44.074573 | orchestrator |
2026-02-08 03:37:44.074592 | orchestrator | PLAY [Apply role ovn-controller] ***********************************************
2026-02-08 03:37:44.074673 | orchestrator |
2026-02-08 03:37:44.074692 | orchestrator | TASK [ovn-controller : include_tasks] ******************************************
2026-02-08 03:37:44.074726 | orchestrator | Sunday 08 February 2026 03:37:39 +0000 (0:00:00.844) 0:00:01.724 *******
2026-02-08 03:37:44.074746 | orchestrator | included: /ansible/roles/ovn-controller/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-08 03:37:44.074766 | orchestrator |
2026-02-08 03:37:44.074784 | orchestrator | TASK [ovn-controller : Ensuring config directories exist] **********************
2026-02-08 03:37:44.074800 | orchestrator | Sunday 08 February 2026 03:37:40 +0000 (0:00:01.179) 0:00:02.904 *******
2026-02-08 03:37:44.074820 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-08 03:37:44.074840 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-08 03:37:44.074858 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-08 03:37:44.074875 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-08 03:37:44.074892 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-08 03:37:44.074946 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-08 03:37:44.074965 | orchestrator |
2026-02-08 03:37:44.074982 | orchestrator | TASK [ovn-controller : Copying over config.json files for services] ************
2026-02-08 03:37:44.074998 | orchestrator | Sunday 08 February 2026 03:37:41 +0000 (0:00:01.224) 0:00:04.129 *******
2026-02-08 03:37:44.075015 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-08 03:37:44.075033 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-08 03:37:44.075057 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-08 03:37:44.075075 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-08 03:37:44.075092 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-08 03:37:44.075108 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-08 03:37:44.075124 | orchestrator |
2026-02-08 03:37:44.075140 | orchestrator | TASK [ovn-controller : Ensuring systemd override directory exists] *************
2026-02-08 03:37:44.075156 | orchestrator | Sunday 08 February 2026 03:37:42 +0000 (0:00:01.421) 0:00:05.551 *******
2026-02-08 03:37:44.075174 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-08 03:37:44.075202 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-08 03:37:44.075232 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-08 03:38:07.420341 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-08 03:38:07.420500 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-08 03:38:07.420530 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-08 03:38:07.420552 | orchestrator |
2026-02-08 03:38:07.420572 | orchestrator | TASK [ovn-controller : Copying over systemd override] **************************
2026-02-08 03:38:07.420593 | orchestrator | Sunday 08 February 2026 03:37:44 +0000 (0:00:01.145) 0:00:06.697 *******
2026-02-08 03:38:07.420613 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-08 03:38:07.420633 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-08 03:38:07.420680 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-08 03:38:07.420717 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-08 03:38:07.420729 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-08 03:38:07.420760 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-08 03:38:07.420771 | orchestrator |
2026-02-08 03:38:07.420782 | orchestrator | TASK [ovn-controller : Check ovn-controller containers] ************************
2026-02-08 03:38:07.420794 | orchestrator | Sunday 08 February 2026 03:37:45 +0000 (0:00:01.529) 0:00:08.226 *******
2026-02-08 03:38:07.420805 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-08 03:38:07.420823 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-08 03:38:07.420836 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-08 03:38:07.420856 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-08 03:38:07.420876 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-08 03:38:07.420911 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-08 03:38:07.420928 | orchestrator |
2026-02-08 03:38:07.420941 | orchestrator | TASK [ovn-controller : Create br-int bridge on OpenvSwitch] ********************
2026-02-08 03:38:07.420955 | orchestrator | Sunday 08 February 2026 03:37:46 +0000 (0:00:01.337) 0:00:09.563 *******
2026-02-08 03:38:07.420968 | orchestrator | changed: [testbed-node-4]
2026-02-08 03:38:07.420983 | orchestrator | changed: [testbed-node-5]
2026-02-08 03:38:07.420995 | orchestrator | changed: [testbed-node-3]
2026-02-08 03:38:07.421009 | orchestrator | changed: [testbed-node-0]
2026-02-08 03:38:07.421021 | orchestrator | changed: [testbed-node-1]
2026-02-08 03:38:07.421034 | orchestrator | changed: [testbed-node-2]
2026-02-08 03:38:07.421047 | orchestrator |
2026-02-08 03:38:07.421060 | orchestrator | TASK [ovn-controller : Configure OVN in OVSDB] *********************************
2026-02-08 03:38:07.421075 | orchestrator | Sunday 08 February 2026 03:37:49 +0000 (0:00:02.329) 0:00:11.893 *******
2026-02-08 03:38:07.421095 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.14'})
2026-02-08 03:38:07.421115 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.13'})
2026-02-08 03:38:07.421144 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.15'})
2026-02-08 03:38:07.421165 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.11'})
2026-02-08 03:38:07.421183 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.10'})
2026-02-08 03:38:07.421200 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.12'})
2026-02-08 03:38:07.421232 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2026-02-08 03:38:41.967022 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2026-02-08 03:38:41.967152 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2026-02-08 03:38:41.967174 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2026-02-08 03:38:41.967191 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2026-02-08 03:38:41.967206 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2026-02-08 03:38:41.967223 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
2026-02-08 03:38:41.967242 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
2026-02-08 03:38:41.967279 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
2026-02-08 03:38:41.967298 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
2026-02-08 03:38:41.967308 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
2026-02-08 03:38:41.967316 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
2026-02-08 03:38:41.967349 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2026-02-08 03:38:41.967359 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2026-02-08 03:38:41.967368 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2026-02-08 03:38:41.967376 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2026-02-08 03:38:41.967385 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2026-02-08 03:38:41.967393 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2026-02-08 03:38:41.967402 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2026-02-08 03:38:41.967410 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2026-02-08 03:38:41.967419 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2026-02-08 03:38:41.967428 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2026-02-08 03:38:41.967437 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2026-02-08 03:38:41.967445 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2026-02-08 03:38:41.967454 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-monitor-all', 'value': False})
2026-02-08 03:38:41.967462 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-monitor-all', 'value': False})
2026-02-08 03:38:41.967471 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-monitor-all', 'value': False})
2026-02-08 03:38:41.967479 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-monitor-all', 'value': False})
2026-02-08 03:38:41.967488 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-monitor-all', 'value': False})
2026-02-08 03:38:41.967496 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-monitor-all', 'value': False})
2026-02-08 03:38:41.967505 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'})
2026-02-08 03:38:41.967516 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'})
2026-02-08 03:38:41.967527 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'})
2026-02-08 03:38:41.967537 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'})
2026-02-08 03:38:41.967546 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'})
2026-02-08 03:38:41.967556 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'})
2026-02-08 03:38:41.967566 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:2f:fa:44', 'state': 'present'})
2026-02-08 03:38:41.967603 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:89:18:56', 'state': 'present'})
2026-02-08 03:38:41.967617 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:71:3a:c3', 'state': 'present'})
2026-02-08 03:38:41.967632 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:33:12:50', 'state': 'absent'})
2026-02-08 03:38:41.967646 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:52:c1:40', 'state': 'absent'})
2026-02-08 03:38:41.967672 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:29:4a:9b', 'state': 'absent'})
2026-02-08 03:38:41.967760 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'})
2026-02-08 03:38:41.967788 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'})
2026-02-08 03:38:41.967805 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'})
2026-02-08 03:38:41.967822 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'})
2026-02-08 03:38:41.967837 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'})
2026-02-08 03:38:41.967852 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'})
2026-02-08 03:38:41.967868 | orchestrator |
2026-02-08 03:38:41.967884 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2026-02-08 03:38:41.967901 | orchestrator | Sunday 08 February 2026 03:38:06 +0000 (0:00:17.569) 0:00:29.463 *******
2026-02-08 03:38:41.967910 | orchestrator |
2026-02-08 03:38:41.967919 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2026-02-08 03:38:41.967928 | orchestrator | Sunday 08 February 2026 03:38:07 +0000 (0:00:00.239) 0:00:29.702 *******
2026-02-08 03:38:41.967936 | orchestrator |
2026-02-08 03:38:41.967945 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2026-02-08 03:38:41.967957 | orchestrator | Sunday 08 February 2026 03:38:07 +0000 (0:00:00.063) 0:00:29.766 *******
2026-02-08 03:38:41.967970 | orchestrator |
2026-02-08 03:38:41.967984 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2026-02-08 03:38:41.967998 | orchestrator | Sunday 08 February 2026 03:38:07 +0000 (0:00:00.072) 0:00:29.838 *******
2026-02-08 03:38:41.968012 | orchestrator |
2026-02-08 03:38:41.968026 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2026-02-08 03:38:41.968041 | orchestrator | Sunday 08 February 2026 03:38:07 +0000 (0:00:00.063) 0:00:29.902 *******
2026-02-08 03:38:41.968056 | orchestrator |
2026-02-08 03:38:41.968070 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2026-02-08 03:38:41.968084 | orchestrator | Sunday 08 February 2026 03:38:07 +0000 (0:00:00.065) 0:00:29.967 *******
2026-02-08 03:38:41.968100 | orchestrator |
2026-02-08 03:38:41.968116 | orchestrator | RUNNING HANDLER [ovn-controller : Reload systemd config] ***********************
2026-02-08 03:38:41.968131 | orchestrator | Sunday 08 February 2026 03:38:07 +0000 (0:00:00.064) 0:00:30.032 *******
2026-02-08 03:38:41.968147 | orchestrator | ok: [testbed-node-4]
2026-02-08 03:38:41.968163 | orchestrator | ok: [testbed-node-3]
2026-02-08 03:38:41.968178 | orchestrator | ok: [testbed-node-5]
2026-02-08 03:38:41.968194 | orchestrator | ok: [testbed-node-1]
2026-02-08 03:38:41.968210 | orchestrator | ok: [testbed-node-0]
2026-02-08 03:38:41.968224 | orchestrator | ok: [testbed-node-2]
2026-02-08 03:38:41.968239 | orchestrator |
2026-02-08 03:38:41.968252 | orchestrator | RUNNING HANDLER [ovn-controller : Restart ovn-controller container] ************
2026-02-08 03:38:41.968264 | orchestrator | Sunday 08 February 2026 03:38:08 +0000 (0:00:01.520) 0:00:31.552 *******
2026-02-08 03:38:41.968279 | orchestrator | changed: [testbed-node-0]
2026-02-08 03:38:41.968295 | orchestrator | changed: [testbed-node-4]
2026-02-08 03:38:41.968310 | orchestrator | changed: [testbed-node-3]
2026-02-08 03:38:41.968325 | orchestrator | changed: [testbed-node-1]
2026-02-08 03:38:41.968340 | orchestrator | changed: [testbed-node-5]
2026-02-08 03:38:41.968355 | orchestrator | changed: [testbed-node-2]
2026-02-08 03:38:41.968371 | orchestrator |
2026-02-08 03:38:41.968387 | orchestrator | PLAY [Apply role ovn-db] *******************************************************
2026-02-08 03:38:41.968403 | orchestrator |
2026-02-08 03:38:41.968418 | orchestrator | TASK [ovn-db : include_tasks] **************************************************
2026-02-08 03:38:41.968446 | orchestrator | Sunday 08 February 2026 03:38:39 +0000 (0:00:30.750) 0:01:02.303 *******
2026-02-08 03:38:41.968456 | orchestrator | included: /ansible/roles/ovn-db/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-08 03:38:41.968464 | orchestrator |
2026-02-08 03:38:41.968473 | orchestrator | TASK [ovn-db : include_tasks] **************************************************
2026-02-08 03:38:41.968486 | orchestrator | Sunday 08 February 2026 03:38:40 +0000 (0:00:00.783) 0:01:03.086 *******
2026-02-08 03:38:41.968500 | orchestrator | included: /ansible/roles/ovn-db/tasks/lookup_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-08 03:38:41.968515 | orchestrator |
2026-02-08 03:38:41.968529 | orchestrator | TASK [ovn-db : Checking for any existing OVN DB container volumes] *************
2026-02-08 03:38:41.968541 | orchestrator | Sunday 08 February 2026 03:38:41 +0000 (0:00:00.557) 0:01:03.644 *******
2026-02-08 03:38:41.968555 | orchestrator | ok: [testbed-node-0]
2026-02-08 03:38:41.968567 | orchestrator | ok: [testbed-node-1]
2026-02-08 03:38:41.968580 | orchestrator | ok: [testbed-node-2]
2026-02-08 03:38:41.968593 | orchestrator |
2026-02-08 03:38:41.968606 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB volume availability] ***************
2026-02-08 03:38:41.968636 | orchestrator | Sunday 08 February 2026 03:38:41 +0000 (0:00:00.939) 0:01:04.584 *******
2026-02-08 03:38:53.341147 | orchestrator | ok: [testbed-node-0]
2026-02-08 03:38:53.341229 | orchestrator | ok: [testbed-node-1]
2026-02-08 03:38:53.341237 | orchestrator | ok: [testbed-node-2]
2026-02-08 03:38:53.341243 | orchestrator |
2026-02-08 03:38:53.341250 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB volume availability] ***************
2026-02-08 03:38:53.341256 | orchestrator | Sunday 08 February 2026 03:38:42 +0000 (0:00:00.330) 0:01:04.914 *******
2026-02-08 03:38:53.341262 | orchestrator | ok: [testbed-node-0]
2026-02-08 03:38:53.341267 | orchestrator | ok: [testbed-node-1]
2026-02-08 03:38:53.341272 | orchestrator | ok: [testbed-node-2]
2026-02-08 03:38:53.341277 | orchestrator |
2026-02-08 03:38:53.341283 | orchestrator | TASK [ovn-db : Establish whether the OVN NB cluster has already existed] *******
2026-02-08 03:38:53.341289 | orchestrator | Sunday 08 February 2026 03:38:42 +0000 (0:00:00.316) 0:01:05.231 *******
2026-02-08 03:38:53.341294 | orchestrator | ok: [testbed-node-0]
2026-02-08 03:38:53.341299 | orchestrator | ok: [testbed-node-1]
2026-02-08 03:38:53.341304 | orchestrator | ok: [testbed-node-2]
2026-02-08 03:38:53.341309 | orchestrator |
2026-02-08 03:38:53.341314 | orchestrator | TASK [ovn-db : Establish whether the OVN SB cluster has already existed] *******
2026-02-08 03:38:53.341331 | orchestrator | Sunday 08 February 2026 03:38:42 +0000 (0:00:00.350) 0:01:05.581 *******
2026-02-08 03:38:53.341336 | orchestrator | ok: [testbed-node-0]
2026-02-08 03:38:53.341341 | orchestrator | ok: [testbed-node-1]
2026-02-08 03:38:53.341346 | orchestrator | ok: [testbed-node-2]
2026-02-08 03:38:53.341351 | orchestrator |
2026-02-08 03:38:53.341356 | orchestrator | TASK [ovn-db : Check if running on all OVN NB DB hosts] ************************
2026-02-08 03:38:53.341362 | orchestrator | Sunday 08 February 2026 03:38:43 +0000 (0:00:00.527) 0:01:06.109 *******
2026-02-08 03:38:53.341367 | orchestrator | skipping: [testbed-node-0]
2026-02-08 03:38:53.341373 | orchestrator | skipping: [testbed-node-1]
2026-02-08 03:38:53.341378 | orchestrator | skipping: [testbed-node-2]
2026-02-08 03:38:53.341384 | orchestrator |
2026-02-08 03:38:53.341389 | orchestrator | TASK [ovn-db : Check OVN NB service port liveness] *****************************
2026-02-08 03:38:53.341394 | orchestrator | Sunday 08 February 2026 03:38:43 +0000 (0:00:00.284) 0:01:06.393 *******
2026-02-08 03:38:53.341399 | orchestrator | skipping: [testbed-node-0]
2026-02-08 03:38:53.341404 | orchestrator | skipping: [testbed-node-1]
2026-02-08 03:38:53.341410 | orchestrator | skipping: [testbed-node-2]
2026-02-08 03:38:53.341415 | orchestrator |
2026-02-08 03:38:53.341420 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB service port liveness] *************
2026-02-08 03:38:53.341425 | orchestrator | Sunday 08 February 2026 03:38:44 +0000 (0:00:00.299) 0:01:06.692 *******
2026-02-08 03:38:53.341433 | orchestrator | skipping: [testbed-node-0]
2026-02-08 03:38:53.341458 | orchestrator | skipping: [testbed-node-1]
2026-02-08 03:38:53.341464 | orchestrator | skipping: [testbed-node-2]
2026-02-08 03:38:53.341469 | orchestrator |
2026-02-08 03:38:53.341476 | orchestrator | TASK [ovn-db : Get OVN NB database information] ********************************
2026-02-08 03:38:53.341484 | orchestrator | Sunday 08 February 2026 03:38:44 +0000 (0:00:00.331) 0:01:07.024 *******
2026-02-08 03:38:53.341492 | orchestrator | skipping: [testbed-node-0]
2026-02-08 03:38:53.341500 | orchestrator | skipping: [testbed-node-1]
2026-02-08 03:38:53.341508 | orchestrator | skipping: [testbed-node-2]
2026-02-08 03:38:53.341516 | orchestrator |
2026-02-08 03:38:53.341523 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB leader/follower role] **************
2026-02-08 03:38:53.341528 | orchestrator | Sunday 08 February 2026 03:38:44 +0000 (0:00:00.301) 0:01:07.325 *******
2026-02-08 03:38:53.341533 | orchestrator | skipping: [testbed-node-0]
2026-02-08 03:38:53.341538 | orchestrator | skipping: [testbed-node-1]
2026-02-08 03:38:53.341543 | orchestrator | skipping: [testbed-node-2]
2026-02-08 03:38:53.341548 | orchestrator |
2026-02-08 03:38:53.341553 | orchestrator | TASK [ovn-db : Fail on existing OVN NB cluster with no leader] *****************
2026-02-08 03:38:53.341558 | orchestrator | Sunday 08 February 2026 03:38:45 +0000 (0:00:00.533) 0:01:07.859 *******
2026-02-08 03:38:53.341563 | orchestrator | skipping: [testbed-node-0]
2026-02-08 03:38:53.341569 | orchestrator | skipping: [testbed-node-1]
2026-02-08 03:38:53.341574 | orchestrator | skipping: [testbed-node-2]
2026-02-08 03:38:53.341579 | orchestrator |
2026-02-08 03:38:53.341584 | orchestrator | TASK [ovn-db : Check if running on all OVN SB DB hosts] ************************
2026-02-08 03:38:53.341589 | orchestrator | Sunday 08 February 2026 03:38:45 +0000 (0:00:00.295) 0:01:08.154 *******
2026-02-08 03:38:53.341594 | orchestrator | skipping: [testbed-node-0]
2026-02-08 03:38:53.341599 | orchestrator | skipping: [testbed-node-1]
2026-02-08 03:38:53.341607 | orchestrator | skipping: [testbed-node-2]
2026-02-08 03:38:53.341615 | orchestrator |
2026-02-08 03:38:53.341623 | orchestrator | TASK [ovn-db : Check OVN SB service port liveness] *****************************
2026-02-08 03:38:53.341632 | orchestrator | Sunday 08 February 2026 03:38:45 +0000 (0:00:00.328) 0:01:08.483 *******
2026-02-08 03:38:53.341645 | orchestrator | skipping: [testbed-node-0]
2026-02-08 03:38:53.341652 | orchestrator | skipping: [testbed-node-1]
2026-02-08 03:38:53.341662 | orchestrator | skipping: [testbed-node-2]
2026-02-08 03:38:53.341669 | orchestrator |
2026-02-08 03:38:53.341677 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB service port liveness] *************
2026-02-08 03:38:53.341685 | orchestrator | Sunday 08 February 2026 03:38:46 +0000 (0:00:00.299) 0:01:08.783 *******
2026-02-08 03:38:53.341694 | orchestrator | skipping: [testbed-node-0]
2026-02-08 03:38:53.341756 | orchestrator | skipping: [testbed-node-1]
2026-02-08 03:38:53.341765 | orchestrator | skipping: [testbed-node-2]
2026-02-08 03:38:53.341775 | orchestrator |
2026-02-08 03:38:53.341784 | orchestrator | TASK [ovn-db : Get OVN SB database information] ********************************
2026-02-08 03:38:53.341793 | orchestrator | Sunday 08 February 2026 03:38:46 +0000 (0:00:00.534) 0:01:09.317 *******
2026-02-08 03:38:53.341800 | orchestrator | skipping: [testbed-node-0]
2026-02-08 03:38:53.341807 | orchestrator | skipping: [testbed-node-1]
2026-02-08 03:38:53.341816 | orchestrator | skipping: [testbed-node-2]
2026-02-08 03:38:53.341824 | orchestrator |
2026-02-08 03:38:53.341830 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB leader/follower role] **************
2026-02-08 03:38:53.341836 | orchestrator | Sunday 08 February 2026 03:38:46 +0000 (0:00:00.312) 0:01:09.630 *******
2026-02-08 03:38:53.341841 | orchestrator | skipping: [testbed-node-0]
2026-02-08 03:38:53.341848 | orchestrator | skipping: [testbed-node-1]
2026-02-08 03:38:53.341854 | orchestrator | skipping: [testbed-node-2]
2026-02-08 03:38:53.341859 | orchestrator |
2026-02-08 03:38:53.341865 | orchestrator | TASK [ovn-db : Fail on existing OVN SB cluster with no leader] *****************
2026-02-08 03:38:53.341871 | orchestrator | Sunday 08 February 2026 03:38:47 +0000 (0:00:00.345) 0:01:09.975 *******
2026-02-08 03:38:53.341889 | orchestrator | skipping: [testbed-node-0]
2026-02-08 03:38:53.341895 | orchestrator | skipping: [testbed-node-1]
2026-02-08 03:38:53.341909 | orchestrator | skipping: [testbed-node-2]
2026-02-08 03:38:53.341915 | orchestrator |
2026-02-08 03:38:53.341921 | orchestrator | TASK [ovn-db : include_tasks] **************************************************
2026-02-08 03:38:53.341928 | orchestrator | Sunday 08 February 2026 03:38:47 +0000 (0:00:00.318) 0:01:10.294 *******
2026-02-08 03:38:53.341934 | orchestrator | included: /ansible/roles/ovn-db/tasks/bootstrap-initial.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-08 03:38:53.341941 | orchestrator |
2026-02-08 03:38:53.341947 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new cluster)] *******************
2026-02-08 03:38:53.341953 | orchestrator | Sunday 08 February 2026 03:38:48 +0000 (0:00:00.803) 0:01:11.098 *******
2026-02-08 03:38:53.341959 | orchestrator | ok: [testbed-node-0]
2026-02-08 03:38:53.341967 | orchestrator | ok: [testbed-node-1]
2026-02-08 03:38:53.341975 | orchestrator | ok: [testbed-node-2]
2026-02-08 03:38:53.341981 | orchestrator |
2026-02-08 03:38:53.341990 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new cluster)] *******************
2026-02-08 03:38:53.342002 | orchestrator | Sunday 08 February 2026 03:38:48 +0000 (0:00:00.444) 0:01:11.543 *******
2026-02-08 03:38:53.342007 | orchestrator | ok: [testbed-node-0]
2026-02-08 03:38:53.342012 | orchestrator | ok: [testbed-node-1]
2026-02-08 03:38:53.342056 | orchestrator | ok: [testbed-node-2]
2026-02-08 03:38:53.342061 | orchestrator |
2026-02-08 03:38:53.342066 | orchestrator | TASK [ovn-db : Check NB cluster status] ****************************************
2026-02-08 03:38:53.342076 | orchestrator | Sunday 08 February 2026 03:38:49 +0000 (0:00:00.466) 0:01:12.009 *******
2026-02-08 03:38:53.342084 | orchestrator | skipping: [testbed-node-0]
2026-02-08 03:38:53.342092 | orchestrator | skipping: [testbed-node-1]
2026-02-08 03:38:53.342100 | orchestrator | skipping: [testbed-node-2]
2026-02-08 03:38:53.342108 | orchestrator |
2026-02-08 03:38:53.342117 | orchestrator | TASK [ovn-db : Check SB cluster status] ****************************************
2026-02-08 03:38:53.342126 | orchestrator | Sunday 08 February 2026 03:38:49 +0000 (0:00:00.341) 0:01:12.351 *******
2026-02-08 03:38:53.342135 | orchestrator | skipping: [testbed-node-0]
2026-02-08 03:38:53.342141 | orchestrator | skipping: [testbed-node-1]
2026-02-08 03:38:53.342146 | orchestrator | skipping: [testbed-node-2]
2026-02-08 03:38:53.342151 | orchestrator |
2026-02-08 03:38:53.342156 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in NB DB] ***
2026-02-08 03:38:53.342161 | orchestrator | Sunday 08 February 2026 03:38:50 +0000 (0:00:00.605) 0:01:12.956 *******
2026-02-08 03:38:53.342166 | orchestrator | skipping: [testbed-node-0]
2026-02-08 03:38:53.342171 | orchestrator | skipping: [testbed-node-1]
2026-02-08 03:38:53.342176 | orchestrator | skipping: [testbed-node-2]
2026-02-08 03:38:53.342181 | orchestrator |
2026-02-08 03:38:53.342186 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in SB DB] ***
2026-02-08 03:38:53.342192 | orchestrator | Sunday 08 February 2026 03:38:50 +0000 (0:00:00.339) 0:01:13.296 *******
2026-02-08 03:38:53.342196 | orchestrator | skipping: [testbed-node-0]
2026-02-08 03:38:53.342202 | orchestrator | skipping: [testbed-node-1]
2026-02-08 03:38:53.342206 | orchestrator | skipping: [testbed-node-2]
2026-02-08 03:38:53.342211 | orchestrator |
2026-02-08 03:38:53.342216 | orchestrator | TASK [ovn-db : Set
bootstrap args fact for NB (new member)] ******************** 2026-02-08 03:38:53.342221 | orchestrator | Sunday 08 February 2026 03:38:50 +0000 (0:00:00.335) 0:01:13.632 ******* 2026-02-08 03:38:53.342226 | orchestrator | skipping: [testbed-node-0] 2026-02-08 03:38:53.342232 | orchestrator | skipping: [testbed-node-1] 2026-02-08 03:38:53.342237 | orchestrator | skipping: [testbed-node-2] 2026-02-08 03:38:53.342242 | orchestrator | 2026-02-08 03:38:53.342247 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new member)] ******************** 2026-02-08 03:38:53.342252 | orchestrator | Sunday 08 February 2026 03:38:51 +0000 (0:00:00.322) 0:01:13.954 ******* 2026-02-08 03:38:53.342257 | orchestrator | skipping: [testbed-node-0] 2026-02-08 03:38:53.342262 | orchestrator | skipping: [testbed-node-1] 2026-02-08 03:38:53.342267 | orchestrator | skipping: [testbed-node-2] 2026-02-08 03:38:53.342280 | orchestrator | 2026-02-08 03:38:53.342285 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ****************************** 2026-02-08 03:38:53.342290 | orchestrator | Sunday 08 February 2026 03:38:51 +0000 (0:00:00.577) 0:01:14.532 ******* 2026-02-08 03:38:53.342297 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-08 03:38:53.342305 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': 
{}}}) 2026-02-08 03:38:53.342310 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-08 03:38:53.342322 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-08 03:38:59.508331 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-08 03:38:59.508476 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-08 03:38:59.508503 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-08 03:38:59.508521 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-08 03:38:59.508539 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-08 03:38:59.508590 | orchestrator | 2026-02-08 03:38:59.508610 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ******************** 2026-02-08 03:38:59.508627 | orchestrator | Sunday 08 February 2026 03:38:53 +0000 (0:00:01.430) 0:01:15.962 ******* 2026-02-08 03:38:59.508644 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-08 03:38:59.508662 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': 
{'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-08 03:38:59.508680 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-08 03:38:59.508697 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-08 03:38:59.508797 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-08 03:38:59.508826 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-08 03:38:59.508846 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-08 03:38:59.508862 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-08 03:38:59.508878 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-08 03:38:59.508909 | orchestrator | 2026-02-08 03:38:59.508927 | orchestrator | TASK [ovn-db : Check ovn containers] ******************************************* 2026-02-08 03:38:59.508943 | orchestrator | Sunday 08 February 2026 03:38:57 +0000 (0:00:03.733) 0:01:19.696 ******* 2026-02-08 03:38:59.508960 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-08 03:38:59.508976 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-08 03:38:59.508995 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-08 03:38:59.509014 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-08 03:38:59.509036 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-08 03:38:59.509070 | 
orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-08 03:39:18.378359 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-08 03:39:18.378584 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-08 03:39:18.378661 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-08 03:39:18.378725 | orchestrator | 2026-02-08 03:39:18.378804 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-02-08 03:39:18.378827 | 
orchestrator | Sunday 08 February 2026 03:38:59 +0000 (0:00:01.990) 0:01:21.686 ******* 2026-02-08 03:39:18.378845 | orchestrator | 2026-02-08 03:39:18.378917 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-02-08 03:39:18.378937 | orchestrator | Sunday 08 February 2026 03:38:59 +0000 (0:00:00.072) 0:01:21.758 ******* 2026-02-08 03:39:18.378957 | orchestrator | 2026-02-08 03:39:18.378977 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-02-08 03:39:18.378995 | orchestrator | Sunday 08 February 2026 03:38:59 +0000 (0:00:00.293) 0:01:22.052 ******* 2026-02-08 03:39:18.379013 | orchestrator | 2026-02-08 03:39:18.379026 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] ************************* 2026-02-08 03:39:18.379039 | orchestrator | Sunday 08 February 2026 03:38:59 +0000 (0:00:00.071) 0:01:22.124 ******* 2026-02-08 03:39:18.379052 | orchestrator | changed: [testbed-node-0] 2026-02-08 03:39:18.379068 | orchestrator | changed: [testbed-node-1] 2026-02-08 03:39:18.379087 | orchestrator | changed: [testbed-node-2] 2026-02-08 03:39:18.379106 | orchestrator | 2026-02-08 03:39:18.379125 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] ************************* 2026-02-08 03:39:18.379142 | orchestrator | Sunday 08 February 2026 03:39:02 +0000 (0:00:02.521) 0:01:24.645 ******* 2026-02-08 03:39:18.379160 | orchestrator | changed: [testbed-node-0] 2026-02-08 03:39:18.379180 | orchestrator | changed: [testbed-node-1] 2026-02-08 03:39:18.379200 | orchestrator | changed: [testbed-node-2] 2026-02-08 03:39:18.379218 | orchestrator | 2026-02-08 03:39:18.379237 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************ 2026-02-08 03:39:18.379249 | orchestrator | Sunday 08 February 2026 03:39:04 +0000 (0:00:02.427) 0:01:27.072 ******* 2026-02-08 03:39:18.379260 | orchestrator | changed: 
[testbed-node-0] 2026-02-08 03:39:18.379270 | orchestrator | changed: [testbed-node-2] 2026-02-08 03:39:18.379281 | orchestrator | changed: [testbed-node-1] 2026-02-08 03:39:18.379291 | orchestrator | 2026-02-08 03:39:18.379302 | orchestrator | TASK [ovn-db : Wait for leader election] *************************************** 2026-02-08 03:39:18.379313 | orchestrator | Sunday 08 February 2026 03:39:11 +0000 (0:00:07.258) 0:01:34.331 ******* 2026-02-08 03:39:18.379323 | orchestrator | skipping: [testbed-node-0] 2026-02-08 03:39:18.379334 | orchestrator | 2026-02-08 03:39:18.379345 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ****************************** 2026-02-08 03:39:18.379355 | orchestrator | Sunday 08 February 2026 03:39:11 +0000 (0:00:00.118) 0:01:34.449 ******* 2026-02-08 03:39:18.379366 | orchestrator | ok: [testbed-node-0] 2026-02-08 03:39:18.379378 | orchestrator | ok: [testbed-node-1] 2026-02-08 03:39:18.379396 | orchestrator | ok: [testbed-node-2] 2026-02-08 03:39:18.379413 | orchestrator | 2026-02-08 03:39:18.379432 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] *************************** 2026-02-08 03:39:18.379451 | orchestrator | Sunday 08 February 2026 03:39:12 +0000 (0:00:01.020) 0:01:35.469 ******* 2026-02-08 03:39:18.379470 | orchestrator | skipping: [testbed-node-1] 2026-02-08 03:39:18.379489 | orchestrator | skipping: [testbed-node-2] 2026-02-08 03:39:18.379507 | orchestrator | changed: [testbed-node-0] 2026-02-08 03:39:18.379526 | orchestrator | 2026-02-08 03:39:18.379544 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ****************************** 2026-02-08 03:39:18.379563 | orchestrator | Sunday 08 February 2026 03:39:13 +0000 (0:00:00.584) 0:01:36.054 ******* 2026-02-08 03:39:18.379581 | orchestrator | ok: [testbed-node-0] 2026-02-08 03:39:18.379600 | orchestrator | ok: [testbed-node-1] 2026-02-08 03:39:18.379618 | orchestrator | ok: [testbed-node-2] 2026-02-08 
03:39:18.379637 | orchestrator | 2026-02-08 03:39:18.379657 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] *************************** 2026-02-08 03:39:18.379697 | orchestrator | Sunday 08 February 2026 03:39:14 +0000 (0:00:00.765) 0:01:36.820 ******* 2026-02-08 03:39:18.379710 | orchestrator | skipping: [testbed-node-1] 2026-02-08 03:39:18.379721 | orchestrator | skipping: [testbed-node-2] 2026-02-08 03:39:18.379858 | orchestrator | changed: [testbed-node-0] 2026-02-08 03:39:18.379873 | orchestrator | 2026-02-08 03:39:18.379884 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] ********************************************* 2026-02-08 03:39:18.379895 | orchestrator | Sunday 08 February 2026 03:39:14 +0000 (0:00:00.585) 0:01:37.405 ******* 2026-02-08 03:39:18.379906 | orchestrator | ok: [testbed-node-0] 2026-02-08 03:39:18.379916 | orchestrator | ok: [testbed-node-1] 2026-02-08 03:39:18.379950 | orchestrator | ok: [testbed-node-2] 2026-02-08 03:39:18.379962 | orchestrator | 2026-02-08 03:39:18.379973 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] ********************************************* 2026-02-08 03:39:18.379995 | orchestrator | Sunday 08 February 2026 03:39:15 +0000 (0:00:01.209) 0:01:38.615 ******* 2026-02-08 03:39:18.380006 | orchestrator | ok: [testbed-node-0] 2026-02-08 03:39:18.380017 | orchestrator | ok: [testbed-node-1] 2026-02-08 03:39:18.380066 | orchestrator | ok: [testbed-node-2] 2026-02-08 03:39:18.380079 | orchestrator | 2026-02-08 03:39:18.380090 | orchestrator | TASK [ovn-db : Unset bootstrap args fact] ************************************** 2026-02-08 03:39:18.380101 | orchestrator | Sunday 08 February 2026 03:39:16 +0000 (0:00:00.704) 0:01:39.320 ******* 2026-02-08 03:39:18.380112 | orchestrator | ok: [testbed-node-0] 2026-02-08 03:39:18.380123 | orchestrator | ok: [testbed-node-1] 2026-02-08 03:39:18.380134 | orchestrator | ok: [testbed-node-2] 2026-02-08 03:39:18.380145 | orchestrator | 2026-02-08 
03:39:18.380157 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ****************************** 2026-02-08 03:39:18.380168 | orchestrator | Sunday 08 February 2026 03:39:17 +0000 (0:00:00.331) 0:01:39.651 ******* 2026-02-08 03:39:18.380181 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-08 03:39:18.380195 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-08 03:39:18.380207 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-08 03:39:18.380218 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-08 03:39:18.380231 | orchestrator | ok: [testbed-node-1] => (item={'key': 
'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-08 03:39:18.380242 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-08 03:39:18.380264 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-08 03:39:18.380275 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-08 03:39:18.380302 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': 
['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-08 03:39:25.289233 | orchestrator | 2026-02-08 03:39:25.289345 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ******************** 2026-02-08 03:39:25.289356 | orchestrator | Sunday 08 February 2026 03:39:18 +0000 (0:00:01.342) 0:01:40.993 ******* 2026-02-08 03:39:25.289365 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-08 03:39:25.289374 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-08 03:39:25.289380 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-08 03:39:25.289387 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': 
['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-08 03:39:25.289395 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-08 03:39:25.289420 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-08 03:39:25.289425 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-08 03:39:25.289431 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-08 03:39:25.289437 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-08 03:39:25.289442 | orchestrator |
2026-02-08 03:39:25.289448 | orchestrator | TASK [ovn-db : Check ovn containers] *******************************************
2026-02-08 03:39:25.289453 | orchestrator | Sunday 08 February 2026 03:39:22 +0000 (0:00:03.700) 0:01:44.693 *******
2026-02-08 03:39:25.289486 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-08 03:39:25.289492 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-08 03:39:25.289498 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-08 03:39:25.289504 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-08 03:39:25.289510 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-08 03:39:25.289520 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-08 03:39:25.289526 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-08 03:39:25.289531 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-08 03:39:25.289537 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-08 03:39:25.289542 | orchestrator |
2026-02-08 03:39:25.289548 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2026-02-08 03:39:25.289553 | orchestrator | Sunday 08 February 2026 03:39:25 +0000 (0:00:03.006) 0:01:47.700 *******
2026-02-08 03:39:25.289559 | orchestrator |
2026-02-08 03:39:25.289575 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2026-02-08 03:39:25.289580 | orchestrator | Sunday 08 February 2026 03:39:25 +0000 (0:00:00.066) 0:01:47.766 *******
2026-02-08 03:39:25.289586 | orchestrator |
2026-02-08 03:39:25.289591 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2026-02-08 03:39:25.289600 | orchestrator | Sunday 08 February 2026 03:39:25 +0000 (0:00:00.062) 0:01:47.829 *******
2026-02-08 03:39:25.289606 | orchestrator |
2026-02-08 03:39:25.289616 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] *************************
2026-02-08 03:39:49.561723 | orchestrator | Sunday 08 February 2026 03:39:25 +0000 (0:00:00.066) 0:01:47.896 *******
2026-02-08 03:39:49.561899 | orchestrator | changed: [testbed-node-1]
2026-02-08 03:39:49.561914 | orchestrator | changed: [testbed-node-2]
2026-02-08 03:39:49.561920 | orchestrator |
2026-02-08 03:39:49.561927 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] *************************
2026-02-08 03:39:49.561932 | orchestrator | Sunday 08 February 2026 03:39:31 +0000 (0:00:06.231) 0:01:54.127 *******
2026-02-08 03:39:49.561938 | orchestrator | changed: [testbed-node-1]
2026-02-08 03:39:49.561943 | orchestrator | changed: [testbed-node-2]
2026-02-08 03:39:49.561949 | orchestrator |
2026-02-08 03:39:49.561954 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************
2026-02-08 03:39:49.561960 | orchestrator | Sunday 08 February 2026 03:39:37 +0000 (0:00:06.310) 0:02:00.438 *******
2026-02-08 03:39:49.561965 | orchestrator | changed: [testbed-node-1]
2026-02-08 03:39:49.561970 | orchestrator | changed: [testbed-node-2]
2026-02-08 03:39:49.561975 | orchestrator |
2026-02-08 03:39:49.561981 | orchestrator | TASK [ovn-db : Wait for leader election] ***************************************
2026-02-08 03:39:49.563023 | orchestrator | Sunday 08 February 2026 03:39:43 +0000 (0:00:06.167) 0:02:06.606 *******
2026-02-08 03:39:49.563120 | orchestrator | skipping: [testbed-node-0]
2026-02-08 03:39:49.563148 | orchestrator |
2026-02-08 03:39:49.563210 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ******************************
2026-02-08 03:39:49.563230 | orchestrator | Sunday 08 February 2026 03:39:44 +0000 (0:00:00.175) 0:02:06.782 *******
2026-02-08 03:39:49.563249 | orchestrator | ok: [testbed-node-0]
2026-02-08 03:39:49.563268 | orchestrator | ok: [testbed-node-1]
2026-02-08 03:39:49.563288 | orchestrator | ok: [testbed-node-2]
2026-02-08 03:39:49.563305 | orchestrator |
2026-02-08 03:39:49.563323 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] ***************************
2026-02-08 03:39:49.563341 | orchestrator | Sunday 08 February 2026 03:39:45 +0000 (0:00:01.080) 0:02:07.863 *******
2026-02-08 03:39:49.563361 | orchestrator | skipping: [testbed-node-1]
2026-02-08 03:39:49.563380 | orchestrator | skipping: [testbed-node-2]
2026-02-08 03:39:49.563400 | orchestrator | changed: [testbed-node-0]
2026-02-08 03:39:49.563418 | orchestrator |
2026-02-08 03:39:49.563437 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ******************************
2026-02-08 03:39:49.563456 | orchestrator | Sunday 08 February 2026 03:39:45 +0000 (0:00:00.583) 0:02:08.446 *******
2026-02-08 03:39:49.563474 | orchestrator | ok: [testbed-node-0]
2026-02-08 03:39:49.563494 | orchestrator | ok: [testbed-node-1]
2026-02-08 03:39:49.563515 | orchestrator | ok: [testbed-node-2]
2026-02-08 03:39:49.563537 | orchestrator |
2026-02-08 03:39:49.563558 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] ***************************
2026-02-08 03:39:49.563578 | orchestrator | Sunday 08 February 2026 03:39:46 +0000 (0:00:00.799) 0:02:09.245 *******
2026-02-08 03:39:49.563598 | orchestrator | skipping: [testbed-node-1]
2026-02-08 03:39:49.563617 | orchestrator | skipping: [testbed-node-2]
2026-02-08 03:39:49.563636 | orchestrator | changed: [testbed-node-0]
2026-02-08 03:39:49.563653 | orchestrator |
2026-02-08 03:39:49.563672 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] *********************************************
2026-02-08 03:39:49.563691 | orchestrator | Sunday 08 February 2026 03:39:47 +0000 (0:00:00.619) 0:02:09.865 *******
2026-02-08 03:39:49.563710 | orchestrator | ok: [testbed-node-0]
2026-02-08 03:39:49.563730 | orchestrator | ok: [testbed-node-1]
2026-02-08 03:39:49.563748 | orchestrator | ok: [testbed-node-2]
2026-02-08 03:39:49.563961 | orchestrator |
2026-02-08 03:39:49.564010 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] *********************************************
2026-02-08 03:39:49.564023 | orchestrator | Sunday 08 February 2026 03:39:48 +0000 (0:00:01.009) 0:02:10.874 *******
2026-02-08 03:39:49.564048 | orchestrator | ok: [testbed-node-0]
2026-02-08 03:39:49.564060 | orchestrator | ok: [testbed-node-1]
2026-02-08 03:39:49.564071 | orchestrator | ok: [testbed-node-2]
2026-02-08 03:39:49.564081 | orchestrator |
2026-02-08 03:39:49.564092 | orchestrator | PLAY RECAP *********************************************************************
2026-02-08 03:39:49.564105 | orchestrator | testbed-node-0 : ok=44  changed=18  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0
2026-02-08 03:39:49.564116 | orchestrator | testbed-node-1 : ok=43  changed=19  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0
2026-02-08 03:39:49.564127 | orchestrator | testbed-node-2 : ok=43  changed=19  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0
2026-02-08 03:39:49.564138 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-08 03:39:49.564150 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-08 03:39:49.564161 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-08 03:39:49.564172 | orchestrator |
2026-02-08 03:39:49.564183 | orchestrator |
2026-02-08 03:39:49.564194 | orchestrator | TASKS RECAP ********************************************************************
2026-02-08 03:39:49.564205 | orchestrator | Sunday 08 February 2026 03:39:49 +0000 (0:00:00.890) 0:02:11.764 *******
2026-02-08 03:39:49.564232 | orchestrator | ===============================================================================
2026-02-08 03:39:49.564244 | orchestrator | ovn-controller : Restart ovn-controller container ---------------------- 30.75s
2026-02-08 03:39:49.564255 | orchestrator | ovn-controller : Configure OVN in OVSDB -------------------------------- 17.57s
2026-02-08 03:39:49.564266 | orchestrator | ovn-db : Restart ovn-northd container ---------------------------------- 13.43s
2026-02-08 03:39:49.564277 | orchestrator | ovn-db : Restart ovn-nb-db container ------------------------------------ 8.75s
2026-02-08 03:39:49.564313 | orchestrator | ovn-db : Restart ovn-sb-db container ------------------------------------ 8.74s
2026-02-08 03:39:49.564353 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 3.73s
2026-02-08 03:39:49.564365 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 3.70s
2026-02-08 03:39:49.564376 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 3.01s
2026-02-08 03:39:49.564387 | orchestrator | ovn-controller : Create br-int bridge on OpenvSwitch -------------------- 2.33s
2026-02-08 03:39:49.564398 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 1.99s
2026-02-08 03:39:49.564409 | orchestrator | ovn-controller : Copying over systemd override -------------------------- 1.53s
2026-02-08 03:39:49.564420 | orchestrator | ovn-controller : Reload systemd config ---------------------------------- 1.52s
2026-02-08 03:39:49.564431 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 1.43s
2026-02-08 03:39:49.564442 | orchestrator | ovn-controller : Copying over config.json files for services ------------ 1.42s
2026-02-08 03:39:49.564453 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 1.34s
2026-02-08 03:39:49.564464 | orchestrator | ovn-controller : Check ovn-controller containers ------------------------ 1.34s
2026-02-08 03:39:49.564475 | orchestrator | ovn-controller : Ensuring config directories exist ---------------------- 1.22s
2026-02-08 03:39:49.564485 | orchestrator | ovn-db : Wait for ovn-nb-db --------------------------------------------- 1.21s
2026-02-08 03:39:49.564496 | orchestrator | ovn-controller : include_tasks ------------------------------------------ 1.18s
2026-02-08 03:39:49.564508 | orchestrator | ovn-controller : Ensuring systemd override directory exists ------------- 1.15s
2026-02-08 03:39:49.938406 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]]
2026-02-08 03:39:49.938492 | orchestrator | + sh -c /opt/configuration/scripts/deploy/100-ceph-with-ansible.sh
2026-02-08 03:39:52.208726 | orchestrator | 2026-02-08 03:39:52 | INFO  | Trying to run play wipe-partitions in environment custom
2026-02-08 03:40:02.294362 | orchestrator | 2026-02-08 03:40:02 | INFO  | Task cbc3e79b-5407-42c1-a2a1-4c419917c91d (wipe-partitions) was prepared for execution.
2026-02-08 03:40:02.294491 | orchestrator | 2026-02-08 03:40:02 | INFO  | It takes a moment until task cbc3e79b-5407-42c1-a2a1-4c419917c91d (wipe-partitions) has been started and output is visible here.
2026-02-08 03:40:16.304736 | orchestrator |
2026-02-08 03:40:16.304884 | orchestrator | PLAY [Wipe partitions] *********************************************************
2026-02-08 03:40:16.304899 | orchestrator |
2026-02-08 03:40:16.304908 | orchestrator | TASK [Find all logical devices owned by UID 167] *******************************
2026-02-08 03:40:16.304917 | orchestrator | Sunday 08 February 2026 03:40:06 +0000 (0:00:00.136) 0:00:00.136 *******
2026-02-08 03:40:16.304925 | orchestrator | changed: [testbed-node-4]
2026-02-08 03:40:16.304935 | orchestrator | changed: [testbed-node-3]
2026-02-08 03:40:16.304943 | orchestrator | changed: [testbed-node-5]
2026-02-08 03:40:16.304951 | orchestrator |
2026-02-08 03:40:16.304959 | orchestrator | TASK [Remove all rook related logical devices] *********************************
2026-02-08 03:40:16.304967 | orchestrator | Sunday 08 February 2026 03:40:07 +0000 (0:00:00.597) 0:00:00.734 *******
2026-02-08 03:40:16.304975 | orchestrator | skipping: [testbed-node-3]
2026-02-08 03:40:16.304983 | orchestrator | skipping: [testbed-node-4]
2026-02-08 03:40:16.304991 | orchestrator | skipping: [testbed-node-5]
2026-02-08 03:40:16.305023 | orchestrator |
2026-02-08 03:40:16.305037 | orchestrator | TASK [Find all logical devices with prefix ceph] *******************************
2026-02-08 03:40:16.305050 | orchestrator | Sunday 08 February 2026 03:40:07 +0000 (0:00:00.417) 0:00:01.152 *******
2026-02-08 03:40:16.305062 | orchestrator | ok: [testbed-node-3]
2026-02-08 03:40:16.305076 | orchestrator | ok: [testbed-node-4]
2026-02-08 03:40:16.305088 | orchestrator | ok: [testbed-node-5]
2026-02-08 03:40:16.305100 | orchestrator |
2026-02-08 03:40:16.305112 | orchestrator | TASK [Remove all ceph related logical devices] *********************************
2026-02-08 03:40:16.305125 | orchestrator | Sunday 08 February 2026 03:40:08 +0000 (0:00:00.622) 0:00:01.774 *******
2026-02-08 03:40:16.305137 | orchestrator | skipping: [testbed-node-3]
2026-02-08 03:40:16.305150 | orchestrator | skipping: [testbed-node-4]
2026-02-08 03:40:16.305163 | orchestrator | skipping: [testbed-node-5]
2026-02-08 03:40:16.305174 | orchestrator |
2026-02-08 03:40:16.305186 | orchestrator | TASK [Check device availability] ***********************************************
2026-02-08 03:40:16.305199 | orchestrator | Sunday 08 February 2026 03:40:08 +0000 (0:00:00.286) 0:00:02.060 *******
2026-02-08 03:40:16.305212 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb)
2026-02-08 03:40:16.305226 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb)
2026-02-08 03:40:16.305239 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb)
2026-02-08 03:40:16.305252 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc)
2026-02-08 03:40:16.305266 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc)
2026-02-08 03:40:16.305279 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc)
2026-02-08 03:40:16.305293 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd)
2026-02-08 03:40:16.305306 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd)
2026-02-08 03:40:16.305320 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd)
2026-02-08 03:40:16.305333 | orchestrator |
2026-02-08 03:40:16.305348 | orchestrator | TASK [Wipe partitions with wipefs] *********************************************
2026-02-08 03:40:16.305361 | orchestrator | Sunday 08 February 2026 03:40:10 +0000 (0:00:02.148) 0:00:04.208 *******
2026-02-08 03:40:16.305375 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdb)
2026-02-08 03:40:16.305389 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdb)
2026-02-08 03:40:16.305404 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdb)
2026-02-08 03:40:16.305418 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdc)
2026-02-08 03:40:16.305432 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdc)
2026-02-08 03:40:16.305466 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdc)
2026-02-08 03:40:16.305481 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdd)
2026-02-08 03:40:16.305495 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdd)
2026-02-08 03:40:16.305508 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdd)
2026-02-08 03:40:16.305522 | orchestrator |
2026-02-08 03:40:16.305538 | orchestrator | TASK [Overwrite first 32M with zeros] ******************************************
2026-02-08 03:40:16.305553 | orchestrator | Sunday 08 February 2026 03:40:12 +0000 (0:00:01.618) 0:00:05.827 *******
2026-02-08 03:40:16.305569 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb)
2026-02-08 03:40:16.305584 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb)
2026-02-08 03:40:16.305597 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb)
2026-02-08 03:40:16.305610 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc)
2026-02-08 03:40:16.305624 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc)
2026-02-08 03:40:16.305637 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc)
2026-02-08 03:40:16.305651 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd)
2026-02-08 03:40:16.305665 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd)
2026-02-08 03:40:16.305679 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd)
2026-02-08 03:40:16.305694 | orchestrator |
2026-02-08 03:40:16.305707 | orchestrator | TASK [Reload udev rules] *******************************************************
2026-02-08 03:40:16.305721 | orchestrator | Sunday 08 February 2026 03:40:14 +0000 (0:00:02.091) 0:00:07.918 *******
2026-02-08 03:40:16.305751 | orchestrator | changed: [testbed-node-3]
2026-02-08 03:40:16.305766 | orchestrator | changed: [testbed-node-4]
2026-02-08 03:40:16.305779 | orchestrator | changed: [testbed-node-5]
2026-02-08 03:40:16.305793 | orchestrator |
2026-02-08 03:40:16.305872 | orchestrator | TASK [Request device events from the kernel] ***********************************
2026-02-08 03:40:16.305888 | orchestrator | Sunday 08 February 2026 03:40:15 +0000 (0:00:00.654) 0:00:08.573 *******
2026-02-08 03:40:16.305902 | orchestrator | changed: [testbed-node-3]
2026-02-08 03:40:16.305915 | orchestrator | changed: [testbed-node-4]
2026-02-08 03:40:16.305929 | orchestrator | changed: [testbed-node-5]
2026-02-08 03:40:16.305944 | orchestrator |
2026-02-08 03:40:16.305958 | orchestrator | PLAY RECAP *********************************************************************
2026-02-08 03:40:16.305973 | orchestrator | testbed-node-3 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-08 03:40:16.305988 | orchestrator | testbed-node-4 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-08 03:40:16.306088 | orchestrator | testbed-node-5 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-08 03:40:16.306109 | orchestrator |
2026-02-08 03:40:16.306123 | orchestrator |
2026-02-08 03:40:16.306137 | orchestrator | TASKS RECAP ********************************************************************
2026-02-08 03:40:16.306151 | orchestrator | Sunday 08 February 2026 03:40:15 +0000 (0:00:00.661) 0:00:09.234 *******
2026-02-08 03:40:16.306166 | orchestrator | ===============================================================================
2026-02-08 03:40:16.306180 | orchestrator | Check device availability ----------------------------------------------- 2.15s
2026-02-08 03:40:16.306194 | orchestrator | Overwrite first 32M with zeros ------------------------------------------ 2.09s
2026-02-08 03:40:16.306207 | orchestrator | Wipe partitions with wipefs --------------------------------------------- 1.62s
2026-02-08 03:40:16.306222 | orchestrator | Request device events from the kernel ----------------------------------- 0.66s
2026-02-08 03:40:16.306236 | orchestrator | Reload udev rules ------------------------------------------------------- 0.65s
2026-02-08 03:40:16.306250 | orchestrator | Find all logical devices with prefix ceph ------------------------------- 0.62s
2026-02-08 03:40:16.306263 | orchestrator | Find all logical devices owned by UID 167 ------------------------------- 0.60s
2026-02-08 03:40:16.306278 | orchestrator | Remove all rook related logical devices --------------------------------- 0.42s
2026-02-08 03:40:16.306292 | orchestrator | Remove all ceph related logical devices --------------------------------- 0.29s
2026-02-08 03:40:28.944710 | orchestrator | 2026-02-08 03:40:28 | INFO  | Task 72799b4b-c457-4bc7-8175-30467149796c (facts) was prepared for execution.
2026-02-08 03:40:28.944811 | orchestrator | 2026-02-08 03:40:28 | INFO  | It takes a moment until task 72799b4b-c457-4bc7-8175-30467149796c (facts) has been started and output is visible here.
2026-02-08 03:40:43.144058 | orchestrator |
2026-02-08 03:40:43.144174 | orchestrator | PLAY [Apply role facts] ********************************************************
2026-02-08 03:40:43.144191 | orchestrator |
2026-02-08 03:40:43.144203 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] *********************
2026-02-08 03:40:43.144214 | orchestrator | Sunday 08 February 2026 03:40:33 +0000 (0:00:00.295) 0:00:00.295 *******
2026-02-08 03:40:43.144225 | orchestrator | ok: [testbed-manager]
2026-02-08 03:40:43.144238 | orchestrator | ok: [testbed-node-1]
2026-02-08 03:40:43.144248 | orchestrator | ok: [testbed-node-0]
2026-02-08 03:40:43.144259 | orchestrator | ok: [testbed-node-2]
2026-02-08 03:40:43.144270 | orchestrator | ok: [testbed-node-3]
2026-02-08 03:40:43.144280 | orchestrator | ok: [testbed-node-4]
2026-02-08 03:40:43.144290 | orchestrator | ok: [testbed-node-5]
2026-02-08 03:40:43.144301 | orchestrator |
2026-02-08 03:40:43.144312 | orchestrator | TASK [osism.commons.facts : Copy fact files] ***********************************
2026-02-08 03:40:43.144350 | orchestrator | Sunday 08 February 2026 03:40:34 +0000 (0:00:01.116) 0:00:01.412 *******
2026-02-08 03:40:43.144362 | orchestrator | skipping: [testbed-manager]
2026-02-08 03:40:43.144374 | orchestrator | skipping: [testbed-node-0]
2026-02-08 03:40:43.144385 | orchestrator | skipping: [testbed-node-1]
2026-02-08 03:40:43.144396 | orchestrator | skipping: [testbed-node-2]
2026-02-08 03:40:43.144407 | orchestrator | skipping: [testbed-node-3]
2026-02-08 03:40:43.144418 | orchestrator | skipping: [testbed-node-4]
2026-02-08 03:40:43.144429 | orchestrator | skipping: [testbed-node-5]
2026-02-08 03:40:43.144440 | orchestrator |
2026-02-08 03:40:43.144450 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2026-02-08 03:40:43.144461 | orchestrator |
2026-02-08 03:40:43.144472 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-02-08 03:40:43.144483 | orchestrator | Sunday 08 February 2026 03:40:35 +0000 (0:00:01.322) 0:00:02.735 *******
2026-02-08 03:40:43.144493 | orchestrator | ok: [testbed-node-1]
2026-02-08 03:40:43.144504 | orchestrator | ok: [testbed-node-2]
2026-02-08 03:40:43.144514 | orchestrator | ok: [testbed-node-0]
2026-02-08 03:40:43.144525 | orchestrator | ok: [testbed-manager]
2026-02-08 03:40:43.144535 | orchestrator | ok: [testbed-node-3]
2026-02-08 03:40:43.144546 | orchestrator | ok: [testbed-node-4]
2026-02-08 03:40:43.144556 | orchestrator | ok: [testbed-node-5]
2026-02-08 03:40:43.144567 | orchestrator |
2026-02-08 03:40:43.144577 | orchestrator | PLAY [Gather facts for all hosts if using --limit] *****************************
2026-02-08 03:40:43.144588 | orchestrator |
2026-02-08 03:40:43.144601 | orchestrator | TASK [Gather facts for all hosts] **********************************************
2026-02-08 03:40:43.144620 | orchestrator | Sunday 08 February 2026 03:40:41 +0000 (0:00:06.036) 0:00:08.771 *******
2026-02-08 03:40:43.144639 | orchestrator | skipping: [testbed-manager]
2026-02-08 03:40:43.144658 | orchestrator | skipping: [testbed-node-0]
2026-02-08 03:40:43.144676 | orchestrator | skipping: [testbed-node-1]
2026-02-08 03:40:43.144695 | orchestrator | skipping: [testbed-node-2]
2026-02-08 03:40:43.144715 | orchestrator | skipping: [testbed-node-3]
2026-02-08 03:40:43.144733 | orchestrator | skipping: [testbed-node-4]
2026-02-08 03:40:43.144750 | orchestrator | skipping: [testbed-node-5]
2026-02-08 03:40:43.144763 | orchestrator |
2026-02-08 03:40:43.144776 | orchestrator | PLAY RECAP *********************************************************************
2026-02-08 03:40:43.144789 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-08 03:40:43.144803 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-08 03:40:43.144815 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-08 03:40:43.144827 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-08 03:40:43.144870 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-08 03:40:43.144884 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-08 03:40:43.144895 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-08 03:40:43.144908 | orchestrator |
2026-02-08 03:40:43.144920 | orchestrator |
2026-02-08 03:40:43.144932 | orchestrator | TASKS RECAP ********************************************************************
2026-02-08 03:40:43.144945 | orchestrator | Sunday 08 February 2026 03:40:42 +0000 (0:00:00.653) 0:00:09.425 *******
2026-02-08 03:40:43.144958 | orchestrator | ===============================================================================
2026-02-08 03:40:43.144970 | orchestrator | Gathers facts about hosts ----------------------------------------------- 6.04s
2026-02-08 03:40:43.144991 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.32s
2026-02-08 03:40:43.145002 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.12s
2026-02-08 03:40:43.145012 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.65s
2026-02-08 03:40:45.705199 | orchestrator | 2026-02-08 03:40:45 | INFO  | Task da2a1e63-14b4-43ab-b645-83ae5e181e10 (ceph-configure-lvm-volumes) was prepared for execution.
2026-02-08 03:40:45.705330 | orchestrator | 2026-02-08 03:40:45 | INFO  | It takes a moment until task da2a1e63-14b4-43ab-b645-83ae5e181e10 (ceph-configure-lvm-volumes) has been started and output is visible here.
2026-02-08 03:40:59.061236 | orchestrator | [WARNING]: Collection community.general does not support Ansible version
2026-02-08 03:40:59.061333 | orchestrator | 2.16.14
2026-02-08 03:40:59.061347 | orchestrator |
2026-02-08 03:40:59.061358 | orchestrator | PLAY [Ceph configure LVM] ******************************************************
2026-02-08 03:40:59.061369 | orchestrator |
2026-02-08 03:40:59.061380 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2026-02-08 03:40:59.061390 | orchestrator | Sunday 08 February 2026 03:40:50 +0000 (0:00:00.364) 0:00:00.364 *******
2026-02-08 03:40:59.061401 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-02-08 03:40:59.061411 | orchestrator |
2026-02-08 03:40:59.061421 | orchestrator | TASK [Get initial list of available block devices] *****************************
2026-02-08 03:40:59.061472 | orchestrator | Sunday 08 February 2026 03:40:51 +0000 (0:00:00.317) 0:00:00.682 *******
2026-02-08 03:40:59.061483 | orchestrator | ok: [testbed-node-3]
2026-02-08 03:40:59.061493 | orchestrator |
2026-02-08 03:40:59.061503 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-08 03:40:59.061513 | orchestrator | Sunday 08 February 2026 03:40:51 +0000 (0:00:00.328) 0:00:01.010 *******
2026-02-08 03:40:59.061522 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0)
2026-02-08 03:40:59.061532 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1)
2026-02-08 03:40:59.061547 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2)
2026-02-08 03:40:59.061556 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3)
2026-02-08 03:40:59.061566 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4)
2026-02-08 03:40:59.061576 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5)
2026-02-08 03:40:59.061585 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6)
2026-02-08 03:40:59.061594 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7)
2026-02-08 03:40:59.061604 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda)
2026-02-08 03:40:59.061613 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb)
2026-02-08 03:40:59.061623 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc)
2026-02-08 03:40:59.061632 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd)
2026-02-08 03:40:59.061642 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0)
2026-02-08 03:40:59.061651 | orchestrator |
2026-02-08 03:40:59.061661 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-08 03:40:59.061670 | orchestrator | Sunday 08 February 2026 03:40:51 +0000 (0:00:00.614) 0:00:01.624 *******
2026-02-08 03:40:59.061680 | orchestrator | skipping: [testbed-node-3]
2026-02-08 03:40:59.061690 | orchestrator |
2026-02-08 03:40:59.061700 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-08 03:40:59.061709 | orchestrator | Sunday 08 February 2026 03:40:52 +0000 (0:00:00.197) 0:00:01.822 *******
2026-02-08 03:40:59.061741 | orchestrator | skipping: [testbed-node-3]
2026-02-08 03:40:59.061752 | orchestrator |
2026-02-08 03:40:59.061761 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-08 03:40:59.061771 | orchestrator | Sunday 08 February 2026 03:40:52 +0000 (0:00:00.264) 0:00:02.086 *******
2026-02-08 03:40:59.061780 | orchestrator | skipping: [testbed-node-3]
2026-02-08 03:40:59.061792 | orchestrator |
2026-02-08 03:40:59.061803 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-08 03:40:59.061815 | orchestrator | Sunday 08 February 2026 03:40:52 +0000 (0:00:00.210) 0:00:02.297 *******
2026-02-08 03:40:59.061826 | orchestrator | skipping: [testbed-node-3]
2026-02-08 03:40:59.061837 | orchestrator |
2026-02-08 03:40:59.061881 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-08 03:40:59.061894 | orchestrator | Sunday 08 February 2026 03:40:52 +0000 (0:00:00.209) 0:00:02.507 *******
2026-02-08 03:40:59.061904 | orchestrator | skipping: [testbed-node-3]
2026-02-08 03:40:59.061916 | orchestrator |
2026-02-08 03:40:59.061927 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-08 03:40:59.061938 | orchestrator | Sunday 08 February 2026 03:40:53 +0000 (0:00:00.240) 0:00:02.748 *******
2026-02-08 03:40:59.061950 | orchestrator | skipping: [testbed-node-3]
2026-02-08 03:40:59.061961 | orchestrator |
2026-02-08 03:40:59.061973 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-08 03:40:59.061984 | orchestrator | Sunday 08 February 2026 03:40:53 +0000 (0:00:00.270) 0:00:03.018 *******
2026-02-08 03:40:59.061995 | orchestrator | skipping: [testbed-node-3]
2026-02-08 03:40:59.062006 | orchestrator |
2026-02-08 03:40:59.062091 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-08 03:40:59.062105 | orchestrator | Sunday 08 February 2026 03:40:53 +0000 (0:00:00.241) 0:00:03.260 *******
2026-02-08 03:40:59.062117 | orchestrator | skipping: [testbed-node-3]
2026-02-08 03:40:59.062128 | orchestrator |
2026-02-08 03:40:59.062140 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-08 03:40:59.062152 | orchestrator | Sunday 08 February 2026 03:40:53 +0000 (0:00:00.203) 0:00:03.463 *******
2026-02-08 03:40:59.062164 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_8eb95c7e-79eb-481c-a9c3-b8351915337f)
2026-02-08 03:40:59.062175 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_8eb95c7e-79eb-481c-a9c3-b8351915337f)
2026-02-08 03:40:59.062185 | orchestrator |
2026-02-08 03:40:59.062194 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-08 03:40:59.062222 | orchestrator | Sunday 08 February 2026 03:40:54 +0000 (0:00:00.459) 0:00:03.923 *******
2026-02-08 03:40:59.062232 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_f936cccd-0c4c-4cd7-b507-1bacbfb024c1)
2026-02-08 03:40:59.062242 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_f936cccd-0c4c-4cd7-b507-1bacbfb024c1)
2026-02-08 03:40:59.062252 | orchestrator |
2026-02-08 03:40:59.062261 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-08 03:40:59.062271 | orchestrator | Sunday 08 February 2026 03:40:54 +0000 (0:00:00.671) 0:00:04.594 *******
2026-02-08 03:40:59.062280 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_f64e84f9-05a0-4abf-b38a-86e604a2541e)
2026-02-08 03:40:59.062290 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_f64e84f9-05a0-4abf-b38a-86e604a2541e)
2026-02-08 03:40:59.062300 | orchestrator |
2026-02-08 03:40:59.062309 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-08 03:40:59.062318 | orchestrator | Sunday 08 February 2026 03:40:55 +0000 (0:00:00.689) 0:00:05.283 *******
2026-02-08 03:40:59.062328 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_1b3b2ead-9b22-4b4d-a30d-f81b3b57c055)
2026-02-08 03:40:59.062338 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_1b3b2ead-9b22-4b4d-a30d-f81b3b57c055)
2026-02-08 03:40:59.062357 | orchestrator |
2026-02-08 03:40:59.062379 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-08 03:40:59.062397 | orchestrator | Sunday 08 February 2026 03:40:56 +0000 (0:00:00.958) 0:00:06.242 *******
2026-02-08 03:40:59.062415 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001)
2026-02-08 03:40:59.062433 | orchestrator |
2026-02-08 03:40:59.062449 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-08 03:40:59.062460 | orchestrator | Sunday 08 February 2026 03:40:56 +0000 (0:00:00.377) 0:00:06.620 *******
2026-02-08 03:40:59.062469 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0)
2026-02-08 03:40:59.062479 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1)
2026-02-08 03:40:59.062488 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2)
2026-02-08 03:40:59.062498 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3)
2026-02-08 03:40:59.062507 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4)
2026-02-08 03:40:59.062517 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5)
2026-02-08 03:40:59.062526 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6)
2026-02-08 03:40:59.062536 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml
for testbed-node-3 => (item=loop7) 2026-02-08 03:40:59.062545 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda) 2026-02-08 03:40:59.062555 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb) 2026-02-08 03:40:59.062564 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc) 2026-02-08 03:40:59.062573 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd) 2026-02-08 03:40:59.062583 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0) 2026-02-08 03:40:59.062592 | orchestrator | 2026-02-08 03:40:59.062602 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-08 03:40:59.062612 | orchestrator | Sunday 08 February 2026 03:40:57 +0000 (0:00:00.398) 0:00:07.018 ******* 2026-02-08 03:40:59.062621 | orchestrator | skipping: [testbed-node-3] 2026-02-08 03:40:59.062631 | orchestrator | 2026-02-08 03:40:59.062640 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-08 03:40:59.062650 | orchestrator | Sunday 08 February 2026 03:40:57 +0000 (0:00:00.213) 0:00:07.231 ******* 2026-02-08 03:40:59.062660 | orchestrator | skipping: [testbed-node-3] 2026-02-08 03:40:59.062669 | orchestrator | 2026-02-08 03:40:59.062679 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-08 03:40:59.062688 | orchestrator | Sunday 08 February 2026 03:40:57 +0000 (0:00:00.249) 0:00:07.481 ******* 2026-02-08 03:40:59.062698 | orchestrator | skipping: [testbed-node-3] 2026-02-08 03:40:59.062723 | orchestrator | 2026-02-08 03:40:59.062742 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-08 03:40:59.062751 | orchestrator | Sunday 08 February 2026 03:40:58 
+0000 (0:00:00.237) 0:00:07.718 ******* 2026-02-08 03:40:59.062761 | orchestrator | skipping: [testbed-node-3] 2026-02-08 03:40:59.062770 | orchestrator | 2026-02-08 03:40:59.062780 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-08 03:40:59.062790 | orchestrator | Sunday 08 February 2026 03:40:58 +0000 (0:00:00.230) 0:00:07.949 ******* 2026-02-08 03:40:59.062799 | orchestrator | skipping: [testbed-node-3] 2026-02-08 03:40:59.062809 | orchestrator | 2026-02-08 03:40:59.062819 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-08 03:40:59.062829 | orchestrator | Sunday 08 February 2026 03:40:58 +0000 (0:00:00.240) 0:00:08.189 ******* 2026-02-08 03:40:59.062838 | orchestrator | skipping: [testbed-node-3] 2026-02-08 03:40:59.062878 | orchestrator | 2026-02-08 03:40:59.062889 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-08 03:40:59.062899 | orchestrator | Sunday 08 February 2026 03:40:58 +0000 (0:00:00.272) 0:00:08.461 ******* 2026-02-08 03:40:59.062908 | orchestrator | skipping: [testbed-node-3] 2026-02-08 03:40:59.062918 | orchestrator | 2026-02-08 03:40:59.062934 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-08 03:41:07.372167 | orchestrator | Sunday 08 February 2026 03:40:59 +0000 (0:00:00.239) 0:00:08.701 ******* 2026-02-08 03:41:07.372277 | orchestrator | skipping: [testbed-node-3] 2026-02-08 03:41:07.372297 | orchestrator | 2026-02-08 03:41:07.372313 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-08 03:41:07.372328 | orchestrator | Sunday 08 February 2026 03:40:59 +0000 (0:00:00.218) 0:00:08.919 ******* 2026-02-08 03:41:07.372343 | orchestrator | ok: [testbed-node-3] => (item=sda1) 2026-02-08 03:41:07.372361 | orchestrator | ok: [testbed-node-3] => (item=sda14) 2026-02-08 
03:41:07.372377 | orchestrator | ok: [testbed-node-3] => (item=sda15) 2026-02-08 03:41:07.372393 | orchestrator | ok: [testbed-node-3] => (item=sda16) 2026-02-08 03:41:07.372408 | orchestrator | 2026-02-08 03:41:07.372424 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-08 03:41:07.372439 | orchestrator | Sunday 08 February 2026 03:41:00 +0000 (0:00:01.296) 0:00:10.216 ******* 2026-02-08 03:41:07.372457 | orchestrator | skipping: [testbed-node-3] 2026-02-08 03:41:07.372474 | orchestrator | 2026-02-08 03:41:07.372490 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-08 03:41:07.372506 | orchestrator | Sunday 08 February 2026 03:41:00 +0000 (0:00:00.224) 0:00:10.440 ******* 2026-02-08 03:41:07.372519 | orchestrator | skipping: [testbed-node-3] 2026-02-08 03:41:07.372533 | orchestrator | 2026-02-08 03:41:07.372546 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-08 03:41:07.372579 | orchestrator | Sunday 08 February 2026 03:41:01 +0000 (0:00:00.219) 0:00:10.660 ******* 2026-02-08 03:41:07.372594 | orchestrator | skipping: [testbed-node-3] 2026-02-08 03:41:07.372609 | orchestrator | 2026-02-08 03:41:07.372623 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-08 03:41:07.372638 | orchestrator | Sunday 08 February 2026 03:41:01 +0000 (0:00:00.223) 0:00:10.884 ******* 2026-02-08 03:41:07.372652 | orchestrator | skipping: [testbed-node-3] 2026-02-08 03:41:07.372665 | orchestrator | 2026-02-08 03:41:07.372679 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2026-02-08 03:41:07.372694 | orchestrator | Sunday 08 February 2026 03:41:01 +0000 (0:00:00.250) 0:00:11.134 ******* 2026-02-08 03:41:07.372710 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': None}) 2026-02-08 03:41:07.372726 | 
orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': None}) 2026-02-08 03:41:07.372742 | orchestrator | 2026-02-08 03:41:07.372757 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2026-02-08 03:41:07.372772 | orchestrator | Sunday 08 February 2026 03:41:01 +0000 (0:00:00.217) 0:00:11.352 ******* 2026-02-08 03:41:07.372787 | orchestrator | skipping: [testbed-node-3] 2026-02-08 03:41:07.372803 | orchestrator | 2026-02-08 03:41:07.372818 | orchestrator | TASK [Generate DB VG names] **************************************************** 2026-02-08 03:41:07.372833 | orchestrator | Sunday 08 February 2026 03:41:01 +0000 (0:00:00.142) 0:00:11.494 ******* 2026-02-08 03:41:07.372849 | orchestrator | skipping: [testbed-node-3] 2026-02-08 03:41:07.372892 | orchestrator | 2026-02-08 03:41:07.372908 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2026-02-08 03:41:07.372925 | orchestrator | Sunday 08 February 2026 03:41:01 +0000 (0:00:00.154) 0:00:11.648 ******* 2026-02-08 03:41:07.372939 | orchestrator | skipping: [testbed-node-3] 2026-02-08 03:41:07.372955 | orchestrator | 2026-02-08 03:41:07.372969 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2026-02-08 03:41:07.372985 | orchestrator | Sunday 08 February 2026 03:41:02 +0000 (0:00:00.159) 0:00:11.807 ******* 2026-02-08 03:41:07.373029 | orchestrator | ok: [testbed-node-3] 2026-02-08 03:41:07.373045 | orchestrator | 2026-02-08 03:41:07.373061 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2026-02-08 03:41:07.373076 | orchestrator | Sunday 08 February 2026 03:41:02 +0000 (0:00:00.151) 0:00:11.959 ******* 2026-02-08 03:41:07.373094 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '658e9559-2696-538a-a0a4-811fe95d0be4'}}) 2026-02-08 03:41:07.373109 | orchestrator | ok: 
[testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'edf9913e-48af-595a-836b-515c584cb757'}}) 2026-02-08 03:41:07.373123 | orchestrator | 2026-02-08 03:41:07.373138 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2026-02-08 03:41:07.373153 | orchestrator | Sunday 08 February 2026 03:41:02 +0000 (0:00:00.175) 0:00:12.134 ******* 2026-02-08 03:41:07.373168 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '658e9559-2696-538a-a0a4-811fe95d0be4'}})  2026-02-08 03:41:07.373186 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'edf9913e-48af-595a-836b-515c584cb757'}})  2026-02-08 03:41:07.373201 | orchestrator | skipping: [testbed-node-3] 2026-02-08 03:41:07.373215 | orchestrator | 2026-02-08 03:41:07.373230 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2026-02-08 03:41:07.373245 | orchestrator | Sunday 08 February 2026 03:41:02 +0000 (0:00:00.377) 0:00:12.511 ******* 2026-02-08 03:41:07.373260 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '658e9559-2696-538a-a0a4-811fe95d0be4'}})  2026-02-08 03:41:07.373276 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'edf9913e-48af-595a-836b-515c584cb757'}})  2026-02-08 03:41:07.373290 | orchestrator | skipping: [testbed-node-3] 2026-02-08 03:41:07.373305 | orchestrator | 2026-02-08 03:41:07.373319 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2026-02-08 03:41:07.373335 | orchestrator | Sunday 08 February 2026 03:41:03 +0000 (0:00:00.164) 0:00:12.675 ******* 2026-02-08 03:41:07.373350 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '658e9559-2696-538a-a0a4-811fe95d0be4'}})  2026-02-08 03:41:07.373383 | orchestrator | skipping: [testbed-node-3] 
=> (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'edf9913e-48af-595a-836b-515c584cb757'}})  2026-02-08 03:41:07.373400 | orchestrator | skipping: [testbed-node-3] 2026-02-08 03:41:07.373414 | orchestrator | 2026-02-08 03:41:07.373428 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2026-02-08 03:41:07.373437 | orchestrator | Sunday 08 February 2026 03:41:03 +0000 (0:00:00.165) 0:00:12.840 ******* 2026-02-08 03:41:07.373446 | orchestrator | ok: [testbed-node-3] 2026-02-08 03:41:07.373454 | orchestrator | 2026-02-08 03:41:07.373464 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2026-02-08 03:41:07.373472 | orchestrator | Sunday 08 February 2026 03:41:03 +0000 (0:00:00.154) 0:00:12.994 ******* 2026-02-08 03:41:07.373481 | orchestrator | ok: [testbed-node-3] 2026-02-08 03:41:07.373489 | orchestrator | 2026-02-08 03:41:07.373498 | orchestrator | TASK [Set DB devices config data] ********************************************** 2026-02-08 03:41:07.373506 | orchestrator | Sunday 08 February 2026 03:41:03 +0000 (0:00:00.149) 0:00:13.144 ******* 2026-02-08 03:41:07.373515 | orchestrator | skipping: [testbed-node-3] 2026-02-08 03:41:07.373524 | orchestrator | 2026-02-08 03:41:07.373532 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2026-02-08 03:41:07.373541 | orchestrator | Sunday 08 February 2026 03:41:03 +0000 (0:00:00.147) 0:00:13.291 ******* 2026-02-08 03:41:07.373550 | orchestrator | skipping: [testbed-node-3] 2026-02-08 03:41:07.373558 | orchestrator | 2026-02-08 03:41:07.373567 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2026-02-08 03:41:07.373583 | orchestrator | Sunday 08 February 2026 03:41:03 +0000 (0:00:00.148) 0:00:13.440 ******* 2026-02-08 03:41:07.373592 | orchestrator | skipping: [testbed-node-3] 2026-02-08 03:41:07.373609 | orchestrator | 2026-02-08 
03:41:07.373618 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2026-02-08 03:41:07.373626 | orchestrator | Sunday 08 February 2026 03:41:03 +0000 (0:00:00.143) 0:00:13.583 ******* 2026-02-08 03:41:07.373635 | orchestrator | ok: [testbed-node-3] => { 2026-02-08 03:41:07.373644 | orchestrator |  "ceph_osd_devices": { 2026-02-08 03:41:07.373652 | orchestrator |  "sdb": { 2026-02-08 03:41:07.373661 | orchestrator |  "osd_lvm_uuid": "658e9559-2696-538a-a0a4-811fe95d0be4" 2026-02-08 03:41:07.373670 | orchestrator |  }, 2026-02-08 03:41:07.373679 | orchestrator |  "sdc": { 2026-02-08 03:41:07.373687 | orchestrator |  "osd_lvm_uuid": "edf9913e-48af-595a-836b-515c584cb757" 2026-02-08 03:41:07.373696 | orchestrator |  } 2026-02-08 03:41:07.373704 | orchestrator |  } 2026-02-08 03:41:07.373713 | orchestrator | } 2026-02-08 03:41:07.373722 | orchestrator | 2026-02-08 03:41:07.373731 | orchestrator | TASK [Print WAL devices] ******************************************************* 2026-02-08 03:41:07.373739 | orchestrator | Sunday 08 February 2026 03:41:04 +0000 (0:00:00.151) 0:00:13.734 ******* 2026-02-08 03:41:07.373748 | orchestrator | skipping: [testbed-node-3] 2026-02-08 03:41:07.373756 | orchestrator | 2026-02-08 03:41:07.373765 | orchestrator | TASK [Print DB devices] ******************************************************** 2026-02-08 03:41:07.373773 | orchestrator | Sunday 08 February 2026 03:41:04 +0000 (0:00:00.122) 0:00:13.857 ******* 2026-02-08 03:41:07.373782 | orchestrator | skipping: [testbed-node-3] 2026-02-08 03:41:07.373791 | orchestrator | 2026-02-08 03:41:07.373799 | orchestrator | TASK [Print shared DB/WAL devices] ********************************************* 2026-02-08 03:41:07.373808 | orchestrator | Sunday 08 February 2026 03:41:04 +0000 (0:00:00.143) 0:00:14.000 ******* 2026-02-08 03:41:07.373816 | orchestrator | skipping: [testbed-node-3] 2026-02-08 03:41:07.373825 | orchestrator | 2026-02-08 
03:41:07.373833 | orchestrator | TASK [Print configuration data] ************************************************ 2026-02-08 03:41:07.373842 | orchestrator | Sunday 08 February 2026 03:41:04 +0000 (0:00:00.141) 0:00:14.141 ******* 2026-02-08 03:41:07.373850 | orchestrator | changed: [testbed-node-3] => { 2026-02-08 03:41:07.373897 | orchestrator |  "_ceph_configure_lvm_config_data": { 2026-02-08 03:41:07.373907 | orchestrator |  "ceph_osd_devices": { 2026-02-08 03:41:07.373916 | orchestrator |  "sdb": { 2026-02-08 03:41:07.373925 | orchestrator |  "osd_lvm_uuid": "658e9559-2696-538a-a0a4-811fe95d0be4" 2026-02-08 03:41:07.373934 | orchestrator |  }, 2026-02-08 03:41:07.373942 | orchestrator |  "sdc": { 2026-02-08 03:41:07.373951 | orchestrator |  "osd_lvm_uuid": "edf9913e-48af-595a-836b-515c584cb757" 2026-02-08 03:41:07.373959 | orchestrator |  } 2026-02-08 03:41:07.373968 | orchestrator |  }, 2026-02-08 03:41:07.373977 | orchestrator |  "lvm_volumes": [ 2026-02-08 03:41:07.373985 | orchestrator |  { 2026-02-08 03:41:07.373994 | orchestrator |  "data": "osd-block-658e9559-2696-538a-a0a4-811fe95d0be4", 2026-02-08 03:41:07.374003 | orchestrator |  "data_vg": "ceph-658e9559-2696-538a-a0a4-811fe95d0be4" 2026-02-08 03:41:07.374011 | orchestrator |  }, 2026-02-08 03:41:07.374062 | orchestrator |  { 2026-02-08 03:41:07.374071 | orchestrator |  "data": "osd-block-edf9913e-48af-595a-836b-515c584cb757", 2026-02-08 03:41:07.374080 | orchestrator |  "data_vg": "ceph-edf9913e-48af-595a-836b-515c584cb757" 2026-02-08 03:41:07.374088 | orchestrator |  } 2026-02-08 03:41:07.374097 | orchestrator |  ] 2026-02-08 03:41:07.374106 | orchestrator |  } 2026-02-08 03:41:07.374114 | orchestrator | } 2026-02-08 03:41:07.374124 | orchestrator | 2026-02-08 03:41:07.374132 | orchestrator | RUNNING HANDLER [Write configuration file] ************************************* 2026-02-08 03:41:07.374141 | orchestrator | Sunday 08 February 2026 03:41:04 +0000 (0:00:00.453) 0:00:14.595 ******* 2026-02-08 
03:41:07.374150 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-02-08 03:41:07.374166 | orchestrator | 2026-02-08 03:41:07.374179 | orchestrator | PLAY [Ceph configure LVM] ****************************************************** 2026-02-08 03:41:07.374187 | orchestrator | 2026-02-08 03:41:07.374196 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2026-02-08 03:41:07.374205 | orchestrator | Sunday 08 February 2026 03:41:06 +0000 (0:00:01.878) 0:00:16.473 ******* 2026-02-08 03:41:07.374213 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2026-02-08 03:41:07.374222 | orchestrator | 2026-02-08 03:41:07.374231 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2026-02-08 03:41:07.374240 | orchestrator | Sunday 08 February 2026 03:41:07 +0000 (0:00:00.282) 0:00:16.755 ******* 2026-02-08 03:41:07.374250 | orchestrator | ok: [testbed-node-4] 2026-02-08 03:41:07.374259 | orchestrator | 2026-02-08 03:41:07.374275 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-08 03:41:15.983377 | orchestrator | Sunday 08 February 2026 03:41:07 +0000 (0:00:00.260) 0:00:17.016 ******* 2026-02-08 03:41:15.983504 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0) 2026-02-08 03:41:15.983522 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1) 2026-02-08 03:41:15.983534 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2) 2026-02-08 03:41:15.983545 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3) 2026-02-08 03:41:15.983556 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4) 2026-02-08 03:41:15.983567 | orchestrator | included: 
/ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5) 2026-02-08 03:41:15.983578 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6) 2026-02-08 03:41:15.983589 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7) 2026-02-08 03:41:15.983602 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda) 2026-02-08 03:41:15.983649 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb) 2026-02-08 03:41:15.983674 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc) 2026-02-08 03:41:15.983692 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd) 2026-02-08 03:41:15.983708 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0) 2026-02-08 03:41:15.983727 | orchestrator | 2026-02-08 03:41:15.983746 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-08 03:41:15.983765 | orchestrator | Sunday 08 February 2026 03:41:07 +0000 (0:00:00.416) 0:00:17.433 ******* 2026-02-08 03:41:15.983785 | orchestrator | skipping: [testbed-node-4] 2026-02-08 03:41:15.983804 | orchestrator | 2026-02-08 03:41:15.983824 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-08 03:41:15.983842 | orchestrator | Sunday 08 February 2026 03:41:07 +0000 (0:00:00.217) 0:00:17.651 ******* 2026-02-08 03:41:15.983861 | orchestrator | skipping: [testbed-node-4] 2026-02-08 03:41:15.983916 | orchestrator | 2026-02-08 03:41:15.983935 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-08 03:41:15.983953 | orchestrator | Sunday 08 February 2026 03:41:08 +0000 (0:00:00.218) 0:00:17.869 ******* 2026-02-08 03:41:15.983973 | orchestrator | skipping: 
[testbed-node-4] 2026-02-08 03:41:15.983992 | orchestrator | 2026-02-08 03:41:15.984011 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-08 03:41:15.984032 | orchestrator | Sunday 08 February 2026 03:41:08 +0000 (0:00:00.204) 0:00:18.074 ******* 2026-02-08 03:41:15.984052 | orchestrator | skipping: [testbed-node-4] 2026-02-08 03:41:15.984071 | orchestrator | 2026-02-08 03:41:15.984089 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-08 03:41:15.984101 | orchestrator | Sunday 08 February 2026 03:41:09 +0000 (0:00:00.687) 0:00:18.761 ******* 2026-02-08 03:41:15.984142 | orchestrator | skipping: [testbed-node-4] 2026-02-08 03:41:15.984155 | orchestrator | 2026-02-08 03:41:15.984168 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-08 03:41:15.984180 | orchestrator | Sunday 08 February 2026 03:41:09 +0000 (0:00:00.215) 0:00:18.977 ******* 2026-02-08 03:41:15.984192 | orchestrator | skipping: [testbed-node-4] 2026-02-08 03:41:15.984205 | orchestrator | 2026-02-08 03:41:15.984218 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-08 03:41:15.984230 | orchestrator | Sunday 08 February 2026 03:41:09 +0000 (0:00:00.208) 0:00:19.185 ******* 2026-02-08 03:41:15.984243 | orchestrator | skipping: [testbed-node-4] 2026-02-08 03:41:15.984256 | orchestrator | 2026-02-08 03:41:15.984266 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-08 03:41:15.984277 | orchestrator | Sunday 08 February 2026 03:41:09 +0000 (0:00:00.194) 0:00:19.380 ******* 2026-02-08 03:41:15.984288 | orchestrator | skipping: [testbed-node-4] 2026-02-08 03:41:15.984298 | orchestrator | 2026-02-08 03:41:15.984309 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-08 03:41:15.984320 | 
orchestrator | Sunday 08 February 2026 03:41:09 +0000 (0:00:00.223) 0:00:19.603 ******* 2026-02-08 03:41:15.984331 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_6152c601-f22c-4ab1-825c-0b7a8c2f9bf8) 2026-02-08 03:41:15.984343 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_6152c601-f22c-4ab1-825c-0b7a8c2f9bf8) 2026-02-08 03:41:15.984353 | orchestrator | 2026-02-08 03:41:15.984364 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-08 03:41:15.984375 | orchestrator | Sunday 08 February 2026 03:41:10 +0000 (0:00:00.442) 0:00:20.046 ******* 2026-02-08 03:41:15.984386 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_2c937877-c8d8-449b-a5f6-0239aca924e2) 2026-02-08 03:41:15.984397 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_2c937877-c8d8-449b-a5f6-0239aca924e2) 2026-02-08 03:41:15.984408 | orchestrator | 2026-02-08 03:41:15.984419 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-08 03:41:15.984430 | orchestrator | Sunday 08 February 2026 03:41:10 +0000 (0:00:00.460) 0:00:20.507 ******* 2026-02-08 03:41:15.984440 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_e630d271-3aac-4ce5-a41f-fdcd87f60fea) 2026-02-08 03:41:15.984451 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_e630d271-3aac-4ce5-a41f-fdcd87f60fea) 2026-02-08 03:41:15.984462 | orchestrator | 2026-02-08 03:41:15.984473 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-08 03:41:15.984504 | orchestrator | Sunday 08 February 2026 03:41:11 +0000 (0:00:00.443) 0:00:20.950 ******* 2026-02-08 03:41:15.984515 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_33bf36ec-77e2-4563-8915-2d028f665133) 2026-02-08 03:41:15.984526 | orchestrator | ok: [testbed-node-4] => 
(item=scsi-SQEMU_QEMU_HARDDISK_33bf36ec-77e2-4563-8915-2d028f665133) 2026-02-08 03:41:15.984537 | orchestrator | 2026-02-08 03:41:15.984548 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-08 03:41:15.984559 | orchestrator | Sunday 08 February 2026 03:41:11 +0000 (0:00:00.462) 0:00:21.412 ******* 2026-02-08 03:41:15.984569 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001) 2026-02-08 03:41:15.984580 | orchestrator | 2026-02-08 03:41:15.984591 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-08 03:41:15.984601 | orchestrator | Sunday 08 February 2026 03:41:12 +0000 (0:00:00.334) 0:00:21.746 ******* 2026-02-08 03:41:15.984612 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0) 2026-02-08 03:41:15.984623 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1) 2026-02-08 03:41:15.984643 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2) 2026-02-08 03:41:15.984662 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3) 2026-02-08 03:41:15.984673 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4) 2026-02-08 03:41:15.984683 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5) 2026-02-08 03:41:15.984694 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6) 2026-02-08 03:41:15.984704 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7) 2026-02-08 03:41:15.984715 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda) 2026-02-08 03:41:15.984725 | orchestrator | included: 
/ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb) 2026-02-08 03:41:15.984736 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc) 2026-02-08 03:41:15.984747 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd) 2026-02-08 03:41:15.984757 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0) 2026-02-08 03:41:15.984768 | orchestrator | 2026-02-08 03:41:15.984780 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-08 03:41:15.984790 | orchestrator | Sunday 08 February 2026 03:41:12 +0000 (0:00:00.399) 0:00:22.146 ******* 2026-02-08 03:41:15.984801 | orchestrator | skipping: [testbed-node-4] 2026-02-08 03:41:15.984812 | orchestrator | 2026-02-08 03:41:15.984823 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-08 03:41:15.984834 | orchestrator | Sunday 08 February 2026 03:41:13 +0000 (0:00:00.704) 0:00:22.851 ******* 2026-02-08 03:41:15.984844 | orchestrator | skipping: [testbed-node-4] 2026-02-08 03:41:15.984855 | orchestrator | 2026-02-08 03:41:15.984900 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-08 03:41:15.984912 | orchestrator | Sunday 08 February 2026 03:41:13 +0000 (0:00:00.229) 0:00:23.081 ******* 2026-02-08 03:41:15.984923 | orchestrator | skipping: [testbed-node-4] 2026-02-08 03:41:15.984934 | orchestrator | 2026-02-08 03:41:15.984944 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-08 03:41:15.984955 | orchestrator | Sunday 08 February 2026 03:41:13 +0000 (0:00:00.211) 0:00:23.292 ******* 2026-02-08 03:41:15.984966 | orchestrator | skipping: [testbed-node-4] 2026-02-08 03:41:15.984976 | orchestrator | 2026-02-08 03:41:15.984987 | orchestrator | TASK [Add known 
partitions to the list of available block devices] ************* 2026-02-08 03:41:15.984998 | orchestrator | Sunday 08 February 2026 03:41:13 +0000 (0:00:00.225) 0:00:23.518 ******* 2026-02-08 03:41:15.985008 | orchestrator | skipping: [testbed-node-4] 2026-02-08 03:41:15.985019 | orchestrator | 2026-02-08 03:41:15.985030 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-08 03:41:15.985040 | orchestrator | Sunday 08 February 2026 03:41:14 +0000 (0:00:00.235) 0:00:23.753 ******* 2026-02-08 03:41:15.985051 | orchestrator | skipping: [testbed-node-4] 2026-02-08 03:41:15.985062 | orchestrator | 2026-02-08 03:41:15.985072 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-08 03:41:15.985083 | orchestrator | Sunday 08 February 2026 03:41:14 +0000 (0:00:00.247) 0:00:24.001 ******* 2026-02-08 03:41:15.985094 | orchestrator | skipping: [testbed-node-4] 2026-02-08 03:41:15.985104 | orchestrator | 2026-02-08 03:41:15.985115 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-08 03:41:15.985126 | orchestrator | Sunday 08 February 2026 03:41:14 +0000 (0:00:00.227) 0:00:24.229 ******* 2026-02-08 03:41:15.985136 | orchestrator | skipping: [testbed-node-4] 2026-02-08 03:41:15.985147 | orchestrator | 2026-02-08 03:41:15.985158 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-08 03:41:15.985168 | orchestrator | Sunday 08 February 2026 03:41:14 +0000 (0:00:00.221) 0:00:24.450 ******* 2026-02-08 03:41:15.985179 | orchestrator | ok: [testbed-node-4] => (item=sda1) 2026-02-08 03:41:15.985199 | orchestrator | ok: [testbed-node-4] => (item=sda14) 2026-02-08 03:41:15.985210 | orchestrator | ok: [testbed-node-4] => (item=sda15) 2026-02-08 03:41:15.985221 | orchestrator | ok: [testbed-node-4] => (item=sda16) 2026-02-08 03:41:15.985231 | orchestrator | 2026-02-08 
03:41:15.985242 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-08 03:41:15.985253 | orchestrator | Sunday 08 February 2026 03:41:15 +0000 (0:00:00.952) 0:00:25.403 ******* 2026-02-08 03:41:15.985264 | orchestrator | skipping: [testbed-node-4] 2026-02-08 03:41:22.875107 | orchestrator | 2026-02-08 03:41:22.875188 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-08 03:41:22.875198 | orchestrator | Sunday 08 February 2026 03:41:15 +0000 (0:00:00.225) 0:00:25.628 ******* 2026-02-08 03:41:22.875204 | orchestrator | skipping: [testbed-node-4] 2026-02-08 03:41:22.875210 | orchestrator | 2026-02-08 03:41:22.875216 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-08 03:41:22.875221 | orchestrator | Sunday 08 February 2026 03:41:16 +0000 (0:00:00.231) 0:00:25.859 ******* 2026-02-08 03:41:22.875226 | orchestrator | skipping: [testbed-node-4] 2026-02-08 03:41:22.875231 | orchestrator | 2026-02-08 03:41:22.875237 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-08 03:41:22.875242 | orchestrator | Sunday 08 February 2026 03:41:16 +0000 (0:00:00.768) 0:00:26.628 ******* 2026-02-08 03:41:22.875247 | orchestrator | skipping: [testbed-node-4] 2026-02-08 03:41:22.875252 | orchestrator | 2026-02-08 03:41:22.875257 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2026-02-08 03:41:22.875263 | orchestrator | Sunday 08 February 2026 03:41:17 +0000 (0:00:00.211) 0:00:26.839 ******* 2026-02-08 03:41:22.875268 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': None}) 2026-02-08 03:41:22.875273 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': None}) 2026-02-08 03:41:22.875278 | orchestrator | 2026-02-08 03:41:22.875295 | orchestrator | TASK [Generate WAL VG names] 
*************************************************** 2026-02-08 03:41:22.875301 | orchestrator | Sunday 08 February 2026 03:41:17 +0000 (0:00:00.185) 0:00:27.025 ******* 2026-02-08 03:41:22.875306 | orchestrator | skipping: [testbed-node-4] 2026-02-08 03:41:22.875311 | orchestrator | 2026-02-08 03:41:22.875316 | orchestrator | TASK [Generate DB VG names] **************************************************** 2026-02-08 03:41:22.875321 | orchestrator | Sunday 08 February 2026 03:41:17 +0000 (0:00:00.163) 0:00:27.188 ******* 2026-02-08 03:41:22.875326 | orchestrator | skipping: [testbed-node-4] 2026-02-08 03:41:22.875331 | orchestrator | 2026-02-08 03:41:22.875336 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2026-02-08 03:41:22.875342 | orchestrator | Sunday 08 February 2026 03:41:17 +0000 (0:00:00.148) 0:00:27.337 ******* 2026-02-08 03:41:22.875347 | orchestrator | skipping: [testbed-node-4] 2026-02-08 03:41:22.875352 | orchestrator | 2026-02-08 03:41:22.875357 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2026-02-08 03:41:22.875362 | orchestrator | Sunday 08 February 2026 03:41:17 +0000 (0:00:00.149) 0:00:27.486 ******* 2026-02-08 03:41:22.875367 | orchestrator | ok: [testbed-node-4] 2026-02-08 03:41:22.875372 | orchestrator | 2026-02-08 03:41:22.875377 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2026-02-08 03:41:22.875383 | orchestrator | Sunday 08 February 2026 03:41:17 +0000 (0:00:00.151) 0:00:27.638 ******* 2026-02-08 03:41:22.875388 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '1f36c880-548c-5a66-856f-2c4e799d94fc'}}) 2026-02-08 03:41:22.875394 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '98a4cb59-dd7a-5ec9-b94d-174a40339046'}}) 2026-02-08 03:41:22.875399 | orchestrator | 2026-02-08 03:41:22.875404 | orchestrator | TASK 
[Generate lvm_volumes structure (block + db)] ***************************** 2026-02-08 03:41:22.875409 | orchestrator | Sunday 08 February 2026 03:41:18 +0000 (0:00:00.187) 0:00:27.826 ******* 2026-02-08 03:41:22.875415 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '1f36c880-548c-5a66-856f-2c4e799d94fc'}})  2026-02-08 03:41:22.875437 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '98a4cb59-dd7a-5ec9-b94d-174a40339046'}})  2026-02-08 03:41:22.875443 | orchestrator | skipping: [testbed-node-4] 2026-02-08 03:41:22.875448 | orchestrator | 2026-02-08 03:41:22.875453 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2026-02-08 03:41:22.875458 | orchestrator | Sunday 08 February 2026 03:41:18 +0000 (0:00:00.168) 0:00:27.994 ******* 2026-02-08 03:41:22.875463 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '1f36c880-548c-5a66-856f-2c4e799d94fc'}})  2026-02-08 03:41:22.875468 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '98a4cb59-dd7a-5ec9-b94d-174a40339046'}})  2026-02-08 03:41:22.875474 | orchestrator | skipping: [testbed-node-4] 2026-02-08 03:41:22.875479 | orchestrator | 2026-02-08 03:41:22.875484 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2026-02-08 03:41:22.875489 | orchestrator | Sunday 08 February 2026 03:41:18 +0000 (0:00:00.171) 0:00:28.165 ******* 2026-02-08 03:41:22.875494 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '1f36c880-548c-5a66-856f-2c4e799d94fc'}})  2026-02-08 03:41:22.875499 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '98a4cb59-dd7a-5ec9-b94d-174a40339046'}})  2026-02-08 03:41:22.875504 | orchestrator | skipping: [testbed-node-4] 2026-02-08 03:41:22.875509 | 
orchestrator | 2026-02-08 03:41:22.875514 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2026-02-08 03:41:22.875519 | orchestrator | Sunday 08 February 2026 03:41:18 +0000 (0:00:00.162) 0:00:28.328 ******* 2026-02-08 03:41:22.875524 | orchestrator | ok: [testbed-node-4] 2026-02-08 03:41:22.875529 | orchestrator | 2026-02-08 03:41:22.875534 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2026-02-08 03:41:22.875539 | orchestrator | Sunday 08 February 2026 03:41:18 +0000 (0:00:00.176) 0:00:28.505 ******* 2026-02-08 03:41:22.875544 | orchestrator | ok: [testbed-node-4] 2026-02-08 03:41:22.875549 | orchestrator | 2026-02-08 03:41:22.875554 | orchestrator | TASK [Set DB devices config data] ********************************************** 2026-02-08 03:41:22.875559 | orchestrator | Sunday 08 February 2026 03:41:18 +0000 (0:00:00.145) 0:00:28.651 ******* 2026-02-08 03:41:22.875575 | orchestrator | skipping: [testbed-node-4] 2026-02-08 03:41:22.875581 | orchestrator | 2026-02-08 03:41:22.875586 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2026-02-08 03:41:22.875591 | orchestrator | Sunday 08 February 2026 03:41:19 +0000 (0:00:00.372) 0:00:29.023 ******* 2026-02-08 03:41:22.875596 | orchestrator | skipping: [testbed-node-4] 2026-02-08 03:41:22.875601 | orchestrator | 2026-02-08 03:41:22.875606 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2026-02-08 03:41:22.875611 | orchestrator | Sunday 08 February 2026 03:41:19 +0000 (0:00:00.144) 0:00:29.167 ******* 2026-02-08 03:41:22.875616 | orchestrator | skipping: [testbed-node-4] 2026-02-08 03:41:22.875621 | orchestrator | 2026-02-08 03:41:22.875626 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2026-02-08 03:41:22.875631 | orchestrator | Sunday 08 February 2026 03:41:19 +0000 
(0:00:00.154) 0:00:29.322 ******* 2026-02-08 03:41:22.875636 | orchestrator | ok: [testbed-node-4] => { 2026-02-08 03:41:22.875641 | orchestrator |  "ceph_osd_devices": { 2026-02-08 03:41:22.875646 | orchestrator |  "sdb": { 2026-02-08 03:41:22.875651 | orchestrator |  "osd_lvm_uuid": "1f36c880-548c-5a66-856f-2c4e799d94fc" 2026-02-08 03:41:22.875656 | orchestrator |  }, 2026-02-08 03:41:22.875661 | orchestrator |  "sdc": { 2026-02-08 03:41:22.875666 | orchestrator |  "osd_lvm_uuid": "98a4cb59-dd7a-5ec9-b94d-174a40339046" 2026-02-08 03:41:22.875671 | orchestrator |  } 2026-02-08 03:41:22.875676 | orchestrator |  } 2026-02-08 03:41:22.875684 | orchestrator | } 2026-02-08 03:41:22.875694 | orchestrator | 2026-02-08 03:41:22.875700 | orchestrator | TASK [Print WAL devices] ******************************************************* 2026-02-08 03:41:22.875705 | orchestrator | Sunday 08 February 2026 03:41:19 +0000 (0:00:00.143) 0:00:29.465 ******* 2026-02-08 03:41:22.875710 | orchestrator | skipping: [testbed-node-4] 2026-02-08 03:41:22.875715 | orchestrator | 2026-02-08 03:41:22.875720 | orchestrator | TASK [Print DB devices] ******************************************************** 2026-02-08 03:41:22.875725 | orchestrator | Sunday 08 February 2026 03:41:19 +0000 (0:00:00.147) 0:00:29.613 ******* 2026-02-08 03:41:22.875730 | orchestrator | skipping: [testbed-node-4] 2026-02-08 03:41:22.875735 | orchestrator | 2026-02-08 03:41:22.875740 | orchestrator | TASK [Print shared DB/WAL devices] ********************************************* 2026-02-08 03:41:22.875745 | orchestrator | Sunday 08 February 2026 03:41:20 +0000 (0:00:00.137) 0:00:29.750 ******* 2026-02-08 03:41:22.875750 | orchestrator | skipping: [testbed-node-4] 2026-02-08 03:41:22.875755 | orchestrator | 2026-02-08 03:41:22.875760 | orchestrator | TASK [Print configuration data] ************************************************ 2026-02-08 03:41:22.875765 | orchestrator | Sunday 08 February 2026 03:41:20 +0000 
(0:00:00.152) 0:00:29.902 ******* 2026-02-08 03:41:22.875770 | orchestrator | changed: [testbed-node-4] => { 2026-02-08 03:41:22.875775 | orchestrator |  "_ceph_configure_lvm_config_data": { 2026-02-08 03:41:22.875780 | orchestrator |  "ceph_osd_devices": { 2026-02-08 03:41:22.875785 | orchestrator |  "sdb": { 2026-02-08 03:41:22.875790 | orchestrator |  "osd_lvm_uuid": "1f36c880-548c-5a66-856f-2c4e799d94fc" 2026-02-08 03:41:22.875795 | orchestrator |  }, 2026-02-08 03:41:22.875800 | orchestrator |  "sdc": { 2026-02-08 03:41:22.875805 | orchestrator |  "osd_lvm_uuid": "98a4cb59-dd7a-5ec9-b94d-174a40339046" 2026-02-08 03:41:22.875810 | orchestrator |  } 2026-02-08 03:41:22.875815 | orchestrator |  }, 2026-02-08 03:41:22.875820 | orchestrator |  "lvm_volumes": [ 2026-02-08 03:41:22.875825 | orchestrator |  { 2026-02-08 03:41:22.875830 | orchestrator |  "data": "osd-block-1f36c880-548c-5a66-856f-2c4e799d94fc", 2026-02-08 03:41:22.875836 | orchestrator |  "data_vg": "ceph-1f36c880-548c-5a66-856f-2c4e799d94fc" 2026-02-08 03:41:22.875841 | orchestrator |  }, 2026-02-08 03:41:22.875845 | orchestrator |  { 2026-02-08 03:41:22.875850 | orchestrator |  "data": "osd-block-98a4cb59-dd7a-5ec9-b94d-174a40339046", 2026-02-08 03:41:22.875855 | orchestrator |  "data_vg": "ceph-98a4cb59-dd7a-5ec9-b94d-174a40339046" 2026-02-08 03:41:22.875860 | orchestrator |  } 2026-02-08 03:41:22.875865 | orchestrator |  ] 2026-02-08 03:41:22.875871 | orchestrator |  } 2026-02-08 03:41:22.875895 | orchestrator | } 2026-02-08 03:41:22.875900 | orchestrator | 2026-02-08 03:41:22.875905 | orchestrator | RUNNING HANDLER [Write configuration file] ************************************* 2026-02-08 03:41:22.875910 | orchestrator | Sunday 08 February 2026 03:41:20 +0000 (0:00:00.215) 0:00:30.117 ******* 2026-02-08 03:41:22.875915 | orchestrator | changed: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2026-02-08 03:41:22.875920 | orchestrator | 2026-02-08 03:41:22.875926 | orchestrator | PLAY [Ceph 
configure LVM] ****************************************************** 2026-02-08 03:41:22.875931 | orchestrator | 2026-02-08 03:41:22.875935 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2026-02-08 03:41:22.875940 | orchestrator | Sunday 08 February 2026 03:41:21 +0000 (0:00:01.412) 0:00:31.530 ******* 2026-02-08 03:41:22.875946 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2026-02-08 03:41:22.875951 | orchestrator | 2026-02-08 03:41:22.875956 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2026-02-08 03:41:22.875961 | orchestrator | Sunday 08 February 2026 03:41:22 +0000 (0:00:00.300) 0:00:31.831 ******* 2026-02-08 03:41:22.875966 | orchestrator | ok: [testbed-node-5] 2026-02-08 03:41:22.875971 | orchestrator | 2026-02-08 03:41:22.875976 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-08 03:41:22.875981 | orchestrator | Sunday 08 February 2026 03:41:22 +0000 (0:00:00.263) 0:00:32.094 ******* 2026-02-08 03:41:22.875990 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0) 2026-02-08 03:41:22.875995 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1) 2026-02-08 03:41:22.876000 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2) 2026-02-08 03:41:22.876005 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3) 2026-02-08 03:41:22.876010 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4) 2026-02-08 03:41:22.876018 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5) 2026-02-08 03:41:31.985570 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6) 2026-02-08 03:41:31.985675 
| orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7) 2026-02-08 03:41:31.985689 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda) 2026-02-08 03:41:31.985699 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb) 2026-02-08 03:41:31.985708 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc) 2026-02-08 03:41:31.985717 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd) 2026-02-08 03:41:31.985726 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0) 2026-02-08 03:41:31.985735 | orchestrator | 2026-02-08 03:41:31.985745 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-08 03:41:31.985754 | orchestrator | Sunday 08 February 2026 03:41:22 +0000 (0:00:00.427) 0:00:32.521 ******* 2026-02-08 03:41:31.985763 | orchestrator | skipping: [testbed-node-5] 2026-02-08 03:41:31.985773 | orchestrator | 2026-02-08 03:41:31.985798 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-08 03:41:31.985807 | orchestrator | Sunday 08 February 2026 03:41:23 +0000 (0:00:00.226) 0:00:32.748 ******* 2026-02-08 03:41:31.985816 | orchestrator | skipping: [testbed-node-5] 2026-02-08 03:41:31.985824 | orchestrator | 2026-02-08 03:41:31.985833 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-08 03:41:31.985841 | orchestrator | Sunday 08 February 2026 03:41:23 +0000 (0:00:00.219) 0:00:32.968 ******* 2026-02-08 03:41:31.985850 | orchestrator | skipping: [testbed-node-5] 2026-02-08 03:41:31.985859 | orchestrator | 2026-02-08 03:41:31.985867 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-08 03:41:31.985876 | 
orchestrator | Sunday 08 February 2026 03:41:23 +0000 (0:00:00.207) 0:00:33.175 ******* 2026-02-08 03:41:31.985939 | orchestrator | skipping: [testbed-node-5] 2026-02-08 03:41:31.985949 | orchestrator | 2026-02-08 03:41:31.985958 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-08 03:41:31.985967 | orchestrator | Sunday 08 February 2026 03:41:23 +0000 (0:00:00.238) 0:00:33.413 ******* 2026-02-08 03:41:31.985975 | orchestrator | skipping: [testbed-node-5] 2026-02-08 03:41:31.985991 | orchestrator | 2026-02-08 03:41:31.986012 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-08 03:41:31.986092 | orchestrator | Sunday 08 February 2026 03:41:23 +0000 (0:00:00.221) 0:00:33.634 ******* 2026-02-08 03:41:31.986109 | orchestrator | skipping: [testbed-node-5] 2026-02-08 03:41:31.986126 | orchestrator | 2026-02-08 03:41:31.986142 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-08 03:41:31.986157 | orchestrator | Sunday 08 February 2026 03:41:24 +0000 (0:00:00.224) 0:00:33.859 ******* 2026-02-08 03:41:31.986178 | orchestrator | skipping: [testbed-node-5] 2026-02-08 03:41:31.986194 | orchestrator | 2026-02-08 03:41:31.986209 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-08 03:41:31.986220 | orchestrator | Sunday 08 February 2026 03:41:24 +0000 (0:00:00.726) 0:00:34.585 ******* 2026-02-08 03:41:31.986255 | orchestrator | skipping: [testbed-node-5] 2026-02-08 03:41:31.986265 | orchestrator | 2026-02-08 03:41:31.986275 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-08 03:41:31.986285 | orchestrator | Sunday 08 February 2026 03:41:25 +0000 (0:00:00.220) 0:00:34.806 ******* 2026-02-08 03:41:31.986294 | orchestrator | ok: [testbed-node-5] => 
(item=scsi-0QEMU_QEMU_HARDDISK_0b6d2541-fe07-44e8-aadf-a529695f9c1d) 2026-02-08 03:41:31.986306 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_0b6d2541-fe07-44e8-aadf-a529695f9c1d) 2026-02-08 03:41:31.986315 | orchestrator | 2026-02-08 03:41:31.986325 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-08 03:41:31.986336 | orchestrator | Sunday 08 February 2026 03:41:25 +0000 (0:00:00.492) 0:00:35.298 ******* 2026-02-08 03:41:31.986346 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_fd096023-3e18-4205-a743-fc49c7d9ed02) 2026-02-08 03:41:31.986356 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_fd096023-3e18-4205-a743-fc49c7d9ed02) 2026-02-08 03:41:31.986366 | orchestrator | 2026-02-08 03:41:31.986376 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-08 03:41:31.986386 | orchestrator | Sunday 08 February 2026 03:41:26 +0000 (0:00:00.462) 0:00:35.761 ******* 2026-02-08 03:41:31.986395 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_88e353e1-d5f5-455b-9174-972f0fde258a) 2026-02-08 03:41:31.986406 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_88e353e1-d5f5-455b-9174-972f0fde258a) 2026-02-08 03:41:31.986416 | orchestrator | 2026-02-08 03:41:31.986426 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-08 03:41:31.986436 | orchestrator | Sunday 08 February 2026 03:41:26 +0000 (0:00:00.472) 0:00:36.233 ******* 2026-02-08 03:41:31.986446 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_380fccde-fc16-4afd-8581-e221e230c62f) 2026-02-08 03:41:31.986456 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_380fccde-fc16-4afd-8581-e221e230c62f) 2026-02-08 03:41:31.986467 | orchestrator | 2026-02-08 03:41:31.986477 | orchestrator | TASK [Add known links to 
the list of available block devices] ****************** 2026-02-08 03:41:31.986487 | orchestrator | Sunday 08 February 2026 03:41:27 +0000 (0:00:00.495) 0:00:36.728 ******* 2026-02-08 03:41:31.986495 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001) 2026-02-08 03:41:31.986504 | orchestrator | 2026-02-08 03:41:31.986512 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-08 03:41:31.986537 | orchestrator | Sunday 08 February 2026 03:41:27 +0000 (0:00:00.392) 0:00:37.121 ******* 2026-02-08 03:41:31.986546 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0) 2026-02-08 03:41:31.986555 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1) 2026-02-08 03:41:31.986563 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2) 2026-02-08 03:41:31.986572 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3) 2026-02-08 03:41:31.986580 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4) 2026-02-08 03:41:31.986589 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5) 2026-02-08 03:41:31.986597 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6) 2026-02-08 03:41:31.986606 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7) 2026-02-08 03:41:31.986614 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda) 2026-02-08 03:41:31.986630 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb) 2026-02-08 03:41:31.986638 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc) 
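Aside, for readers following the Ceph LVM configuration plays above: the `lvm_volumes` entries printed under "Print configuration data" appear to be derived mechanically from each OSD device's `osd_lvm_uuid` (block-only layout, since the db/wal variants were skipped). A minimal sketch of that mapping in Python — `build_lvm_volumes` is a hypothetical helper for illustration, not the playbook's actual Jinja2 templating:

```python
# Sketch: derive the block-only lvm_volumes list from ceph_osd_devices,
# mirroring the structure printed in the log above. The "osd-block-" and
# "ceph-" name prefixes match the printed output; the helper itself is
# an assumption, not code from the OSISM playbooks.

def build_lvm_volumes(ceph_osd_devices):
    volumes = []
    for device, config in sorted(ceph_osd_devices.items()):
        uuid = config["osd_lvm_uuid"]
        volumes.append({
            "data": f"osd-block-{uuid}",   # logical volume name
            "data_vg": f"ceph-{uuid}",     # volume group name
        })
    return volumes


# Example with the UUIDs the log shows for testbed-node-4:
devices = {
    "sdb": {"osd_lvm_uuid": "1f36c880-548c-5a66-856f-2c4e799d94fc"},
    "sdc": {"osd_lvm_uuid": "98a4cb59-dd7a-5ec9-b94d-174a40339046"},
}
print(build_lvm_volumes(devices))
```

Feeding in the two UUIDs from testbed-node-4 reproduces the `lvm_volumes` list shown in the "Print configuration data" task output.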
2026-02-08 03:41:31.986654 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd) 2026-02-08 03:41:31.986662 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0) 2026-02-08 03:41:31.986671 | orchestrator | 2026-02-08 03:41:31.986680 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-08 03:41:31.986688 | orchestrator | Sunday 08 February 2026 03:41:27 +0000 (0:00:00.417) 0:00:37.538 ******* 2026-02-08 03:41:31.986697 | orchestrator | skipping: [testbed-node-5] 2026-02-08 03:41:31.986705 | orchestrator | 2026-02-08 03:41:31.986714 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-08 03:41:31.986723 | orchestrator | Sunday 08 February 2026 03:41:28 +0000 (0:00:00.229) 0:00:37.767 ******* 2026-02-08 03:41:31.986731 | orchestrator | skipping: [testbed-node-5] 2026-02-08 03:41:31.986740 | orchestrator | 2026-02-08 03:41:31.986748 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-08 03:41:31.986757 | orchestrator | Sunday 08 February 2026 03:41:28 +0000 (0:00:00.215) 0:00:37.983 ******* 2026-02-08 03:41:31.986765 | orchestrator | skipping: [testbed-node-5] 2026-02-08 03:41:31.986774 | orchestrator | 2026-02-08 03:41:31.986783 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-08 03:41:31.986792 | orchestrator | Sunday 08 February 2026 03:41:29 +0000 (0:00:00.754) 0:00:38.737 ******* 2026-02-08 03:41:31.986800 | orchestrator | skipping: [testbed-node-5] 2026-02-08 03:41:31.986809 | orchestrator | 2026-02-08 03:41:31.986818 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-08 03:41:31.986826 | orchestrator | Sunday 08 February 2026 03:41:29 +0000 (0:00:00.223) 0:00:38.961 ******* 2026-02-08 03:41:31.986835 
| orchestrator | skipping: [testbed-node-5] 2026-02-08 03:41:31.986843 | orchestrator | 2026-02-08 03:41:31.986852 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-08 03:41:31.986861 | orchestrator | Sunday 08 February 2026 03:41:29 +0000 (0:00:00.223) 0:00:39.184 ******* 2026-02-08 03:41:31.986869 | orchestrator | skipping: [testbed-node-5] 2026-02-08 03:41:31.986878 | orchestrator | 2026-02-08 03:41:31.986913 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-08 03:41:31.986922 | orchestrator | Sunday 08 February 2026 03:41:29 +0000 (0:00:00.237) 0:00:39.421 ******* 2026-02-08 03:41:31.986931 | orchestrator | skipping: [testbed-node-5] 2026-02-08 03:41:31.986940 | orchestrator | 2026-02-08 03:41:31.986948 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-08 03:41:31.986957 | orchestrator | Sunday 08 February 2026 03:41:30 +0000 (0:00:00.254) 0:00:39.676 ******* 2026-02-08 03:41:31.986965 | orchestrator | skipping: [testbed-node-5] 2026-02-08 03:41:31.986974 | orchestrator | 2026-02-08 03:41:31.986982 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-08 03:41:31.986991 | orchestrator | Sunday 08 February 2026 03:41:30 +0000 (0:00:00.293) 0:00:39.969 ******* 2026-02-08 03:41:31.987000 | orchestrator | ok: [testbed-node-5] => (item=sda1) 2026-02-08 03:41:31.987009 | orchestrator | ok: [testbed-node-5] => (item=sda14) 2026-02-08 03:41:31.987018 | orchestrator | ok: [testbed-node-5] => (item=sda15) 2026-02-08 03:41:31.987027 | orchestrator | ok: [testbed-node-5] => (item=sda16) 2026-02-08 03:41:31.987035 | orchestrator | 2026-02-08 03:41:31.987044 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-08 03:41:31.987052 | orchestrator | Sunday 08 February 2026 03:41:31 +0000 (0:00:00.721) 
0:00:40.691 ******* 2026-02-08 03:41:31.987061 | orchestrator | skipping: [testbed-node-5] 2026-02-08 03:41:31.987070 | orchestrator | 2026-02-08 03:41:31.987078 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-08 03:41:31.987087 | orchestrator | Sunday 08 February 2026 03:41:31 +0000 (0:00:00.250) 0:00:40.942 ******* 2026-02-08 03:41:31.987095 | orchestrator | skipping: [testbed-node-5] 2026-02-08 03:41:31.987104 | orchestrator | 2026-02-08 03:41:31.987113 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-08 03:41:31.987127 | orchestrator | Sunday 08 February 2026 03:41:31 +0000 (0:00:00.235) 0:00:41.178 ******* 2026-02-08 03:41:31.987136 | orchestrator | skipping: [testbed-node-5] 2026-02-08 03:41:31.987144 | orchestrator | 2026-02-08 03:41:31.987158 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-08 03:41:31.987172 | orchestrator | Sunday 08 February 2026 03:41:31 +0000 (0:00:00.236) 0:00:41.415 ******* 2026-02-08 03:41:31.987187 | orchestrator | skipping: [testbed-node-5] 2026-02-08 03:41:31.987201 | orchestrator | 2026-02-08 03:41:31.987223 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2026-02-08 03:41:36.677390 | orchestrator | Sunday 08 February 2026 03:41:31 +0000 (0:00:00.216) 0:00:41.632 ******* 2026-02-08 03:41:36.677487 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': None}) 2026-02-08 03:41:36.677501 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': None}) 2026-02-08 03:41:36.677512 | orchestrator | 2026-02-08 03:41:36.677524 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2026-02-08 03:41:36.677534 | orchestrator | Sunday 08 February 2026 03:41:32 +0000 (0:00:00.421) 0:00:42.054 ******* 2026-02-08 03:41:36.677544 | orchestrator | skipping: 
[testbed-node-5] 2026-02-08 03:41:36.677554 | orchestrator | 2026-02-08 03:41:36.677564 | orchestrator | TASK [Generate DB VG names] **************************************************** 2026-02-08 03:41:36.677574 | orchestrator | Sunday 08 February 2026 03:41:32 +0000 (0:00:00.151) 0:00:42.205 ******* 2026-02-08 03:41:36.677584 | orchestrator | skipping: [testbed-node-5] 2026-02-08 03:41:36.677594 | orchestrator | 2026-02-08 03:41:36.677603 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2026-02-08 03:41:36.677613 | orchestrator | Sunday 08 February 2026 03:41:32 +0000 (0:00:00.155) 0:00:42.360 ******* 2026-02-08 03:41:36.677623 | orchestrator | skipping: [testbed-node-5] 2026-02-08 03:41:36.677632 | orchestrator | 2026-02-08 03:41:36.677658 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2026-02-08 03:41:36.677668 | orchestrator | Sunday 08 February 2026 03:41:32 +0000 (0:00:00.162) 0:00:42.523 ******* 2026-02-08 03:41:36.677678 | orchestrator | ok: [testbed-node-5] 2026-02-08 03:41:36.677688 | orchestrator | 2026-02-08 03:41:36.677698 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2026-02-08 03:41:36.677707 | orchestrator | Sunday 08 February 2026 03:41:33 +0000 (0:00:00.143) 0:00:42.666 ******* 2026-02-08 03:41:36.677717 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '7ad89cb8-326d-5a7d-8045-6e04c12be05a'}}) 2026-02-08 03:41:36.677734 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'b3e05e81-e469-5668-9a53-5e8f92025307'}}) 2026-02-08 03:41:36.677752 | orchestrator | 2026-02-08 03:41:36.677768 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2026-02-08 03:41:36.677786 | orchestrator | Sunday 08 February 2026 03:41:33 +0000 (0:00:00.170) 0:00:42.837 ******* 2026-02-08 03:41:36.677798 | 
orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '7ad89cb8-326d-5a7d-8045-6e04c12be05a'}})  2026-02-08 03:41:36.677810 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'b3e05e81-e469-5668-9a53-5e8f92025307'}})  2026-02-08 03:41:36.677820 | orchestrator | skipping: [testbed-node-5] 2026-02-08 03:41:36.677830 | orchestrator | 2026-02-08 03:41:36.677839 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2026-02-08 03:41:36.677849 | orchestrator | Sunday 08 February 2026 03:41:33 +0000 (0:00:00.167) 0:00:43.004 ******* 2026-02-08 03:41:36.677858 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '7ad89cb8-326d-5a7d-8045-6e04c12be05a'}})  2026-02-08 03:41:36.677868 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'b3e05e81-e469-5668-9a53-5e8f92025307'}})  2026-02-08 03:41:36.677878 | orchestrator | skipping: [testbed-node-5] 2026-02-08 03:41:36.677939 | orchestrator | 2026-02-08 03:41:36.677953 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2026-02-08 03:41:36.677965 | orchestrator | Sunday 08 February 2026 03:41:33 +0000 (0:00:00.157) 0:00:43.162 ******* 2026-02-08 03:41:36.677976 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '7ad89cb8-326d-5a7d-8045-6e04c12be05a'}})  2026-02-08 03:41:36.677988 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'b3e05e81-e469-5668-9a53-5e8f92025307'}})  2026-02-08 03:41:36.677999 | orchestrator | skipping: [testbed-node-5] 2026-02-08 03:41:36.678011 | orchestrator | 2026-02-08 03:41:36.678081 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2026-02-08 03:41:36.678100 | orchestrator | Sunday 08 February 2026 03:41:33 +0000 
(0:00:00.179) 0:00:43.341 ******* 2026-02-08 03:41:36.678118 | orchestrator | ok: [testbed-node-5] 2026-02-08 03:41:36.678136 | orchestrator | 2026-02-08 03:41:36.678153 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2026-02-08 03:41:36.678164 | orchestrator | Sunday 08 February 2026 03:41:33 +0000 (0:00:00.152) 0:00:43.493 ******* 2026-02-08 03:41:36.678178 | orchestrator | ok: [testbed-node-5] 2026-02-08 03:41:36.678234 | orchestrator | 2026-02-08 03:41:36.678254 | orchestrator | TASK [Set DB devices config data] ********************************************** 2026-02-08 03:41:36.678271 | orchestrator | Sunday 08 February 2026 03:41:33 +0000 (0:00:00.148) 0:00:43.642 ******* 2026-02-08 03:41:36.678288 | orchestrator | skipping: [testbed-node-5] 2026-02-08 03:41:36.678304 | orchestrator | 2026-02-08 03:41:36.678320 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2026-02-08 03:41:36.678338 | orchestrator | Sunday 08 February 2026 03:41:34 +0000 (0:00:00.418) 0:00:44.060 ******* 2026-02-08 03:41:36.678355 | orchestrator | skipping: [testbed-node-5] 2026-02-08 03:41:36.678371 | orchestrator | 2026-02-08 03:41:36.678387 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2026-02-08 03:41:36.678404 | orchestrator | Sunday 08 February 2026 03:41:34 +0000 (0:00:00.151) 0:00:44.212 ******* 2026-02-08 03:41:36.678421 | orchestrator | skipping: [testbed-node-5] 2026-02-08 03:41:36.678438 | orchestrator | 2026-02-08 03:41:36.678455 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2026-02-08 03:41:36.678472 | orchestrator | Sunday 08 February 2026 03:41:34 +0000 (0:00:00.150) 0:00:44.362 ******* 2026-02-08 03:41:36.678490 | orchestrator | ok: [testbed-node-5] => { 2026-02-08 03:41:36.678508 | orchestrator |  "ceph_osd_devices": { 2026-02-08 03:41:36.678524 | orchestrator |  "sdb": { 
2026-02-08 03:41:36.678567 | orchestrator |  "osd_lvm_uuid": "7ad89cb8-326d-5a7d-8045-6e04c12be05a" 2026-02-08 03:41:36.678584 | orchestrator |  }, 2026-02-08 03:41:36.678601 | orchestrator |  "sdc": { 2026-02-08 03:41:36.678619 | orchestrator |  "osd_lvm_uuid": "b3e05e81-e469-5668-9a53-5e8f92025307" 2026-02-08 03:41:36.678637 | orchestrator |  } 2026-02-08 03:41:36.678653 | orchestrator |  } 2026-02-08 03:41:36.678670 | orchestrator | } 2026-02-08 03:41:36.678687 | orchestrator | 2026-02-08 03:41:36.678704 | orchestrator | TASK [Print WAL devices] ******************************************************* 2026-02-08 03:41:36.678720 | orchestrator | Sunday 08 February 2026 03:41:34 +0000 (0:00:00.176) 0:00:44.539 ******* 2026-02-08 03:41:36.678738 | orchestrator | skipping: [testbed-node-5] 2026-02-08 03:41:36.678756 | orchestrator | 2026-02-08 03:41:36.678772 | orchestrator | TASK [Print DB devices] ******************************************************** 2026-02-08 03:41:36.678787 | orchestrator | Sunday 08 February 2026 03:41:35 +0000 (0:00:00.148) 0:00:44.687 ******* 2026-02-08 03:41:36.678805 | orchestrator | skipping: [testbed-node-5] 2026-02-08 03:41:36.678822 | orchestrator | 2026-02-08 03:41:36.678837 | orchestrator | TASK [Print shared DB/WAL devices] ********************************************* 2026-02-08 03:41:36.678854 | orchestrator | Sunday 08 February 2026 03:41:35 +0000 (0:00:00.153) 0:00:44.841 ******* 2026-02-08 03:41:36.678871 | orchestrator | skipping: [testbed-node-5] 2026-02-08 03:41:36.678911 | orchestrator | 2026-02-08 03:41:36.678951 | orchestrator | TASK [Print configuration data] ************************************************ 2026-02-08 03:41:36.678971 | orchestrator | Sunday 08 February 2026 03:41:35 +0000 (0:00:00.150) 0:00:44.991 ******* 2026-02-08 03:41:36.678988 | orchestrator | changed: [testbed-node-5] => { 2026-02-08 03:41:36.679005 | orchestrator |  "_ceph_configure_lvm_config_data": { 2026-02-08 03:41:36.679040 | orchestrator | 
 "ceph_osd_devices": { 2026-02-08 03:41:36.679072 | orchestrator |  "sdb": { 2026-02-08 03:41:36.679089 | orchestrator |  "osd_lvm_uuid": "7ad89cb8-326d-5a7d-8045-6e04c12be05a" 2026-02-08 03:41:36.679106 | orchestrator |  }, 2026-02-08 03:41:36.679124 | orchestrator |  "sdc": { 2026-02-08 03:41:36.679141 | orchestrator |  "osd_lvm_uuid": "b3e05e81-e469-5668-9a53-5e8f92025307" 2026-02-08 03:41:36.679158 | orchestrator |  } 2026-02-08 03:41:36.679175 | orchestrator |  }, 2026-02-08 03:41:36.679192 | orchestrator |  "lvm_volumes": [ 2026-02-08 03:41:36.679210 | orchestrator |  { 2026-02-08 03:41:36.679231 | orchestrator |  "data": "osd-block-7ad89cb8-326d-5a7d-8045-6e04c12be05a", 2026-02-08 03:41:36.679249 | orchestrator |  "data_vg": "ceph-7ad89cb8-326d-5a7d-8045-6e04c12be05a" 2026-02-08 03:41:36.679265 | orchestrator |  }, 2026-02-08 03:41:36.679283 | orchestrator |  { 2026-02-08 03:41:36.679299 | orchestrator |  "data": "osd-block-b3e05e81-e469-5668-9a53-5e8f92025307", 2026-02-08 03:41:36.679315 | orchestrator |  "data_vg": "ceph-b3e05e81-e469-5668-9a53-5e8f92025307" 2026-02-08 03:41:36.679332 | orchestrator |  } 2026-02-08 03:41:36.679348 | orchestrator |  ] 2026-02-08 03:41:36.679366 | orchestrator |  } 2026-02-08 03:41:36.679382 | orchestrator | } 2026-02-08 03:41:36.679399 | orchestrator | 2026-02-08 03:41:36.679409 | orchestrator | RUNNING HANDLER [Write configuration file] ************************************* 2026-02-08 03:41:36.679418 | orchestrator | Sunday 08 February 2026 03:41:35 +0000 (0:00:00.246) 0:00:45.238 ******* 2026-02-08 03:41:36.679428 | orchestrator | changed: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2026-02-08 03:41:36.679438 | orchestrator | 2026-02-08 03:41:36.679447 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-08 03:41:36.679457 | orchestrator | testbed-node-3 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2026-02-08 03:41:36.679468 | 
orchestrator | testbed-node-4 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2026-02-08 03:41:36.679477 | orchestrator | testbed-node-5 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2026-02-08 03:41:36.679487 | orchestrator | 2026-02-08 03:41:36.679496 | orchestrator | 2026-02-08 03:41:36.679506 | orchestrator | 2026-02-08 03:41:36.679515 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-08 03:41:36.679525 | orchestrator | Sunday 08 February 2026 03:41:36 +0000 (0:00:01.066) 0:00:46.305 ******* 2026-02-08 03:41:36.679534 | orchestrator | =============================================================================== 2026-02-08 03:41:36.679543 | orchestrator | Write configuration file ------------------------------------------------ 4.36s 2026-02-08 03:41:36.679553 | orchestrator | Add known links to the list of available block devices ------------------ 1.46s 2026-02-08 03:41:36.679562 | orchestrator | Add known partitions to the list of available block devices ------------- 1.30s 2026-02-08 03:41:36.679571 | orchestrator | Add known partitions to the list of available block devices ------------- 1.22s 2026-02-08 03:41:36.679581 | orchestrator | Add known links to the list of available block devices ------------------ 0.96s 2026-02-08 03:41:36.679590 | orchestrator | Add known partitions to the list of available block devices ------------- 0.95s 2026-02-08 03:41:36.679599 | orchestrator | Set DB devices config data ---------------------------------------------- 0.94s 2026-02-08 03:41:36.679612 | orchestrator | Print configuration data ------------------------------------------------ 0.92s 2026-02-08 03:41:36.679638 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.90s 2026-02-08 03:41:36.679656 | orchestrator | Get initial list of available block devices ----------------------------- 0.85s 2026-02-08 
03:41:36.679671 | orchestrator | Set UUIDs for OSD VGs/LVs ----------------------------------------------- 0.83s 2026-02-08 03:41:36.679688 | orchestrator | Add known partitions to the list of available block devices ------------- 0.77s 2026-02-08 03:41:36.679705 | orchestrator | Add known partitions to the list of available block devices ------------- 0.75s 2026-02-08 03:41:36.679734 | orchestrator | Add known links to the list of available block devices ------------------ 0.73s 2026-02-08 03:41:37.134483 | orchestrator | Add known partitions to the list of available block devices ------------- 0.72s 2026-02-08 03:41:37.135735 | orchestrator | Generate lvm_volumes structure (block + db) ----------------------------- 0.71s 2026-02-08 03:41:37.135811 | orchestrator | Add known partitions to the list of available block devices ------------- 0.70s 2026-02-08 03:41:37.135825 | orchestrator | Add known links to the list of available block devices ------------------ 0.69s 2026-02-08 03:41:37.135839 | orchestrator | Add known links to the list of available block devices ------------------ 0.69s 2026-02-08 03:41:37.135851 | orchestrator | Add known links to the list of available block devices ------------------ 0.67s 2026-02-08 03:41:59.683495 | orchestrator | 2026-02-08 03:41:59 | INFO  | Task 17ea4938-eec8-4063-984d-1d403868b4f3 (sync inventory) is running in background. Output coming soon. 
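For reference, the mapping performed by the "Compile lvm_volumes" task above can be reconstructed from the configuration data the play prints: each entry in `ceph_osd_devices` carries an `osd_lvm_uuid`, and the resulting `lvm_volumes` entry names the LV `osd-block-<uuid>` inside the VG `ceph-<uuid>`. This is a minimal Python sketch inferred only from the log output, not the playbook's actual implementation:

```python
def compile_lvm_volumes(ceph_osd_devices):
    """Derive lvm_volumes entries from ceph_osd_devices.

    Naming convention (inferred from the printed configuration data, an
    assumption about the playbook's internals): the block LV is named
    osd-block-<osd_lvm_uuid> and lives in the VG ceph-<osd_lvm_uuid>.
    """
    return [
        {
            "data": f"osd-block-{dev['osd_lvm_uuid']}",
            "data_vg": f"ceph-{dev['osd_lvm_uuid']}",
        }
        for dev in ceph_osd_devices.values()
    ]

# The devices reported for testbed-node-5 in the log above:
ceph_osd_devices = {
    "sdb": {"osd_lvm_uuid": "7ad89cb8-326d-5a7d-8045-6e04c12be05a"},
    "sdc": {"osd_lvm_uuid": "b3e05e81-e469-5668-9a53-5e8f92025307"},
}
print(compile_lvm_volumes(ceph_osd_devices))
```

Running this against the testbed-node-5 devices reproduces the `lvm_volumes` list shown in the "Print configuration data" task output.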
2026-02-08 03:42:28.795113 | orchestrator | 2026-02-08 03:42:01 | INFO  | Starting group_vars file reorganization 2026-02-08 03:42:28.795254 | orchestrator | 2026-02-08 03:42:01 | INFO  | Moved 0 file(s) to their respective directories 2026-02-08 03:42:28.795273 | orchestrator | 2026-02-08 03:42:01 | INFO  | Group_vars file reorganization completed 2026-02-08 03:42:28.795286 | orchestrator | 2026-02-08 03:42:04 | INFO  | Starting variable preparation from inventory 2026-02-08 03:42:28.795297 | orchestrator | 2026-02-08 03:42:07 | INFO  | Writing 050-kolla-ceph-rgw-hosts.yml with ceph_rgw_hosts 2026-02-08 03:42:28.795309 | orchestrator | 2026-02-08 03:42:07 | INFO  | Writing 050-infrastructure-cephclient-mons.yml with cephclient_mons 2026-02-08 03:42:28.795320 | orchestrator | 2026-02-08 03:42:07 | INFO  | Writing 050-ceph-cluster-fsid.yml with ceph_cluster_fsid 2026-02-08 03:42:28.795331 | orchestrator | 2026-02-08 03:42:07 | INFO  | 3 file(s) written, 6 host(s) processed 2026-02-08 03:42:28.795342 | orchestrator | 2026-02-08 03:42:07 | INFO  | Variable preparation completed 2026-02-08 03:42:28.795353 | orchestrator | 2026-02-08 03:42:08 | INFO  | Starting inventory overwrite handling 2026-02-08 03:42:28.795363 | orchestrator | 2026-02-08 03:42:08 | INFO  | Handling group overwrites in 99-overwrite 2026-02-08 03:42:28.795374 | orchestrator | 2026-02-08 03:42:08 | INFO  | Removing group frr:children from 60-generic 2026-02-08 03:42:28.795385 | orchestrator | 2026-02-08 03:42:08 | INFO  | Removing group netbird:children from 50-infrastructure 2026-02-08 03:42:28.795396 | orchestrator | 2026-02-08 03:42:08 | INFO  | Removing group ceph-mds from 50-ceph 2026-02-08 03:42:28.795407 | orchestrator | 2026-02-08 03:42:08 | INFO  | Removing group ceph-rgw from 50-ceph 2026-02-08 03:42:28.795417 | orchestrator | 2026-02-08 03:42:08 | INFO  | Handling group overwrites in 20-roles 2026-02-08 03:42:28.795428 | orchestrator | 2026-02-08 03:42:08 | INFO  | Removing group k3s_node 
from 50-infrastructure 2026-02-08 03:42:28.795439 | orchestrator | 2026-02-08 03:42:08 | INFO  | Removed 5 group(s) in total 2026-02-08 03:42:28.795450 | orchestrator | 2026-02-08 03:42:08 | INFO  | Inventory overwrite handling completed 2026-02-08 03:42:28.795460 | orchestrator | 2026-02-08 03:42:10 | INFO  | Starting merge of inventory files 2026-02-08 03:42:28.795496 | orchestrator | 2026-02-08 03:42:10 | INFO  | Inventory files merged successfully 2026-02-08 03:42:28.795508 | orchestrator | 2026-02-08 03:42:15 | INFO  | Generating ClusterShell configuration from Ansible inventory 2026-02-08 03:42:28.795518 | orchestrator | 2026-02-08 03:42:27 | INFO  | Successfully wrote ClusterShell configuration 2026-02-08 03:42:28.795530 | orchestrator | [master ff74d31] 2026-02-08-03-42 2026-02-08 03:42:28.795542 | orchestrator | 1 file changed, 30 insertions(+), 9 deletions(-) 2026-02-08 03:42:31.391591 | orchestrator | 2026-02-08 03:42:31 | INFO  | Task aa9e8bba-e940-43d0-8359-5f1a4c3a9239 (ceph-create-lvm-devices) was prepared for execution. 2026-02-08 03:42:31.391698 | orchestrator | 2026-02-08 03:42:31 | INFO  | It takes a moment until task aa9e8bba-e940-43d0-8359-5f1a4c3a9239 (ceph-create-lvm-devices) has been started and output is visible here. 
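The ceph-create-lvm-devices play that follows creates one volume group and one block logical volume per OSD data device ("Create block VGs" / "Create block LVs"). A hypothetical shell-level equivalent of those two tasks is sketched below; the play itself presumably drives standard LVM tooling through Ansible modules, and the exact `vgcreate`/`lvcreate` arguments here (including `-l 100%FREE`) are illustrative assumptions, not extracted from the playbook:

```python
def lvm_commands(device, osd_lvm_uuid):
    """Illustrative vgcreate/lvcreate commands for one OSD data device.

    Assumption: one VG named ceph-<uuid> on the raw device, and one LV
    named osd-block-<uuid> consuming the whole VG, matching the names
    that appear in the play's task output.
    """
    vg = f"ceph-{osd_lvm_uuid}"
    lv = f"osd-block-{osd_lvm_uuid}"
    return [
        f"vgcreate {vg} /dev/{device}",        # one VG per OSD data device
        f"lvcreate -l 100%FREE -n {lv} {vg}",  # one block LV filling the VG
    ]

# Example with the first device the play reports for testbed-node-3:
for cmd in lvm_commands("sdb", "658e9559-2696-538a-a0a4-811fe95d0be4"):
    print(cmd)
```

The DB/WAL variants of the tasks are skipped in this run because no `ceph_db_devices`, `ceph_wal_devices`, or `ceph_db_wal_devices` are configured, so only the block VG/LV pair is created per device.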
2026-02-08 03:42:44.116993 | orchestrator | [WARNING]: Collection community.general does not support Ansible version 2026-02-08 03:42:44.117083 | orchestrator | 2.16.14 2026-02-08 03:42:44.117092 | orchestrator | 2026-02-08 03:42:44.117097 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2026-02-08 03:42:44.117103 | orchestrator | 2026-02-08 03:42:44.117107 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2026-02-08 03:42:44.117111 | orchestrator | Sunday 08 February 2026 03:42:36 +0000 (0:00:00.322) 0:00:00.322 ******* 2026-02-08 03:42:44.117116 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-02-08 03:42:44.117120 | orchestrator | 2026-02-08 03:42:44.117124 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2026-02-08 03:42:44.117128 | orchestrator | Sunday 08 February 2026 03:42:36 +0000 (0:00:00.254) 0:00:00.576 ******* 2026-02-08 03:42:44.117132 | orchestrator | ok: [testbed-node-3] 2026-02-08 03:42:44.117137 | orchestrator | 2026-02-08 03:42:44.117141 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-08 03:42:44.117145 | orchestrator | Sunday 08 February 2026 03:42:36 +0000 (0:00:00.247) 0:00:00.824 ******* 2026-02-08 03:42:44.117149 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0) 2026-02-08 03:42:44.117153 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1) 2026-02-08 03:42:44.117156 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2) 2026-02-08 03:42:44.117160 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3) 2026-02-08 03:42:44.117164 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4) 2026-02-08 
03:42:44.117168 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5) 2026-02-08 03:42:44.117171 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6) 2026-02-08 03:42:44.117187 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7) 2026-02-08 03:42:44.117191 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda) 2026-02-08 03:42:44.117195 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb) 2026-02-08 03:42:44.117199 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc) 2026-02-08 03:42:44.117202 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd) 2026-02-08 03:42:44.117206 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0) 2026-02-08 03:42:44.117210 | orchestrator | 2026-02-08 03:42:44.117213 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-08 03:42:44.117217 | orchestrator | Sunday 08 February 2026 03:42:37 +0000 (0:00:00.543) 0:00:01.368 ******* 2026-02-08 03:42:44.117236 | orchestrator | skipping: [testbed-node-3] 2026-02-08 03:42:44.117240 | orchestrator | 2026-02-08 03:42:44.117244 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-08 03:42:44.117248 | orchestrator | Sunday 08 February 2026 03:42:37 +0000 (0:00:00.231) 0:00:01.600 ******* 2026-02-08 03:42:44.117252 | orchestrator | skipping: [testbed-node-3] 2026-02-08 03:42:44.117255 | orchestrator | 2026-02-08 03:42:44.117259 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-08 03:42:44.117263 | orchestrator | Sunday 08 February 2026 03:42:37 +0000 (0:00:00.228) 0:00:01.828 ******* 2026-02-08 
03:42:44.117266 | orchestrator | skipping: [testbed-node-3] 2026-02-08 03:42:44.117270 | orchestrator | 2026-02-08 03:42:44.117274 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-08 03:42:44.117277 | orchestrator | Sunday 08 February 2026 03:42:37 +0000 (0:00:00.207) 0:00:02.036 ******* 2026-02-08 03:42:44.117281 | orchestrator | skipping: [testbed-node-3] 2026-02-08 03:42:44.117285 | orchestrator | 2026-02-08 03:42:44.117288 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-08 03:42:44.117292 | orchestrator | Sunday 08 February 2026 03:42:37 +0000 (0:00:00.220) 0:00:02.256 ******* 2026-02-08 03:42:44.117296 | orchestrator | skipping: [testbed-node-3] 2026-02-08 03:42:44.117299 | orchestrator | 2026-02-08 03:42:44.117303 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-08 03:42:44.117307 | orchestrator | Sunday 08 February 2026 03:42:38 +0000 (0:00:00.239) 0:00:02.495 ******* 2026-02-08 03:42:44.117310 | orchestrator | skipping: [testbed-node-3] 2026-02-08 03:42:44.117314 | orchestrator | 2026-02-08 03:42:44.117317 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-08 03:42:44.117322 | orchestrator | Sunday 08 February 2026 03:42:38 +0000 (0:00:00.218) 0:00:02.713 ******* 2026-02-08 03:42:44.117325 | orchestrator | skipping: [testbed-node-3] 2026-02-08 03:42:44.117329 | orchestrator | 2026-02-08 03:42:44.117333 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-08 03:42:44.117336 | orchestrator | Sunday 08 February 2026 03:42:38 +0000 (0:00:00.220) 0:00:02.934 ******* 2026-02-08 03:42:44.117340 | orchestrator | skipping: [testbed-node-3] 2026-02-08 03:42:44.117344 | orchestrator | 2026-02-08 03:42:44.117347 | orchestrator | TASK [Add known links to the list of available block devices] 
****************** 2026-02-08 03:42:44.117351 | orchestrator | Sunday 08 February 2026 03:42:38 +0000 (0:00:00.229) 0:00:03.163 ******* 2026-02-08 03:42:44.117355 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_8eb95c7e-79eb-481c-a9c3-b8351915337f) 2026-02-08 03:42:44.117368 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_8eb95c7e-79eb-481c-a9c3-b8351915337f) 2026-02-08 03:42:44.117372 | orchestrator | 2026-02-08 03:42:44.117375 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-08 03:42:44.117396 | orchestrator | Sunday 08 February 2026 03:42:39 +0000 (0:00:00.479) 0:00:03.643 ******* 2026-02-08 03:42:44.117401 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_f936cccd-0c4c-4cd7-b507-1bacbfb024c1) 2026-02-08 03:42:44.117404 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_f936cccd-0c4c-4cd7-b507-1bacbfb024c1) 2026-02-08 03:42:44.117408 | orchestrator | 2026-02-08 03:42:44.117412 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-08 03:42:44.117415 | orchestrator | Sunday 08 February 2026 03:42:40 +0000 (0:00:00.729) 0:00:04.372 ******* 2026-02-08 03:42:44.117419 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_f64e84f9-05a0-4abf-b38a-86e604a2541e) 2026-02-08 03:42:44.117423 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_f64e84f9-05a0-4abf-b38a-86e604a2541e) 2026-02-08 03:42:44.117427 | orchestrator | 2026-02-08 03:42:44.117430 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-08 03:42:44.117434 | orchestrator | Sunday 08 February 2026 03:42:40 +0000 (0:00:00.751) 0:00:05.124 ******* 2026-02-08 03:42:44.117438 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_1b3b2ead-9b22-4b4d-a30d-f81b3b57c055) 2026-02-08 03:42:44.117445 | orchestrator | 
ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_1b3b2ead-9b22-4b4d-a30d-f81b3b57c055) 2026-02-08 03:42:44.117449 | orchestrator | 2026-02-08 03:42:44.117453 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-08 03:42:44.117457 | orchestrator | Sunday 08 February 2026 03:42:41 +0000 (0:00:00.912) 0:00:06.036 ******* 2026-02-08 03:42:44.117460 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001) 2026-02-08 03:42:44.117464 | orchestrator | 2026-02-08 03:42:44.117468 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-08 03:42:44.117472 | orchestrator | Sunday 08 February 2026 03:42:42 +0000 (0:00:00.363) 0:00:06.400 ******* 2026-02-08 03:42:44.117475 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0) 2026-02-08 03:42:44.117482 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1) 2026-02-08 03:42:44.117486 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2) 2026-02-08 03:42:44.117490 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3) 2026-02-08 03:42:44.117493 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4) 2026-02-08 03:42:44.117497 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5) 2026-02-08 03:42:44.117501 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6) 2026-02-08 03:42:44.117504 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7) 2026-02-08 03:42:44.117508 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda) 2026-02-08 03:42:44.117512 | orchestrator | included: 
/ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb) 2026-02-08 03:42:44.117515 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc) 2026-02-08 03:42:44.117519 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd) 2026-02-08 03:42:44.117523 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0) 2026-02-08 03:42:44.117526 | orchestrator | 2026-02-08 03:42:44.117530 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-08 03:42:44.117534 | orchestrator | Sunday 08 February 2026 03:42:42 +0000 (0:00:00.459) 0:00:06.860 ******* 2026-02-08 03:42:44.117537 | orchestrator | skipping: [testbed-node-3] 2026-02-08 03:42:44.117541 | orchestrator | 2026-02-08 03:42:44.117545 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-08 03:42:44.117549 | orchestrator | Sunday 08 February 2026 03:42:42 +0000 (0:00:00.212) 0:00:07.072 ******* 2026-02-08 03:42:44.117552 | orchestrator | skipping: [testbed-node-3] 2026-02-08 03:42:44.117556 | orchestrator | 2026-02-08 03:42:44.117560 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-08 03:42:44.117563 | orchestrator | Sunday 08 February 2026 03:42:42 +0000 (0:00:00.209) 0:00:07.282 ******* 2026-02-08 03:42:44.117569 | orchestrator | skipping: [testbed-node-3] 2026-02-08 03:42:44.117575 | orchestrator | 2026-02-08 03:42:44.117581 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-08 03:42:44.117587 | orchestrator | Sunday 08 February 2026 03:42:43 +0000 (0:00:00.215) 0:00:07.497 ******* 2026-02-08 03:42:44.117593 | orchestrator | skipping: [testbed-node-3] 2026-02-08 03:42:44.117599 | orchestrator | 2026-02-08 03:42:44.117604 | orchestrator | TASK [Add known 
partitions to the list of available block devices] ************* 2026-02-08 03:42:44.117610 | orchestrator | Sunday 08 February 2026 03:42:43 +0000 (0:00:00.214) 0:00:07.712 ******* 2026-02-08 03:42:44.117616 | orchestrator | skipping: [testbed-node-3] 2026-02-08 03:42:44.117621 | orchestrator | 2026-02-08 03:42:44.117631 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-08 03:42:44.117637 | orchestrator | Sunday 08 February 2026 03:42:43 +0000 (0:00:00.228) 0:00:07.940 ******* 2026-02-08 03:42:44.117644 | orchestrator | skipping: [testbed-node-3] 2026-02-08 03:42:44.117650 | orchestrator | 2026-02-08 03:42:44.117656 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-08 03:42:44.117662 | orchestrator | Sunday 08 February 2026 03:42:43 +0000 (0:00:00.211) 0:00:08.152 ******* 2026-02-08 03:42:44.117668 | orchestrator | skipping: [testbed-node-3] 2026-02-08 03:42:44.117672 | orchestrator | 2026-02-08 03:42:44.117679 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-08 03:42:52.490877 | orchestrator | Sunday 08 February 2026 03:42:44 +0000 (0:00:00.258) 0:00:08.411 ******* 2026-02-08 03:42:52.491042 | orchestrator | skipping: [testbed-node-3] 2026-02-08 03:42:52.491055 | orchestrator | 2026-02-08 03:42:52.491064 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-08 03:42:52.491071 | orchestrator | Sunday 08 February 2026 03:42:44 +0000 (0:00:00.735) 0:00:09.146 ******* 2026-02-08 03:42:52.491078 | orchestrator | ok: [testbed-node-3] => (item=sda1) 2026-02-08 03:42:52.491086 | orchestrator | ok: [testbed-node-3] => (item=sda14) 2026-02-08 03:42:52.491094 | orchestrator | ok: [testbed-node-3] => (item=sda15) 2026-02-08 03:42:52.491101 | orchestrator | ok: [testbed-node-3] => (item=sda16) 2026-02-08 03:42:52.491108 | orchestrator | 2026-02-08 
03:42:52.491115 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-08 03:42:52.491122 | orchestrator | Sunday 08 February 2026 03:42:45 +0000 (0:00:00.697) 0:00:09.844 ******* 2026-02-08 03:42:52.491142 | orchestrator | skipping: [testbed-node-3] 2026-02-08 03:42:52.491148 | orchestrator | 2026-02-08 03:42:52.491155 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-08 03:42:52.491162 | orchestrator | Sunday 08 February 2026 03:42:45 +0000 (0:00:00.224) 0:00:10.068 ******* 2026-02-08 03:42:52.491169 | orchestrator | skipping: [testbed-node-3] 2026-02-08 03:42:52.491176 | orchestrator | 2026-02-08 03:42:52.491182 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-08 03:42:52.491189 | orchestrator | Sunday 08 February 2026 03:42:45 +0000 (0:00:00.212) 0:00:10.280 ******* 2026-02-08 03:42:52.491196 | orchestrator | skipping: [testbed-node-3] 2026-02-08 03:42:52.491203 | orchestrator | 2026-02-08 03:42:52.491209 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-08 03:42:52.491217 | orchestrator | Sunday 08 February 2026 03:42:46 +0000 (0:00:00.226) 0:00:10.507 ******* 2026-02-08 03:42:52.491224 | orchestrator | skipping: [testbed-node-3] 2026-02-08 03:42:52.491230 | orchestrator | 2026-02-08 03:42:52.491237 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2026-02-08 03:42:52.491244 | orchestrator | Sunday 08 February 2026 03:42:46 +0000 (0:00:00.209) 0:00:10.717 ******* 2026-02-08 03:42:52.491251 | orchestrator | skipping: [testbed-node-3] 2026-02-08 03:42:52.491258 | orchestrator | 2026-02-08 03:42:52.491283 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2026-02-08 03:42:52.491290 | orchestrator | Sunday 08 February 2026 03:42:46 +0000 (0:00:00.161) 
0:00:10.878 ******* 2026-02-08 03:42:52.491298 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '658e9559-2696-538a-a0a4-811fe95d0be4'}}) 2026-02-08 03:42:52.491305 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'edf9913e-48af-595a-836b-515c584cb757'}}) 2026-02-08 03:42:52.491312 | orchestrator | 2026-02-08 03:42:52.491318 | orchestrator | TASK [Create block VGs] ******************************************************** 2026-02-08 03:42:52.491326 | orchestrator | Sunday 08 February 2026 03:42:46 +0000 (0:00:00.215) 0:00:11.094 ******* 2026-02-08 03:42:52.491334 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-658e9559-2696-538a-a0a4-811fe95d0be4', 'data_vg': 'ceph-658e9559-2696-538a-a0a4-811fe95d0be4'}) 2026-02-08 03:42:52.491342 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-edf9913e-48af-595a-836b-515c584cb757', 'data_vg': 'ceph-edf9913e-48af-595a-836b-515c584cb757'}) 2026-02-08 03:42:52.491372 | orchestrator | 2026-02-08 03:42:52.491379 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2026-02-08 03:42:52.491385 | orchestrator | Sunday 08 February 2026 03:42:48 +0000 (0:00:01.965) 0:00:13.060 ******* 2026-02-08 03:42:52.491391 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-658e9559-2696-538a-a0a4-811fe95d0be4', 'data_vg': 'ceph-658e9559-2696-538a-a0a4-811fe95d0be4'})  2026-02-08 03:42:52.491399 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-edf9913e-48af-595a-836b-515c584cb757', 'data_vg': 'ceph-edf9913e-48af-595a-836b-515c584cb757'})  2026-02-08 03:42:52.491405 | orchestrator | skipping: [testbed-node-3] 2026-02-08 03:42:52.491411 | orchestrator | 2026-02-08 03:42:52.491418 | orchestrator | TASK [Create block LVs] ******************************************************** 2026-02-08 03:42:52.491424 | orchestrator | Sunday 08 February 2026 
03:42:48 +0000 (0:00:00.161) 0:00:13.221 ******* 2026-02-08 03:42:52.491430 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-658e9559-2696-538a-a0a4-811fe95d0be4', 'data_vg': 'ceph-658e9559-2696-538a-a0a4-811fe95d0be4'}) 2026-02-08 03:42:52.491438 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-edf9913e-48af-595a-836b-515c584cb757', 'data_vg': 'ceph-edf9913e-48af-595a-836b-515c584cb757'}) 2026-02-08 03:42:52.491445 | orchestrator | 2026-02-08 03:42:52.491452 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2026-02-08 03:42:52.491459 | orchestrator | Sunday 08 February 2026 03:42:50 +0000 (0:00:01.450) 0:00:14.672 ******* 2026-02-08 03:42:52.491466 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-658e9559-2696-538a-a0a4-811fe95d0be4', 'data_vg': 'ceph-658e9559-2696-538a-a0a4-811fe95d0be4'})  2026-02-08 03:42:52.491473 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-edf9913e-48af-595a-836b-515c584cb757', 'data_vg': 'ceph-edf9913e-48af-595a-836b-515c584cb757'})  2026-02-08 03:42:52.491480 | orchestrator | skipping: [testbed-node-3] 2026-02-08 03:42:52.491487 | orchestrator | 2026-02-08 03:42:52.491494 | orchestrator | TASK [Create DB VGs] *********************************************************** 2026-02-08 03:42:52.491501 | orchestrator | Sunday 08 February 2026 03:42:50 +0000 (0:00:00.164) 0:00:14.836 ******* 2026-02-08 03:42:52.491527 | orchestrator | skipping: [testbed-node-3] 2026-02-08 03:42:52.491534 | orchestrator | 2026-02-08 03:42:52.491541 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2026-02-08 03:42:52.491548 | orchestrator | Sunday 08 February 2026 03:42:50 +0000 (0:00:00.362) 0:00:15.199 ******* 2026-02-08 03:42:52.491555 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-658e9559-2696-538a-a0a4-811fe95d0be4', 'data_vg': 
'ceph-658e9559-2696-538a-a0a4-811fe95d0be4'})
2026-02-08 03:42:52.491562 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-edf9913e-48af-595a-836b-515c584cb757', 'data_vg': 'ceph-edf9913e-48af-595a-836b-515c584cb757'})
2026-02-08 03:42:52.491569 | orchestrator | skipping: [testbed-node-3]
2026-02-08 03:42:52.491576 | orchestrator |
2026-02-08 03:42:52.491583 | orchestrator | TASK [Create WAL VGs] **********************************************************
2026-02-08 03:42:52.491590 | orchestrator | Sunday 08 February 2026 03:42:51 +0000 (0:00:00.149) 0:00:15.348 *******
2026-02-08 03:42:52.491596 | orchestrator | skipping: [testbed-node-3]
2026-02-08 03:42:52.491603 | orchestrator |
2026-02-08 03:42:52.491611 | orchestrator | TASK [Print 'Create WAL VGs'] **************************************************
2026-02-08 03:42:52.491618 | orchestrator | Sunday 08 February 2026 03:42:51 +0000 (0:00:00.139) 0:00:15.487 *******
2026-02-08 03:42:52.491625 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-658e9559-2696-538a-a0a4-811fe95d0be4', 'data_vg': 'ceph-658e9559-2696-538a-a0a4-811fe95d0be4'})
2026-02-08 03:42:52.491632 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-edf9913e-48af-595a-836b-515c584cb757', 'data_vg': 'ceph-edf9913e-48af-595a-836b-515c584cb757'})
2026-02-08 03:42:52.491646 | orchestrator | skipping: [testbed-node-3]
2026-02-08 03:42:52.491653 | orchestrator |
2026-02-08 03:42:52.491660 | orchestrator | TASK [Create DB+WAL VGs] *******************************************************
2026-02-08 03:42:52.491667 | orchestrator | Sunday 08 February 2026 03:42:51 +0000 (0:00:00.183) 0:00:15.670 *******
2026-02-08 03:42:52.491674 | orchestrator | skipping: [testbed-node-3]
2026-02-08 03:42:52.491681 | orchestrator |
2026-02-08 03:42:52.491693 | orchestrator | TASK [Print 'Create DB+WAL VGs'] ***********************************************
2026-02-08 03:42:52.491700 | orchestrator | Sunday 08 February 2026 03:42:51 +0000 (0:00:00.144) 0:00:15.815 *******
2026-02-08 03:42:52.491707 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-658e9559-2696-538a-a0a4-811fe95d0be4', 'data_vg': 'ceph-658e9559-2696-538a-a0a4-811fe95d0be4'})
2026-02-08 03:42:52.491714 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-edf9913e-48af-595a-836b-515c584cb757', 'data_vg': 'ceph-edf9913e-48af-595a-836b-515c584cb757'})
2026-02-08 03:42:52.491721 | orchestrator | skipping: [testbed-node-3]
2026-02-08 03:42:52.491728 | orchestrator |
2026-02-08 03:42:52.491735 | orchestrator | TASK [Prepare variables for OSD count check] ***********************************
2026-02-08 03:42:52.491742 | orchestrator | Sunday 08 February 2026 03:42:51 +0000 (0:00:00.168) 0:00:15.983 *******
2026-02-08 03:42:52.491748 | orchestrator | ok: [testbed-node-3]
2026-02-08 03:42:52.491755 | orchestrator |
2026-02-08 03:42:52.491762 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] ****************
2026-02-08 03:42:52.491769 | orchestrator | Sunday 08 February 2026 03:42:51 +0000 (0:00:00.167) 0:00:16.151 *******
2026-02-08 03:42:52.491775 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-658e9559-2696-538a-a0a4-811fe95d0be4', 'data_vg': 'ceph-658e9559-2696-538a-a0a4-811fe95d0be4'})
2026-02-08 03:42:52.491782 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-edf9913e-48af-595a-836b-515c584cb757', 'data_vg': 'ceph-edf9913e-48af-595a-836b-515c584cb757'})
2026-02-08 03:42:52.491789 | orchestrator | skipping: [testbed-node-3]
2026-02-08 03:42:52.491795 | orchestrator |
2026-02-08 03:42:52.491802 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] ***************
2026-02-08 03:42:52.491808 | orchestrator | Sunday 08 February 2026 03:42:52 +0000 (0:00:00.160) 0:00:16.311 *******
2026-02-08 03:42:52.491814 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-658e9559-2696-538a-a0a4-811fe95d0be4', 'data_vg': 'ceph-658e9559-2696-538a-a0a4-811fe95d0be4'})
2026-02-08 03:42:52.491821 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-edf9913e-48af-595a-836b-515c584cb757', 'data_vg': 'ceph-edf9913e-48af-595a-836b-515c584cb757'})
2026-02-08 03:42:52.491828 | orchestrator | skipping: [testbed-node-3]
2026-02-08 03:42:52.491834 | orchestrator |
2026-02-08 03:42:52.491841 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************
2026-02-08 03:42:52.491847 | orchestrator | Sunday 08 February 2026 03:42:52 +0000 (0:00:00.153) 0:00:16.464 *******
2026-02-08 03:42:52.491854 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-658e9559-2696-538a-a0a4-811fe95d0be4', 'data_vg': 'ceph-658e9559-2696-538a-a0a4-811fe95d0be4'})
2026-02-08 03:42:52.491860 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-edf9913e-48af-595a-836b-515c584cb757', 'data_vg': 'ceph-edf9913e-48af-595a-836b-515c584cb757'})
2026-02-08 03:42:52.491867 | orchestrator | skipping: [testbed-node-3]
2026-02-08 03:42:52.491873 | orchestrator |
2026-02-08 03:42:52.491879 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] *********************
2026-02-08 03:42:52.491885 | orchestrator | Sunday 08 February 2026 03:42:52 +0000 (0:00:00.171) 0:00:16.636 *******
2026-02-08 03:42:52.491892 | orchestrator | skipping: [testbed-node-3]
2026-02-08 03:42:52.491898 | orchestrator |
2026-02-08 03:42:52.491904 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ********************
2026-02-08 03:42:52.491916 | orchestrator | Sunday 08 February 2026 03:42:52 +0000 (0:00:00.151) 0:00:16.788 *******
2026-02-08 03:42:59.322501 | orchestrator | skipping: [testbed-node-3]
2026-02-08 03:42:59.322603 | orchestrator |
2026-02-08 03:42:59.322620 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] *****************
2026-02-08 03:42:59.322634 | orchestrator | Sunday 08 February 2026 03:42:52 +0000 (0:00:00.157) 0:00:16.945 *******
2026-02-08 03:42:59.322646 | orchestrator | skipping: [testbed-node-3]
2026-02-08 03:42:59.322658 | orchestrator |
2026-02-08 03:42:59.322669 | orchestrator | TASK [Print number of OSDs wanted per DB VG] ***********************************
2026-02-08 03:42:59.322681 | orchestrator | Sunday 08 February 2026 03:42:53 +0000 (0:00:00.386) 0:00:17.331 *******
2026-02-08 03:42:59.322693 | orchestrator | ok: [testbed-node-3] => {
2026-02-08 03:42:59.322706 | orchestrator |  "_num_osds_wanted_per_db_vg": {}
2026-02-08 03:42:59.322718 | orchestrator | }
2026-02-08 03:42:59.322731 | orchestrator |
2026-02-08 03:42:59.322742 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] **********************************
2026-02-08 03:42:59.322754 | orchestrator | Sunday 08 February 2026 03:42:53 +0000 (0:00:00.171) 0:00:17.503 *******
2026-02-08 03:42:59.322765 | orchestrator | ok: [testbed-node-3] => {
2026-02-08 03:42:59.322776 | orchestrator |  "_num_osds_wanted_per_wal_vg": {}
2026-02-08 03:42:59.322788 | orchestrator | }
2026-02-08 03:42:59.322817 | orchestrator |
2026-02-08 03:42:59.322828 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] *******************************
2026-02-08 03:42:59.322838 | orchestrator | Sunday 08 February 2026 03:42:53 +0000 (0:00:00.153) 0:00:17.657 *******
2026-02-08 03:42:59.322849 | orchestrator | ok: [testbed-node-3] => {
2026-02-08 03:42:59.322859 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {}
2026-02-08 03:42:59.322870 | orchestrator | }
2026-02-08 03:42:59.322880 | orchestrator |
2026-02-08 03:42:59.322891 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ********************
2026-02-08 03:42:59.322902 | orchestrator | Sunday 08 February 2026 03:42:53 +0000 (0:00:00.149) 0:00:17.806 *******
2026-02-08 03:42:59.322912 | orchestrator | ok: [testbed-node-3]
2026-02-08 03:42:59.322923 | orchestrator |
2026-02-08 03:42:59.322933 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] *******************
2026-02-08 03:42:59.322995 | orchestrator | Sunday 08 February 2026 03:42:54 +0000 (0:00:00.696) 0:00:18.502 *******
2026-02-08 03:42:59.323007 | orchestrator | ok: [testbed-node-3]
2026-02-08 03:42:59.323018 | orchestrator |
2026-02-08 03:42:59.323045 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] ****************
2026-02-08 03:42:59.323058 | orchestrator | Sunday 08 February 2026 03:42:54 +0000 (0:00:00.524) 0:00:19.027 *******
2026-02-08 03:42:59.323069 | orchestrator | ok: [testbed-node-3]
2026-02-08 03:42:59.323079 | orchestrator |
2026-02-08 03:42:59.323090 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] *************************
2026-02-08 03:42:59.323101 | orchestrator | Sunday 08 February 2026 03:42:55 +0000 (0:00:00.516) 0:00:19.544 *******
2026-02-08 03:42:59.323112 | orchestrator | ok: [testbed-node-3]
2026-02-08 03:42:59.323122 | orchestrator |
2026-02-08 03:42:59.323132 | orchestrator | TASK [Calculate VG sizes (without buffer)] *************************************
2026-02-08 03:42:59.323143 | orchestrator | Sunday 08 February 2026 03:42:55 +0000 (0:00:00.154) 0:00:19.698 *******
2026-02-08 03:42:59.323152 | orchestrator | skipping: [testbed-node-3]
2026-02-08 03:42:59.323162 | orchestrator |
2026-02-08 03:42:59.323171 | orchestrator | TASK [Calculate VG sizes (with buffer)] ****************************************
2026-02-08 03:42:59.323180 | orchestrator | Sunday 08 February 2026 03:42:55 +0000 (0:00:00.118) 0:00:19.817 *******
2026-02-08 03:42:59.323190 | orchestrator | skipping: [testbed-node-3]
2026-02-08 03:42:59.323199 | orchestrator |
2026-02-08 03:42:59.323208 | orchestrator | TASK [Print LVM VGs report data] ***********************************************
2026-02-08 03:42:59.323220 | orchestrator | Sunday 08 February 2026 03:42:55 +0000 (0:00:00.110) 0:00:19.927 *******
2026-02-08 03:42:59.323230 | orchestrator | ok: [testbed-node-3] => {
2026-02-08 03:42:59.323241 | orchestrator |  "vgs_report": {
2026-02-08 03:42:59.323252 | orchestrator |  "vg": []
2026-02-08 03:42:59.323262 | orchestrator |  }
2026-02-08 03:42:59.323296 | orchestrator | }
2026-02-08 03:42:59.323306 | orchestrator |
2026-02-08 03:42:59.323317 | orchestrator | TASK [Print LVM VG sizes] ******************************************************
2026-02-08 03:42:59.323327 | orchestrator | Sunday 08 February 2026 03:42:55 +0000 (0:00:00.153) 0:00:20.081 *******
2026-02-08 03:42:59.323338 | orchestrator | skipping: [testbed-node-3]
2026-02-08 03:42:59.323348 | orchestrator |
2026-02-08 03:42:59.323359 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************
2026-02-08 03:42:59.323370 | orchestrator | Sunday 08 February 2026 03:42:55 +0000 (0:00:00.145) 0:00:20.226 *******
2026-02-08 03:42:59.323380 | orchestrator | skipping: [testbed-node-3]
2026-02-08 03:42:59.323390 | orchestrator |
2026-02-08 03:42:59.323400 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] ****************************
2026-02-08 03:42:59.323410 | orchestrator | Sunday 08 February 2026 03:42:56 +0000 (0:00:00.382) 0:00:20.608 *******
2026-02-08 03:42:59.323421 | orchestrator | skipping: [testbed-node-3]
2026-02-08 03:42:59.323431 | orchestrator |
2026-02-08 03:42:59.323442 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] *******************
2026-02-08 03:42:59.323452 | orchestrator | Sunday 08 February 2026 03:42:56 +0000 (0:00:00.155) 0:00:20.764 *******
2026-02-08 03:42:59.323462 | orchestrator | skipping: [testbed-node-3]
2026-02-08 03:42:59.323472 | orchestrator |
2026-02-08 03:42:59.323482 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] ***********************
2026-02-08 03:42:59.323493 | orchestrator | Sunday 08 February 2026 03:42:56 +0000 (0:00:00.137) 0:00:20.902 *******
2026-02-08 03:42:59.323504 | orchestrator | skipping: [testbed-node-3]
2026-02-08 03:42:59.323514 | orchestrator |
2026-02-08 03:42:59.323525 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] ***************************
2026-02-08 03:42:59.323536 | orchestrator | Sunday 08 February 2026 03:42:56 +0000 (0:00:00.158) 0:00:21.060 *******
2026-02-08 03:42:59.323546 | orchestrator | skipping: [testbed-node-3]
2026-02-08 03:42:59.323556 | orchestrator |
2026-02-08 03:42:59.323566 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] *****************
2026-02-08 03:42:59.323576 | orchestrator | Sunday 08 February 2026 03:42:56 +0000 (0:00:00.154) 0:00:21.215 *******
2026-02-08 03:42:59.323587 | orchestrator | skipping: [testbed-node-3]
2026-02-08 03:42:59.323595 | orchestrator |
2026-02-08 03:42:59.323604 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] ****************
2026-02-08 03:42:59.323614 | orchestrator | Sunday 08 February 2026 03:42:57 +0000 (0:00:00.146) 0:00:21.362 *******
2026-02-08 03:42:59.323641 | orchestrator | skipping: [testbed-node-3]
2026-02-08 03:42:59.323651 | orchestrator |
2026-02-08 03:42:59.323661 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ********************
2026-02-08 03:42:59.323671 | orchestrator | Sunday 08 February 2026 03:42:57 +0000 (0:00:00.147) 0:00:21.509 *******
2026-02-08 03:42:59.323681 | orchestrator | skipping: [testbed-node-3]
2026-02-08 03:42:59.323690 | orchestrator |
2026-02-08 03:42:59.323698 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] *****************
2026-02-08 03:42:59.323708 | orchestrator | Sunday 08 February 2026 03:42:57 +0000 (0:00:00.149) 0:00:21.658 *******
2026-02-08 03:42:59.323717 | orchestrator | skipping: [testbed-node-3]
2026-02-08 03:42:59.323727 | orchestrator |
2026-02-08 03:42:59.323736 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] *********************
2026-02-08 03:42:59.323747 | orchestrator | Sunday 08 February 2026 03:42:57 +0000 (0:00:00.161) 0:00:21.819 *******
2026-02-08 03:42:59.323756 | orchestrator | skipping: [testbed-node-3]
2026-02-08 03:42:59.323766 | orchestrator |
2026-02-08 03:42:59.323775 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] ***********
2026-02-08 03:42:59.323784 | orchestrator | Sunday 08 February 2026 03:42:57 +0000 (0:00:00.137) 0:00:21.956 *******
2026-02-08 03:42:59.323793 | orchestrator | skipping: [testbed-node-3]
2026-02-08 03:42:59.323802 | orchestrator |
2026-02-08 03:42:59.323811 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] *************************
2026-02-08 03:42:59.323821 | orchestrator | Sunday 08 February 2026 03:42:57 +0000 (0:00:00.148) 0:00:22.105 *******
2026-02-08 03:42:59.323844 | orchestrator | skipping: [testbed-node-3]
2026-02-08 03:42:59.323854 | orchestrator |
2026-02-08 03:42:59.323864 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] *********************
2026-02-08 03:42:59.323874 | orchestrator | Sunday 08 February 2026 03:42:57 +0000 (0:00:00.151) 0:00:22.256 *******
2026-02-08 03:42:59.323884 | orchestrator | skipping: [testbed-node-3]
2026-02-08 03:42:59.323894 | orchestrator |
2026-02-08 03:42:59.323904 | orchestrator | TASK [Create DB LVs for ceph_db_devices] ***************************************
2026-02-08 03:42:59.323913 | orchestrator | Sunday 08 February 2026 03:42:58 +0000 (0:00:00.369) 0:00:22.626 *******
2026-02-08 03:42:59.323932 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-658e9559-2696-538a-a0a4-811fe95d0be4', 'data_vg': 'ceph-658e9559-2696-538a-a0a4-811fe95d0be4'})
2026-02-08 03:42:59.323945 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-edf9913e-48af-595a-836b-515c584cb757', 'data_vg': 'ceph-edf9913e-48af-595a-836b-515c584cb757'})
2026-02-08 03:42:59.323955 | orchestrator | skipping: [testbed-node-3]
2026-02-08 03:42:59.323988 | orchestrator |
2026-02-08 03:42:59.323998 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] *******************************
2026-02-08 03:42:59.324009 | orchestrator | Sunday 08 February 2026 03:42:58 +0000 (0:00:00.157) 0:00:22.784 *******
2026-02-08 03:42:59.324019 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-658e9559-2696-538a-a0a4-811fe95d0be4', 'data_vg': 'ceph-658e9559-2696-538a-a0a4-811fe95d0be4'})
2026-02-08 03:42:59.324029 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-edf9913e-48af-595a-836b-515c584cb757', 'data_vg': 'ceph-edf9913e-48af-595a-836b-515c584cb757'})
2026-02-08 03:42:59.324040 | orchestrator | skipping: [testbed-node-3]
2026-02-08 03:42:59.324050 | orchestrator |
2026-02-08 03:42:59.324061 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] *************************************
2026-02-08 03:42:59.324072 | orchestrator | Sunday 08 February 2026 03:42:58 +0000 (0:00:00.169) 0:00:22.953 *******
2026-02-08 03:42:59.324082 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-658e9559-2696-538a-a0a4-811fe95d0be4', 'data_vg': 'ceph-658e9559-2696-538a-a0a4-811fe95d0be4'})
2026-02-08 03:42:59.324094 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-edf9913e-48af-595a-836b-515c584cb757', 'data_vg': 'ceph-edf9913e-48af-595a-836b-515c584cb757'})
2026-02-08 03:42:59.324105 | orchestrator | skipping: [testbed-node-3]
2026-02-08 03:42:59.324115 | orchestrator |
2026-02-08 03:42:59.324125 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] *****************************
2026-02-08 03:42:59.324135 | orchestrator | Sunday 08 February 2026 03:42:58 +0000 (0:00:00.168) 0:00:23.121 *******
2026-02-08 03:42:59.324145 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-658e9559-2696-538a-a0a4-811fe95d0be4', 'data_vg': 'ceph-658e9559-2696-538a-a0a4-811fe95d0be4'})
2026-02-08 03:42:59.324155 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-edf9913e-48af-595a-836b-515c584cb757', 'data_vg': 'ceph-edf9913e-48af-595a-836b-515c584cb757'})
2026-02-08 03:42:59.324165 | orchestrator | skipping: [testbed-node-3]
2026-02-08 03:42:59.324177 | orchestrator |
2026-02-08 03:42:59.324188 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] **********************************
2026-02-08 03:42:59.324199 | orchestrator | Sunday 08 February 2026 03:42:58 +0000 (0:00:00.170) 0:00:23.292 *******
2026-02-08 03:42:59.324210 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-658e9559-2696-538a-a0a4-811fe95d0be4', 'data_vg': 'ceph-658e9559-2696-538a-a0a4-811fe95d0be4'})
2026-02-08 03:42:59.324220 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-edf9913e-48af-595a-836b-515c584cb757', 'data_vg': 'ceph-edf9913e-48af-595a-836b-515c584cb757'})
2026-02-08 03:42:59.324230 | orchestrator | skipping: [testbed-node-3]
2026-02-08 03:42:59.324240 | orchestrator |
2026-02-08 03:42:59.324251 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] **************************
2026-02-08 03:42:59.324261 | orchestrator | Sunday 08 February 2026 03:42:59 +0000 (0:00:00.156) 0:00:23.465 *******
2026-02-08 03:42:59.324293 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-658e9559-2696-538a-a0a4-811fe95d0be4', 'data_vg': 'ceph-658e9559-2696-538a-a0a4-811fe95d0be4'})
2026-02-08 03:43:05.167280 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-edf9913e-48af-595a-836b-515c584cb757', 'data_vg': 'ceph-edf9913e-48af-595a-836b-515c584cb757'})
2026-02-08 03:43:05.167357 | orchestrator | skipping: [testbed-node-3]
2026-02-08 03:43:05.167365 | orchestrator |
2026-02-08 03:43:05.167370 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] ***********************************
2026-02-08 03:43:05.167376 | orchestrator | Sunday 08 February 2026 03:42:59 +0000 (0:00:00.156) 0:00:23.622 *******
2026-02-08 03:43:05.167381 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-658e9559-2696-538a-a0a4-811fe95d0be4', 'data_vg': 'ceph-658e9559-2696-538a-a0a4-811fe95d0be4'})
2026-02-08 03:43:05.167385 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-edf9913e-48af-595a-836b-515c584cb757', 'data_vg': 'ceph-edf9913e-48af-595a-836b-515c584cb757'})
2026-02-08 03:43:05.167389 | orchestrator | skipping: [testbed-node-3]
2026-02-08 03:43:05.167393 | orchestrator |
2026-02-08 03:43:05.167397 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] ***************************
2026-02-08 03:43:05.167401 | orchestrator | Sunday 08 February 2026 03:42:59 +0000 (0:00:00.182) 0:00:23.806 *******
2026-02-08 03:43:05.167405 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-658e9559-2696-538a-a0a4-811fe95d0be4', 'data_vg': 'ceph-658e9559-2696-538a-a0a4-811fe95d0be4'})
2026-02-08 03:43:05.167409 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-edf9913e-48af-595a-836b-515c584cb757', 'data_vg': 'ceph-edf9913e-48af-595a-836b-515c584cb757'})
2026-02-08 03:43:05.167413 | orchestrator | skipping: [testbed-node-3]
2026-02-08 03:43:05.167416 | orchestrator |
2026-02-08 03:43:05.167432 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ********************************
2026-02-08 03:43:05.167436 | orchestrator | Sunday 08 February 2026 03:42:59 +0000 (0:00:00.169) 0:00:23.975 *******
2026-02-08 03:43:05.167440 | orchestrator | ok: [testbed-node-3]
2026-02-08 03:43:05.167444 | orchestrator |
2026-02-08 03:43:05.167448 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ********************************
2026-02-08 03:43:05.167452 | orchestrator | Sunday 08 February 2026 03:43:00 +0000 (0:00:00.581) 0:00:24.557 *******
2026-02-08 03:43:05.167456 | orchestrator | ok: [testbed-node-3]
2026-02-08 03:43:05.167459 | orchestrator |
2026-02-08 03:43:05.167463 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] ***********************
2026-02-08 03:43:05.167467 | orchestrator | Sunday 08 February 2026 03:43:00 +0000 (0:00:00.545) 0:00:25.102 *******
2026-02-08 03:43:05.167471 | orchestrator | ok: [testbed-node-3]
2026-02-08 03:43:05.167474 | orchestrator |
2026-02-08 03:43:05.167478 | orchestrator | TASK [Create list of VG/LV names] **********************************************
2026-02-08 03:43:05.167482 | orchestrator | Sunday 08 February 2026 03:43:00 +0000 (0:00:00.156) 0:00:25.258 *******
2026-02-08 03:43:05.167486 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-658e9559-2696-538a-a0a4-811fe95d0be4', 'vg_name': 'ceph-658e9559-2696-538a-a0a4-811fe95d0be4'})
2026-02-08 03:43:05.167491 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-edf9913e-48af-595a-836b-515c584cb757', 'vg_name': 'ceph-edf9913e-48af-595a-836b-515c584cb757'})
2026-02-08 03:43:05.167494 | orchestrator |
2026-02-08 03:43:05.167499 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] **********************
2026-02-08 03:43:05.167503 | orchestrator | Sunday 08 February 2026 03:43:01 +0000 (0:00:00.213) 0:00:25.472 *******
2026-02-08 03:43:05.167506 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-658e9559-2696-538a-a0a4-811fe95d0be4', 'data_vg': 'ceph-658e9559-2696-538a-a0a4-811fe95d0be4'})
2026-02-08 03:43:05.167510 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-edf9913e-48af-595a-836b-515c584cb757', 'data_vg': 'ceph-edf9913e-48af-595a-836b-515c584cb757'})
2026-02-08 03:43:05.167529 | orchestrator | skipping: [testbed-node-3]
2026-02-08 03:43:05.167533 | orchestrator |
2026-02-08 03:43:05.167537 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] *************************
2026-02-08 03:43:05.167540 | orchestrator | Sunday 08 February 2026 03:43:01 +0000 (0:00:00.434) 0:00:25.906 *******
2026-02-08 03:43:05.167544 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-658e9559-2696-538a-a0a4-811fe95d0be4', 'data_vg': 'ceph-658e9559-2696-538a-a0a4-811fe95d0be4'})
2026-02-08 03:43:05.167548 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-edf9913e-48af-595a-836b-515c584cb757', 'data_vg': 'ceph-edf9913e-48af-595a-836b-515c584cb757'})
2026-02-08 03:43:05.167552 | orchestrator | skipping: [testbed-node-3]
2026-02-08 03:43:05.167555 | orchestrator |
2026-02-08 03:43:05.167559 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************
2026-02-08 03:43:05.167563 | orchestrator | Sunday 08 February 2026 03:43:01 +0000 (0:00:00.175) 0:00:26.082 *******
2026-02-08 03:43:05.167567 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-658e9559-2696-538a-a0a4-811fe95d0be4', 'data_vg': 'ceph-658e9559-2696-538a-a0a4-811fe95d0be4'})
2026-02-08 03:43:05.167571 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-edf9913e-48af-595a-836b-515c584cb757', 'data_vg': 'ceph-edf9913e-48af-595a-836b-515c584cb757'})
2026-02-08 03:43:05.167574 | orchestrator | skipping: [testbed-node-3]
2026-02-08 03:43:05.167578 | orchestrator |
2026-02-08 03:43:05.167582 | orchestrator | TASK [Print LVM report data] ***************************************************
2026-02-08 03:43:05.167585 | orchestrator | Sunday 08 February 2026 03:43:01 +0000 (0:00:00.174) 0:00:26.257 *******
2026-02-08 03:43:05.167598 | orchestrator | ok: [testbed-node-3] => {
2026-02-08 03:43:05.167602 | orchestrator |  "lvm_report": {
2026-02-08 03:43:05.167606 | orchestrator |  "lv": [
2026-02-08 03:43:05.167610 | orchestrator |  {
2026-02-08 03:43:05.167614 | orchestrator |  "lv_name": "osd-block-658e9559-2696-538a-a0a4-811fe95d0be4",
2026-02-08 03:43:05.167618 | orchestrator |  "vg_name": "ceph-658e9559-2696-538a-a0a4-811fe95d0be4"
2026-02-08 03:43:05.167622 | orchestrator |  },
2026-02-08 03:43:05.167626 | orchestrator |  {
2026-02-08 03:43:05.167630 | orchestrator |  "lv_name": "osd-block-edf9913e-48af-595a-836b-515c584cb757",
2026-02-08 03:43:05.167633 | orchestrator |  "vg_name": "ceph-edf9913e-48af-595a-836b-515c584cb757"
2026-02-08 03:43:05.167637 | orchestrator |  }
2026-02-08 03:43:05.167641 | orchestrator |  ],
2026-02-08 03:43:05.167645 | orchestrator |  "pv": [
2026-02-08 03:43:05.167648 | orchestrator |  {
2026-02-08 03:43:05.167652 | orchestrator |  "pv_name": "/dev/sdb",
2026-02-08 03:43:05.167656 | orchestrator |  "vg_name": "ceph-658e9559-2696-538a-a0a4-811fe95d0be4"
2026-02-08 03:43:05.167660 | orchestrator |  },
2026-02-08 03:43:05.167663 | orchestrator |  {
2026-02-08 03:43:05.167667 | orchestrator |  "pv_name": "/dev/sdc",
2026-02-08 03:43:05.167671 | orchestrator |  "vg_name": "ceph-edf9913e-48af-595a-836b-515c584cb757"
2026-02-08 03:43:05.167675 | orchestrator |  }
2026-02-08 03:43:05.167678 | orchestrator |  ]
2026-02-08 03:43:05.167682 | orchestrator |  }
2026-02-08 03:43:05.167686 | orchestrator | }
2026-02-08 03:43:05.167690 | orchestrator |
2026-02-08 03:43:05.167694 | orchestrator | PLAY [Ceph create LVM devices] *************************************************
2026-02-08 03:43:05.167698 | orchestrator |
2026-02-08 03:43:05.167702 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2026-02-08 03:43:05.167705 | orchestrator | Sunday 08 February 2026 03:43:02 +0000 (0:00:00.367) 0:00:26.624 *******
2026-02-08 03:43:05.167709 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)]
2026-02-08 03:43:05.167713 | orchestrator |
2026-02-08 03:43:05.167720 | orchestrator | TASK [Get initial list of available block devices] *****************************
2026-02-08 03:43:05.167723 | orchestrator | Sunday 08 February 2026 03:43:02 +0000 (0:00:00.276) 0:00:26.901 *******
2026-02-08 03:43:05.167731 | orchestrator | ok: [testbed-node-4]
2026-02-08 03:43:05.167735 | orchestrator |
2026-02-08 03:43:05.167739 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-08 03:43:05.167742 | orchestrator | Sunday 08 February 2026 03:43:02 +0000 (0:00:00.267) 0:00:27.169 *******
2026-02-08 03:43:05.167746 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0)
2026-02-08 03:43:05.167750 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1)
2026-02-08 03:43:05.167754 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2)
2026-02-08 03:43:05.167757 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3)
2026-02-08 03:43:05.167761 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4)
2026-02-08 03:43:05.167764 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5)
2026-02-08 03:43:05.167768 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6)
2026-02-08 03:43:05.167772 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7)
2026-02-08 03:43:05.167775 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda)
2026-02-08 03:43:05.167779 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb)
2026-02-08 03:43:05.167783 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc)
2026-02-08 03:43:05.167786 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd)
2026-02-08 03:43:05.167790 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0)
2026-02-08 03:43:05.167794 | orchestrator |
2026-02-08 03:43:05.167798 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-08 03:43:05.167801 | orchestrator | Sunday 08 February 2026 03:43:03 +0000 (0:00:00.435) 0:00:27.604 *******
2026-02-08 03:43:05.167805 | orchestrator | skipping: [testbed-node-4]
2026-02-08 03:43:05.167809 | orchestrator |
2026-02-08 03:43:05.167812 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-08 03:43:05.167816 | orchestrator | Sunday 08 February 2026 03:43:03 +0000 (0:00:00.206) 0:00:27.810 *******
2026-02-08 03:43:05.167820 | orchestrator | skipping: [testbed-node-4]
2026-02-08 03:43:05.167823 | orchestrator |
2026-02-08 03:43:05.167827 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-08 03:43:05.167831 | orchestrator | Sunday 08 February 2026 03:43:04 +0000 (0:00:00.748) 0:00:28.559 *******
2026-02-08 03:43:05.167834 | orchestrator | skipping: [testbed-node-4]
2026-02-08 03:43:05.167838 | orchestrator |
2026-02-08 03:43:05.167843 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-08 03:43:05.167847 | orchestrator | Sunday 08 February 2026 03:43:04 +0000 (0:00:00.224) 0:00:28.783 *******
2026-02-08 03:43:05.167852 | orchestrator | skipping: [testbed-node-4]
2026-02-08 03:43:05.167856 | orchestrator |
2026-02-08 03:43:05.167861 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-08 03:43:05.167865 | orchestrator | Sunday 08 February 2026 03:43:04 +0000 (0:00:00.226) 0:00:29.010 *******
2026-02-08 03:43:05.167869 | orchestrator | skipping: [testbed-node-4]
2026-02-08 03:43:05.167880 | orchestrator |
2026-02-08 03:43:05.167885 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-08 03:43:05.167889 | orchestrator | Sunday 08 February 2026 03:43:04 +0000 (0:00:00.217) 0:00:29.228 *******
2026-02-08 03:43:05.167894 | orchestrator | skipping: [testbed-node-4]
2026-02-08 03:43:05.167898 | orchestrator |
2026-02-08 03:43:05.167906 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-08 03:43:17.246353 | orchestrator | Sunday 08 February 2026 03:43:05 +0000 (0:00:00.238) 0:00:29.467 *******
2026-02-08 03:43:17.246509 | orchestrator | skipping: [testbed-node-4]
2026-02-08 03:43:17.246528 | orchestrator |
2026-02-08 03:43:17.246539 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-08 03:43:17.246549 | orchestrator | Sunday 08 February 2026 03:43:05 +0000 (0:00:00.221) 0:00:29.688 *******
2026-02-08 03:43:17.246559 | orchestrator | skipping: [testbed-node-4]
2026-02-08 03:43:17.246569 | orchestrator |
2026-02-08 03:43:17.246578 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-08 03:43:17.246588 | orchestrator | Sunday 08 February 2026 03:43:05 +0000 (0:00:00.217) 0:00:29.905 *******
2026-02-08 03:43:17.246598 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_6152c601-f22c-4ab1-825c-0b7a8c2f9bf8)
2026-02-08 03:43:17.246608 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_6152c601-f22c-4ab1-825c-0b7a8c2f9bf8)
2026-02-08 03:43:17.246618 | orchestrator |
2026-02-08 03:43:17.246627 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-08 03:43:17.246637 | orchestrator | Sunday 08 February 2026 03:43:06 +0000 (0:00:00.462) 0:00:30.368 *******
2026-02-08 03:43:17.246647 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_2c937877-c8d8-449b-a5f6-0239aca924e2)
2026-02-08 03:43:17.246657 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_2c937877-c8d8-449b-a5f6-0239aca924e2)
2026-02-08 03:43:17.246668 | orchestrator |
2026-02-08 03:43:17.246686 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-08 03:43:17.246702 | orchestrator | Sunday 08 February 2026 03:43:06 +0000 (0:00:00.463) 0:00:30.831 *******
2026-02-08 03:43:17.246736 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_e630d271-3aac-4ce5-a41f-fdcd87f60fea)
2026-02-08 03:43:17.246754 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_e630d271-3aac-4ce5-a41f-fdcd87f60fea)
2026-02-08 03:43:17.246772 | orchestrator |
2026-02-08 03:43:17.246789 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-08 03:43:17.246804 | orchestrator | Sunday 08 February 2026 03:43:07 +0000 (0:00:00.493) 0:00:31.325 *******
2026-02-08 03:43:17.246821 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_33bf36ec-77e2-4563-8915-2d028f665133)
2026-02-08 03:43:17.246832 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_33bf36ec-77e2-4563-8915-2d028f665133)
2026-02-08 03:43:17.246841 | orchestrator |
2026-02-08 03:43:17.246851 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-08 03:43:17.246860 | orchestrator | Sunday 08 February 2026 03:43:07 +0000 (0:00:00.763) 0:00:32.089 *******
2026-02-08 03:43:17.246870 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001)
2026-02-08 03:43:17.246880 | orchestrator |
2026-02-08 03:43:17.246892 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-08 03:43:17.246904 | orchestrator | Sunday 08 February 2026 03:43:08 +0000 (0:00:00.673) 0:00:32.762 *******
2026-02-08 03:43:17.246916 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0)
2026-02-08 03:43:17.246929 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1)
2026-02-08 03:43:17.246940 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2)
2026-02-08 03:43:17.246951 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3)
2026-02-08 03:43:17.246963 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4)
2026-02-08 03:43:17.246974 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5)
2026-02-08 03:43:17.247018 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6)
2026-02-08 03:43:17.247030 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7)
2026-02-08 03:43:17.247046 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda)
2026-02-08 03:43:17.247076 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb)
2026-02-08 03:43:17.247093 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc)
2026-02-08 03:43:17.247111 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd)
2026-02-08 03:43:17.247128 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0)
2026-02-08 03:43:17.247146 | orchestrator |
2026-02-08 03:43:17.247163 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-08 03:43:17.247179 | orchestrator | Sunday 08 February 2026 03:43:09 +0000 (0:00:00.986) 0:00:33.749 *******
2026-02-08 03:43:17.247192 | orchestrator | skipping: [testbed-node-4]
2026-02-08 03:43:17.247203 | orchestrator |
2026-02-08 03:43:17.247215 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-08 03:43:17.247225 | orchestrator | Sunday 08 February 2026 03:43:09 +0000 (0:00:00.220) 0:00:33.969 *******
2026-02-08 03:43:17.247235 | orchestrator | skipping: [testbed-node-4]
2026-02-08 03:43:17.247244 | orchestrator |
2026-02-08 03:43:17.247254 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-08 03:43:17.247263 | orchestrator | Sunday 08 February 2026 03:43:09 +0000 (0:00:00.236) 0:00:34.206 *******
2026-02-08 03:43:17.247273 | orchestrator | skipping: [testbed-node-4]
2026-02-08 03:43:17.247282 | orchestrator |
2026-02-08 03:43:17.247311 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-08 03:43:17.247321 | orchestrator | Sunday 08 February 2026 03:43:10 +0000 (0:00:00.234) 0:00:34.440 *******
2026-02-08 03:43:17.247331 | orchestrator | skipping: [testbed-node-4]
2026-02-08 03:43:17.247340 | orchestrator |
2026-02-08 03:43:17.247350 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-08 03:43:17.247359 | orchestrator | Sunday 08 February 2026 03:43:10 +0000 (0:00:00.234) 0:00:34.675 *******
2026-02-08 03:43:17.247369 | orchestrator | skipping: [testbed-node-4]
2026-02-08 03:43:17.247378 | orchestrator |
2026-02-08 03:43:17.247388 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-08 03:43:17.247397 | orchestrator | Sunday 08 February 2026 03:43:10 +0000 (0:00:00.220) 0:00:34.896 *******
2026-02-08 03:43:17.247413 | orchestrator | skipping: [testbed-node-4]
2026-02-08 03:43:17.247429 | orchestrator |
2026-02-08 03:43:17.247446 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-08 03:43:17.247463 | orchestrator | Sunday 08 February 2026 03:43:10 +0000 (0:00:00.244)
0:00:35.140 ******* 2026-02-08 03:43:17.247480 | orchestrator | skipping: [testbed-node-4] 2026-02-08 03:43:17.247497 | orchestrator | 2026-02-08 03:43:17.247513 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-08 03:43:17.247530 | orchestrator | Sunday 08 February 2026 03:43:11 +0000 (0:00:00.215) 0:00:35.355 ******* 2026-02-08 03:43:17.247540 | orchestrator | skipping: [testbed-node-4] 2026-02-08 03:43:17.247549 | orchestrator | 2026-02-08 03:43:17.247559 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-08 03:43:17.247568 | orchestrator | Sunday 08 February 2026 03:43:11 +0000 (0:00:00.201) 0:00:35.556 ******* 2026-02-08 03:43:17.247577 | orchestrator | ok: [testbed-node-4] => (item=sda1) 2026-02-08 03:43:17.247587 | orchestrator | ok: [testbed-node-4] => (item=sda14) 2026-02-08 03:43:17.247597 | orchestrator | ok: [testbed-node-4] => (item=sda15) 2026-02-08 03:43:17.247614 | orchestrator | ok: [testbed-node-4] => (item=sda16) 2026-02-08 03:43:17.247624 | orchestrator | 2026-02-08 03:43:17.247633 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-08 03:43:17.247643 | orchestrator | Sunday 08 February 2026 03:43:12 +0000 (0:00:00.916) 0:00:36.473 ******* 2026-02-08 03:43:17.247652 | orchestrator | skipping: [testbed-node-4] 2026-02-08 03:43:17.247661 | orchestrator | 2026-02-08 03:43:17.247671 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-08 03:43:17.247688 | orchestrator | Sunday 08 February 2026 03:43:12 +0000 (0:00:00.715) 0:00:37.189 ******* 2026-02-08 03:43:17.247697 | orchestrator | skipping: [testbed-node-4] 2026-02-08 03:43:17.247707 | orchestrator | 2026-02-08 03:43:17.247716 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-08 03:43:17.247726 | orchestrator | Sunday 08 
February 2026 03:43:13 +0000 (0:00:00.230) 0:00:37.419 ******* 2026-02-08 03:43:17.247735 | orchestrator | skipping: [testbed-node-4] 2026-02-08 03:43:17.247745 | orchestrator | 2026-02-08 03:43:17.247755 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-08 03:43:17.247769 | orchestrator | Sunday 08 February 2026 03:43:13 +0000 (0:00:00.227) 0:00:37.646 ******* 2026-02-08 03:43:17.247785 | orchestrator | skipping: [testbed-node-4] 2026-02-08 03:43:17.247801 | orchestrator | 2026-02-08 03:43:17.247817 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2026-02-08 03:43:17.247834 | orchestrator | Sunday 08 February 2026 03:43:13 +0000 (0:00:00.210) 0:00:37.857 ******* 2026-02-08 03:43:17.247851 | orchestrator | skipping: [testbed-node-4] 2026-02-08 03:43:17.247868 | orchestrator | 2026-02-08 03:43:17.247885 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2026-02-08 03:43:17.247901 | orchestrator | Sunday 08 February 2026 03:43:13 +0000 (0:00:00.161) 0:00:38.019 ******* 2026-02-08 03:43:17.247919 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '1f36c880-548c-5a66-856f-2c4e799d94fc'}}) 2026-02-08 03:43:17.247930 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '98a4cb59-dd7a-5ec9-b94d-174a40339046'}}) 2026-02-08 03:43:17.247940 | orchestrator | 2026-02-08 03:43:17.247949 | orchestrator | TASK [Create block VGs] ******************************************************** 2026-02-08 03:43:17.247959 | orchestrator | Sunday 08 February 2026 03:43:13 +0000 (0:00:00.197) 0:00:38.216 ******* 2026-02-08 03:43:17.247969 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-1f36c880-548c-5a66-856f-2c4e799d94fc', 'data_vg': 'ceph-1f36c880-548c-5a66-856f-2c4e799d94fc'}) 2026-02-08 03:43:17.248021 | orchestrator | changed: [testbed-node-4] 
=> (item={'data': 'osd-block-98a4cb59-dd7a-5ec9-b94d-174a40339046', 'data_vg': 'ceph-98a4cb59-dd7a-5ec9-b94d-174a40339046'}) 2026-02-08 03:43:17.248031 | orchestrator | 2026-02-08 03:43:17.248041 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2026-02-08 03:43:17.248051 | orchestrator | Sunday 08 February 2026 03:43:15 +0000 (0:00:01.758) 0:00:39.975 ******* 2026-02-08 03:43:17.248060 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-1f36c880-548c-5a66-856f-2c4e799d94fc', 'data_vg': 'ceph-1f36c880-548c-5a66-856f-2c4e799d94fc'})  2026-02-08 03:43:17.248071 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-98a4cb59-dd7a-5ec9-b94d-174a40339046', 'data_vg': 'ceph-98a4cb59-dd7a-5ec9-b94d-174a40339046'})  2026-02-08 03:43:17.248081 | orchestrator | skipping: [testbed-node-4] 2026-02-08 03:43:17.248091 | orchestrator | 2026-02-08 03:43:17.248100 | orchestrator | TASK [Create block LVs] ******************************************************** 2026-02-08 03:43:17.248110 | orchestrator | Sunday 08 February 2026 03:43:15 +0000 (0:00:00.180) 0:00:40.155 ******* 2026-02-08 03:43:17.248120 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-1f36c880-548c-5a66-856f-2c4e799d94fc', 'data_vg': 'ceph-1f36c880-548c-5a66-856f-2c4e799d94fc'}) 2026-02-08 03:43:17.248142 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-98a4cb59-dd7a-5ec9-b94d-174a40339046', 'data_vg': 'ceph-98a4cb59-dd7a-5ec9-b94d-174a40339046'}) 2026-02-08 03:43:23.133797 | orchestrator | 2026-02-08 03:43:23.133917 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2026-02-08 03:43:23.133944 | orchestrator | Sunday 08 February 2026 03:43:17 +0000 (0:00:01.386) 0:00:41.542 ******* 2026-02-08 03:43:23.133966 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-1f36c880-548c-5a66-856f-2c4e799d94fc', 'data_vg': 
'ceph-1f36c880-548c-5a66-856f-2c4e799d94fc'})  2026-02-08 03:43:23.134139 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-98a4cb59-dd7a-5ec9-b94d-174a40339046', 'data_vg': 'ceph-98a4cb59-dd7a-5ec9-b94d-174a40339046'})  2026-02-08 03:43:23.134157 | orchestrator | skipping: [testbed-node-4] 2026-02-08 03:43:23.134169 | orchestrator | 2026-02-08 03:43:23.134180 | orchestrator | TASK [Create DB VGs] *********************************************************** 2026-02-08 03:43:23.134191 | orchestrator | Sunday 08 February 2026 03:43:17 +0000 (0:00:00.167) 0:00:41.710 ******* 2026-02-08 03:43:23.134202 | orchestrator | skipping: [testbed-node-4] 2026-02-08 03:43:23.134213 | orchestrator | 2026-02-08 03:43:23.134224 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2026-02-08 03:43:23.134234 | orchestrator | Sunday 08 February 2026 03:43:17 +0000 (0:00:00.139) 0:00:41.849 ******* 2026-02-08 03:43:23.134245 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-1f36c880-548c-5a66-856f-2c4e799d94fc', 'data_vg': 'ceph-1f36c880-548c-5a66-856f-2c4e799d94fc'})  2026-02-08 03:43:23.134271 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-98a4cb59-dd7a-5ec9-b94d-174a40339046', 'data_vg': 'ceph-98a4cb59-dd7a-5ec9-b94d-174a40339046'})  2026-02-08 03:43:23.134282 | orchestrator | skipping: [testbed-node-4] 2026-02-08 03:43:23.134293 | orchestrator | 2026-02-08 03:43:23.134304 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2026-02-08 03:43:23.134315 | orchestrator | Sunday 08 February 2026 03:43:17 +0000 (0:00:00.163) 0:00:42.012 ******* 2026-02-08 03:43:23.134325 | orchestrator | skipping: [testbed-node-4] 2026-02-08 03:43:23.134338 | orchestrator | 2026-02-08 03:43:23.134350 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2026-02-08 03:43:23.134362 | orchestrator | Sunday 
08 February 2026 03:43:17 +0000 (0:00:00.148) 0:00:42.161 ******* 2026-02-08 03:43:23.134374 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-1f36c880-548c-5a66-856f-2c4e799d94fc', 'data_vg': 'ceph-1f36c880-548c-5a66-856f-2c4e799d94fc'})  2026-02-08 03:43:23.134386 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-98a4cb59-dd7a-5ec9-b94d-174a40339046', 'data_vg': 'ceph-98a4cb59-dd7a-5ec9-b94d-174a40339046'})  2026-02-08 03:43:23.134399 | orchestrator | skipping: [testbed-node-4] 2026-02-08 03:43:23.134411 | orchestrator | 2026-02-08 03:43:23.134424 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 2026-02-08 03:43:23.134436 | orchestrator | Sunday 08 February 2026 03:43:18 +0000 (0:00:00.400) 0:00:42.561 ******* 2026-02-08 03:43:23.134448 | orchestrator | skipping: [testbed-node-4] 2026-02-08 03:43:23.134461 | orchestrator | 2026-02-08 03:43:23.134474 | orchestrator | TASK [Print 'Create DB+WAL VGs'] *********************************************** 2026-02-08 03:43:23.134487 | orchestrator | Sunday 08 February 2026 03:43:18 +0000 (0:00:00.145) 0:00:42.707 ******* 2026-02-08 03:43:23.134500 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-1f36c880-548c-5a66-856f-2c4e799d94fc', 'data_vg': 'ceph-1f36c880-548c-5a66-856f-2c4e799d94fc'})  2026-02-08 03:43:23.134512 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-98a4cb59-dd7a-5ec9-b94d-174a40339046', 'data_vg': 'ceph-98a4cb59-dd7a-5ec9-b94d-174a40339046'})  2026-02-08 03:43:23.134525 | orchestrator | skipping: [testbed-node-4] 2026-02-08 03:43:23.134537 | orchestrator | 2026-02-08 03:43:23.134549 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2026-02-08 03:43:23.134562 | orchestrator | Sunday 08 February 2026 03:43:18 +0000 (0:00:00.155) 0:00:42.863 ******* 2026-02-08 03:43:23.134576 | orchestrator | ok: [testbed-node-4] 
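The "Create dict of block VGs -> PVs from ceph_osd_devices" and "Create block VGs/LVs" tasks above derive all volume names from each device's `osd_lvm_uuid`. A minimal sketch of that naming scheme, inferred from the item values printed in this log (not taken from the OSISM playbook source; the helper name is illustrative):

```python
def block_volumes(ceph_osd_devices):
    """Map each OSD device to its derived VG/LV names.

    Naming inferred from the log: VG "ceph-<uuid>", LV "osd-block-<uuid>".
    """
    volumes = []
    for device, props in ceph_osd_devices.items():
        uuid = props["osd_lvm_uuid"]
        volumes.append({
            "device": device,
            "data": f"osd-block-{uuid}",   # logical volume name
            "data_vg": f"ceph-{uuid}",     # volume group name
        })
    return volumes

# The two devices seen on testbed-node-4 in this run:
devices = {
    "sdb": {"osd_lvm_uuid": "1f36c880-548c-5a66-856f-2c4e799d94fc"},
    "sdc": {"osd_lvm_uuid": "98a4cb59-dd7a-5ec9-b94d-174a40339046"},
}
for vol in block_volumes(devices):
    print(vol["device"], vol["data_vg"], vol["data"])
```

This reproduces exactly the `data`/`data_vg` item pairs shown in the "Create block VGs" and "Create block LVs" tasks above.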
2026-02-08 03:43:23.134589 | orchestrator | 2026-02-08 03:43:23.134601 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2026-02-08 03:43:23.134613 | orchestrator | Sunday 08 February 2026 03:43:18 +0000 (0:00:00.196) 0:00:43.059 ******* 2026-02-08 03:43:23.134626 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-1f36c880-548c-5a66-856f-2c4e799d94fc', 'data_vg': 'ceph-1f36c880-548c-5a66-856f-2c4e799d94fc'})  2026-02-08 03:43:23.134646 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-98a4cb59-dd7a-5ec9-b94d-174a40339046', 'data_vg': 'ceph-98a4cb59-dd7a-5ec9-b94d-174a40339046'})  2026-02-08 03:43:23.134658 | orchestrator | skipping: [testbed-node-4] 2026-02-08 03:43:23.134671 | orchestrator | 2026-02-08 03:43:23.134684 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] *************** 2026-02-08 03:43:23.134696 | orchestrator | Sunday 08 February 2026 03:43:18 +0000 (0:00:00.164) 0:00:43.223 ******* 2026-02-08 03:43:23.134707 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-1f36c880-548c-5a66-856f-2c4e799d94fc', 'data_vg': 'ceph-1f36c880-548c-5a66-856f-2c4e799d94fc'})  2026-02-08 03:43:23.134718 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-98a4cb59-dd7a-5ec9-b94d-174a40339046', 'data_vg': 'ceph-98a4cb59-dd7a-5ec9-b94d-174a40339046'})  2026-02-08 03:43:23.134734 | orchestrator | skipping: [testbed-node-4] 2026-02-08 03:43:23.134753 | orchestrator | 2026-02-08 03:43:23.134773 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2026-02-08 03:43:23.134818 | orchestrator | Sunday 08 February 2026 03:43:19 +0000 (0:00:00.166) 0:00:43.390 ******* 2026-02-08 03:43:23.134832 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-1f36c880-548c-5a66-856f-2c4e799d94fc', 'data_vg': 'ceph-1f36c880-548c-5a66-856f-2c4e799d94fc'})  2026-02-08 
03:43:23.134843 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-98a4cb59-dd7a-5ec9-b94d-174a40339046', 'data_vg': 'ceph-98a4cb59-dd7a-5ec9-b94d-174a40339046'})  2026-02-08 03:43:23.134863 | orchestrator | skipping: [testbed-node-4] 2026-02-08 03:43:23.134881 | orchestrator | 2026-02-08 03:43:23.134898 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2026-02-08 03:43:23.134917 | orchestrator | Sunday 08 February 2026 03:43:19 +0000 (0:00:00.170) 0:00:43.560 ******* 2026-02-08 03:43:23.134936 | orchestrator | skipping: [testbed-node-4] 2026-02-08 03:43:23.134953 | orchestrator | 2026-02-08 03:43:23.134972 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2026-02-08 03:43:23.135020 | orchestrator | Sunday 08 February 2026 03:43:19 +0000 (0:00:00.135) 0:00:43.696 ******* 2026-02-08 03:43:23.135041 | orchestrator | skipping: [testbed-node-4] 2026-02-08 03:43:23.135059 | orchestrator | 2026-02-08 03:43:23.135077 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] ***************** 2026-02-08 03:43:23.135096 | orchestrator | Sunday 08 February 2026 03:43:19 +0000 (0:00:00.139) 0:00:43.835 ******* 2026-02-08 03:43:23.135112 | orchestrator | skipping: [testbed-node-4] 2026-02-08 03:43:23.135122 | orchestrator | 2026-02-08 03:43:23.135133 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2026-02-08 03:43:23.135152 | orchestrator | Sunday 08 February 2026 03:43:19 +0000 (0:00:00.138) 0:00:43.974 ******* 2026-02-08 03:43:23.135163 | orchestrator | ok: [testbed-node-4] => { 2026-02-08 03:43:23.135174 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2026-02-08 03:43:23.135185 | orchestrator | } 2026-02-08 03:43:23.135196 | orchestrator | 2026-02-08 03:43:23.135207 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2026-02-08 
03:43:23.135218 | orchestrator | Sunday 08 February 2026 03:43:19 +0000 (0:00:00.194) 0:00:44.168 ******* 2026-02-08 03:43:23.135228 | orchestrator | ok: [testbed-node-4] => { 2026-02-08 03:43:23.135239 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2026-02-08 03:43:23.135250 | orchestrator | } 2026-02-08 03:43:23.135263 | orchestrator | 2026-02-08 03:43:23.135281 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2026-02-08 03:43:23.135299 | orchestrator | Sunday 08 February 2026 03:43:19 +0000 (0:00:00.139) 0:00:44.308 ******* 2026-02-08 03:43:23.135318 | orchestrator | ok: [testbed-node-4] => { 2026-02-08 03:43:23.135336 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2026-02-08 03:43:23.135356 | orchestrator | } 2026-02-08 03:43:23.135367 | orchestrator | 2026-02-08 03:43:23.135378 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ******************** 2026-02-08 03:43:23.135398 | orchestrator | Sunday 08 February 2026 03:43:20 +0000 (0:00:00.389) 0:00:44.698 ******* 2026-02-08 03:43:23.135409 | orchestrator | ok: [testbed-node-4] 2026-02-08 03:43:23.135420 | orchestrator | 2026-02-08 03:43:23.135430 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2026-02-08 03:43:23.135449 | orchestrator | Sunday 08 February 2026 03:43:20 +0000 (0:00:00.520) 0:00:45.218 ******* 2026-02-08 03:43:23.135468 | orchestrator | ok: [testbed-node-4] 2026-02-08 03:43:23.135487 | orchestrator | 2026-02-08 03:43:23.135506 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2026-02-08 03:43:23.135518 | orchestrator | Sunday 08 February 2026 03:43:21 +0000 (0:00:00.512) 0:00:45.731 ******* 2026-02-08 03:43:23.135529 | orchestrator | ok: [testbed-node-4] 2026-02-08 03:43:23.135539 | orchestrator | 2026-02-08 03:43:23.135550 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] 
************************* 2026-02-08 03:43:23.135560 | orchestrator | Sunday 08 February 2026 03:43:21 +0000 (0:00:00.530) 0:00:46.261 ******* 2026-02-08 03:43:23.135571 | orchestrator | ok: [testbed-node-4] 2026-02-08 03:43:23.135582 | orchestrator | 2026-02-08 03:43:23.135592 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2026-02-08 03:43:23.135603 | orchestrator | Sunday 08 February 2026 03:43:22 +0000 (0:00:00.171) 0:00:46.433 ******* 2026-02-08 03:43:23.135613 | orchestrator | skipping: [testbed-node-4] 2026-02-08 03:43:23.135624 | orchestrator | 2026-02-08 03:43:23.135635 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2026-02-08 03:43:23.135645 | orchestrator | Sunday 08 February 2026 03:43:22 +0000 (0:00:00.106) 0:00:46.539 ******* 2026-02-08 03:43:23.135656 | orchestrator | skipping: [testbed-node-4] 2026-02-08 03:43:23.135668 | orchestrator | 2026-02-08 03:43:23.135685 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2026-02-08 03:43:23.135701 | orchestrator | Sunday 08 February 2026 03:43:22 +0000 (0:00:00.119) 0:00:46.658 ******* 2026-02-08 03:43:23.135718 | orchestrator | ok: [testbed-node-4] => { 2026-02-08 03:43:23.135736 | orchestrator |  "vgs_report": { 2026-02-08 03:43:23.135753 | orchestrator |  "vg": [] 2026-02-08 03:43:23.135771 | orchestrator |  } 2026-02-08 03:43:23.135790 | orchestrator | } 2026-02-08 03:43:23.135801 | orchestrator | 2026-02-08 03:43:23.135812 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2026-02-08 03:43:23.135823 | orchestrator | Sunday 08 February 2026 03:43:22 +0000 (0:00:00.157) 0:00:46.816 ******* 2026-02-08 03:43:23.135833 | orchestrator | skipping: [testbed-node-4] 2026-02-08 03:43:23.135844 | orchestrator | 2026-02-08 03:43:23.135863 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] 
************************ 2026-02-08 03:43:23.135880 | orchestrator | Sunday 08 February 2026 03:43:22 +0000 (0:00:00.170) 0:00:46.986 ******* 2026-02-08 03:43:23.135898 | orchestrator | skipping: [testbed-node-4] 2026-02-08 03:43:23.135916 | orchestrator | 2026-02-08 03:43:23.135935 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2026-02-08 03:43:23.135947 | orchestrator | Sunday 08 February 2026 03:43:22 +0000 (0:00:00.151) 0:00:47.138 ******* 2026-02-08 03:43:23.135958 | orchestrator | skipping: [testbed-node-4] 2026-02-08 03:43:23.135968 | orchestrator | 2026-02-08 03:43:23.136015 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2026-02-08 03:43:23.136039 | orchestrator | Sunday 08 February 2026 03:43:22 +0000 (0:00:00.132) 0:00:47.271 ******* 2026-02-08 03:43:23.136056 | orchestrator | skipping: [testbed-node-4] 2026-02-08 03:43:23.136076 | orchestrator | 2026-02-08 03:43:23.136109 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2026-02-08 03:43:28.367122 | orchestrator | Sunday 08 February 2026 03:43:23 +0000 (0:00:00.161) 0:00:47.433 ******* 2026-02-08 03:43:28.367217 | orchestrator | skipping: [testbed-node-4] 2026-02-08 03:43:28.367229 | orchestrator | 2026-02-08 03:43:28.367238 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2026-02-08 03:43:28.367247 | orchestrator | Sunday 08 February 2026 03:43:23 +0000 (0:00:00.392) 0:00:47.825 ******* 2026-02-08 03:43:28.367279 | orchestrator | skipping: [testbed-node-4] 2026-02-08 03:43:28.367299 | orchestrator | 2026-02-08 03:43:28.367315 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2026-02-08 03:43:28.367331 | orchestrator | Sunday 08 February 2026 03:43:23 +0000 (0:00:00.156) 0:00:47.982 ******* 2026-02-08 03:43:28.367344 | orchestrator | skipping: [testbed-node-4] 
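The "Gather ... VGs with total and available size", "Combine JSON from _db/wal/db_wal_vgs_cmd_output", and "Fail if size ... > available" tasks above gather LVM reports and check that requested LV sizes fit. A sketch of that flow, assuming the playbook parses `vgs --reportformat json` output (`vg_name`/`vg_size`/`vg_free` follow the LVM JSON report format; the helper names are illustrative, not the playbook's own):

```python
import json

def combine_vg_reports(*cmd_outputs):
    """Merge the 'vg' arrays of several `vgs --reportformat json` outputs."""
    merged = []
    for raw in cmd_outputs:
        for report in json.loads(raw)["report"]:
            merged.extend(report.get("vg", []))
    return {"vg": merged}

def check_fits(vgs_report, vg_name, needed_bytes):
    """True if the requested LV size fits in the named VG's free space."""
    for vg in vgs_report["vg"]:
        if vg["vg_name"] == vg_name:
            return needed_bytes <= int(vg["vg_free"].rstrip("B"))
    return False

# Hypothetical command outputs; in this run all three were empty,
# hence the "vgs_report": {"vg": []} printed above.
db_out = ('{"report": [{"vg": [{"vg_name": "ceph-db", '
          '"vg_size": "100000000000B", "vg_free": "40000000000B"}]}]}')
wal_out = '{"report": [{"vg": []}]}'
report = combine_vg_reports(db_out, wal_out)
print(check_fits(report, "ceph-db", 32 * 1024**3))
```

With no `ceph_db_devices`/`ceph_wal_devices` configured, the combined report is empty and every size check is skipped, matching the run above.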
2026-02-08 03:43:28.367358 | orchestrator | 2026-02-08 03:43:28.367371 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2026-02-08 03:43:28.367385 | orchestrator | Sunday 08 February 2026 03:43:23 +0000 (0:00:00.155) 0:00:48.137 ******* 2026-02-08 03:43:28.367399 | orchestrator | skipping: [testbed-node-4] 2026-02-08 03:43:28.367413 | orchestrator | 2026-02-08 03:43:28.367427 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2026-02-08 03:43:28.367441 | orchestrator | Sunday 08 February 2026 03:43:23 +0000 (0:00:00.147) 0:00:48.284 ******* 2026-02-08 03:43:28.367455 | orchestrator | skipping: [testbed-node-4] 2026-02-08 03:43:28.367470 | orchestrator | 2026-02-08 03:43:28.367484 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2026-02-08 03:43:28.367517 | orchestrator | Sunday 08 February 2026 03:43:24 +0000 (0:00:00.193) 0:00:48.478 ******* 2026-02-08 03:43:28.367534 | orchestrator | skipping: [testbed-node-4] 2026-02-08 03:43:28.367549 | orchestrator | 2026-02-08 03:43:28.367565 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2026-02-08 03:43:28.367580 | orchestrator | Sunday 08 February 2026 03:43:24 +0000 (0:00:00.151) 0:00:48.629 ******* 2026-02-08 03:43:28.367590 | orchestrator | skipping: [testbed-node-4] 2026-02-08 03:43:28.367597 | orchestrator | 2026-02-08 03:43:28.367606 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2026-02-08 03:43:28.367616 | orchestrator | Sunday 08 February 2026 03:43:24 +0000 (0:00:00.148) 0:00:48.777 ******* 2026-02-08 03:43:28.367625 | orchestrator | skipping: [testbed-node-4] 2026-02-08 03:43:28.367635 | orchestrator | 2026-02-08 03:43:28.367644 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2026-02-08 03:43:28.367653 | orchestrator | 
Sunday 08 February 2026 03:43:24 +0000 (0:00:00.162) 0:00:48.940 ******* 2026-02-08 03:43:28.367663 | orchestrator | skipping: [testbed-node-4] 2026-02-08 03:43:28.367672 | orchestrator | 2026-02-08 03:43:28.367681 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2026-02-08 03:43:28.367690 | orchestrator | Sunday 08 February 2026 03:43:24 +0000 (0:00:00.158) 0:00:49.098 ******* 2026-02-08 03:43:28.367699 | orchestrator | skipping: [testbed-node-4] 2026-02-08 03:43:28.367708 | orchestrator | 2026-02-08 03:43:28.367717 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2026-02-08 03:43:28.367726 | orchestrator | Sunday 08 February 2026 03:43:24 +0000 (0:00:00.146) 0:00:49.245 ******* 2026-02-08 03:43:28.367737 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-1f36c880-548c-5a66-856f-2c4e799d94fc', 'data_vg': 'ceph-1f36c880-548c-5a66-856f-2c4e799d94fc'})  2026-02-08 03:43:28.367748 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-98a4cb59-dd7a-5ec9-b94d-174a40339046', 'data_vg': 'ceph-98a4cb59-dd7a-5ec9-b94d-174a40339046'})  2026-02-08 03:43:28.367770 | orchestrator | skipping: [testbed-node-4] 2026-02-08 03:43:28.367780 | orchestrator | 2026-02-08 03:43:28.367789 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2026-02-08 03:43:28.367799 | orchestrator | Sunday 08 February 2026 03:43:25 +0000 (0:00:00.177) 0:00:49.422 ******* 2026-02-08 03:43:28.367808 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-1f36c880-548c-5a66-856f-2c4e799d94fc', 'data_vg': 'ceph-1f36c880-548c-5a66-856f-2c4e799d94fc'})  2026-02-08 03:43:28.367817 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-98a4cb59-dd7a-5ec9-b94d-174a40339046', 'data_vg': 'ceph-98a4cb59-dd7a-5ec9-b94d-174a40339046'})  2026-02-08 03:43:28.367826 | orchestrator | skipping: 
[testbed-node-4] 2026-02-08 03:43:28.367835 | orchestrator | 2026-02-08 03:43:28.367845 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2026-02-08 03:43:28.367862 | orchestrator | Sunday 08 February 2026 03:43:25 +0000 (0:00:00.154) 0:00:49.577 ******* 2026-02-08 03:43:28.367872 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-1f36c880-548c-5a66-856f-2c4e799d94fc', 'data_vg': 'ceph-1f36c880-548c-5a66-856f-2c4e799d94fc'})  2026-02-08 03:43:28.367882 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-98a4cb59-dd7a-5ec9-b94d-174a40339046', 'data_vg': 'ceph-98a4cb59-dd7a-5ec9-b94d-174a40339046'})  2026-02-08 03:43:28.367891 | orchestrator | skipping: [testbed-node-4] 2026-02-08 03:43:28.367901 | orchestrator | 2026-02-08 03:43:28.367910 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] ***************************** 2026-02-08 03:43:28.367919 | orchestrator | Sunday 08 February 2026 03:43:25 +0000 (0:00:00.428) 0:00:50.005 ******* 2026-02-08 03:43:28.367927 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-1f36c880-548c-5a66-856f-2c4e799d94fc', 'data_vg': 'ceph-1f36c880-548c-5a66-856f-2c4e799d94fc'})  2026-02-08 03:43:28.367936 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-98a4cb59-dd7a-5ec9-b94d-174a40339046', 'data_vg': 'ceph-98a4cb59-dd7a-5ec9-b94d-174a40339046'})  2026-02-08 03:43:28.367944 | orchestrator | skipping: [testbed-node-4] 2026-02-08 03:43:28.367952 | orchestrator | 2026-02-08 03:43:28.367978 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2026-02-08 03:43:28.368003 | orchestrator | Sunday 08 February 2026 03:43:25 +0000 (0:00:00.173) 0:00:50.179 ******* 2026-02-08 03:43:28.368013 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-1f36c880-548c-5a66-856f-2c4e799d94fc', 'data_vg': 
'ceph-1f36c880-548c-5a66-856f-2c4e799d94fc'})
2026-02-08 03:43:28.368021 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-98a4cb59-dd7a-5ec9-b94d-174a40339046', 'data_vg': 'ceph-98a4cb59-dd7a-5ec9-b94d-174a40339046'})
2026-02-08 03:43:28.368029 | orchestrator | skipping: [testbed-node-4]
2026-02-08 03:43:28.368037 | orchestrator |
2026-02-08 03:43:28.368045 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] **************************
2026-02-08 03:43:28.368053 | orchestrator | Sunday 08 February 2026 03:43:26 +0000 (0:00:00.176) 0:00:50.356 *******
2026-02-08 03:43:28.368061 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-1f36c880-548c-5a66-856f-2c4e799d94fc', 'data_vg': 'ceph-1f36c880-548c-5a66-856f-2c4e799d94fc'})
2026-02-08 03:43:28.368069 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-98a4cb59-dd7a-5ec9-b94d-174a40339046', 'data_vg': 'ceph-98a4cb59-dd7a-5ec9-b94d-174a40339046'})
2026-02-08 03:43:28.368077 | orchestrator | skipping: [testbed-node-4]
2026-02-08 03:43:28.368085 | orchestrator |
2026-02-08 03:43:28.368098 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] ***********************************
2026-02-08 03:43:28.368106 | orchestrator | Sunday 08 February 2026 03:43:26 +0000 (0:00:00.184) 0:00:50.540 *******
2026-02-08 03:43:28.368114 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-1f36c880-548c-5a66-856f-2c4e799d94fc', 'data_vg': 'ceph-1f36c880-548c-5a66-856f-2c4e799d94fc'})
2026-02-08 03:43:28.368122 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-98a4cb59-dd7a-5ec9-b94d-174a40339046', 'data_vg': 'ceph-98a4cb59-dd7a-5ec9-b94d-174a40339046'})
2026-02-08 03:43:28.368130 | orchestrator | skipping: [testbed-node-4]
2026-02-08 03:43:28.368138 | orchestrator |
2026-02-08 03:43:28.368146 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] ***************************
2026-02-08 03:43:28.368154 | orchestrator | Sunday 08 February 2026 03:43:26 +0000 (0:00:00.190) 0:00:50.730 *******
2026-02-08 03:43:28.368162 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-1f36c880-548c-5a66-856f-2c4e799d94fc', 'data_vg': 'ceph-1f36c880-548c-5a66-856f-2c4e799d94fc'})
2026-02-08 03:43:28.368170 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-98a4cb59-dd7a-5ec9-b94d-174a40339046', 'data_vg': 'ceph-98a4cb59-dd7a-5ec9-b94d-174a40339046'})
2026-02-08 03:43:28.368178 | orchestrator | skipping: [testbed-node-4]
2026-02-08 03:43:28.368193 | orchestrator |
2026-02-08 03:43:28.368201 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ********************************
2026-02-08 03:43:28.368209 | orchestrator | Sunday 08 February 2026 03:43:26 +0000 (0:00:00.176) 0:00:50.907 *******
2026-02-08 03:43:28.368217 | orchestrator | ok: [testbed-node-4]
2026-02-08 03:43:28.368225 | orchestrator |
2026-02-08 03:43:28.368233 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ********************************
2026-02-08 03:43:28.368241 | orchestrator | Sunday 08 February 2026 03:43:27 +0000 (0:00:00.517) 0:00:51.425 *******
2026-02-08 03:43:28.368249 | orchestrator | ok: [testbed-node-4]
2026-02-08 03:43:28.368257 | orchestrator |
2026-02-08 03:43:28.368265 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] ***********************
2026-02-08 03:43:28.368273 | orchestrator | Sunday 08 February 2026 03:43:27 +0000 (0:00:00.537) 0:00:51.963 *******
2026-02-08 03:43:28.368281 | orchestrator | ok: [testbed-node-4]
2026-02-08 03:43:28.368289 | orchestrator |
2026-02-08 03:43:28.368297 | orchestrator | TASK [Create list of VG/LV names] **********************************************
2026-02-08 03:43:28.368305 | orchestrator | Sunday 08 February 2026 03:43:27 +0000 (0:00:00.178) 0:00:52.141 *******
2026-02-08 03:43:28.368313 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-1f36c880-548c-5a66-856f-2c4e799d94fc', 'vg_name': 'ceph-1f36c880-548c-5a66-856f-2c4e799d94fc'})
2026-02-08 03:43:28.368322 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-98a4cb59-dd7a-5ec9-b94d-174a40339046', 'vg_name': 'ceph-98a4cb59-dd7a-5ec9-b94d-174a40339046'})
2026-02-08 03:43:28.368330 | orchestrator |
2026-02-08 03:43:28.368338 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] **********************
2026-02-08 03:43:28.368346 | orchestrator | Sunday 08 February 2026 03:43:28 +0000 (0:00:00.181) 0:00:52.323 *******
2026-02-08 03:43:28.368354 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-1f36c880-548c-5a66-856f-2c4e799d94fc', 'data_vg': 'ceph-1f36c880-548c-5a66-856f-2c4e799d94fc'})
2026-02-08 03:43:28.368362 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-98a4cb59-dd7a-5ec9-b94d-174a40339046', 'data_vg': 'ceph-98a4cb59-dd7a-5ec9-b94d-174a40339046'})
2026-02-08 03:43:28.368370 | orchestrator | skipping: [testbed-node-4]
2026-02-08 03:43:28.368378 | orchestrator |
2026-02-08 03:43:28.368385 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] *************************
2026-02-08 03:43:28.368393 | orchestrator | Sunday 08 February 2026 03:43:28 +0000 (0:00:00.179) 0:00:52.502 *******
2026-02-08 03:43:28.368402 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-1f36c880-548c-5a66-856f-2c4e799d94fc', 'data_vg': 'ceph-1f36c880-548c-5a66-856f-2c4e799d94fc'})
2026-02-08 03:43:28.368415 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-98a4cb59-dd7a-5ec9-b94d-174a40339046', 'data_vg': 'ceph-98a4cb59-dd7a-5ec9-b94d-174a40339046'})
2026-02-08 03:43:34.827719 | orchestrator | skipping: [testbed-node-4]
2026-02-08 03:43:34.827835 | orchestrator |
2026-02-08 03:43:34.827852 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************
2026-02-08 03:43:34.827865 | orchestrator | Sunday 08 February 2026 03:43:28 +0000 (0:00:00.164) 0:00:52.667 *******
2026-02-08 03:43:34.827877 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-1f36c880-548c-5a66-856f-2c4e799d94fc', 'data_vg': 'ceph-1f36c880-548c-5a66-856f-2c4e799d94fc'})
2026-02-08 03:43:34.827890 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-98a4cb59-dd7a-5ec9-b94d-174a40339046', 'data_vg': 'ceph-98a4cb59-dd7a-5ec9-b94d-174a40339046'})
2026-02-08 03:43:34.827901 | orchestrator | skipping: [testbed-node-4]
2026-02-08 03:43:34.827912 | orchestrator |
2026-02-08 03:43:34.827923 | orchestrator | TASK [Print LVM report data] ***************************************************
2026-02-08 03:43:34.827934 | orchestrator | Sunday 08 February 2026 03:43:28 +0000 (0:00:00.320) 0:00:52.987 *******
2026-02-08 03:43:34.827945 | orchestrator | ok: [testbed-node-4] => {
2026-02-08 03:43:34.827956 | orchestrator |  "lvm_report": {
2026-02-08 03:43:34.828098 | orchestrator |  "lv": [
2026-02-08 03:43:34.828114 | orchestrator |  {
2026-02-08 03:43:34.828126 | orchestrator |  "lv_name": "osd-block-1f36c880-548c-5a66-856f-2c4e799d94fc",
2026-02-08 03:43:34.828138 | orchestrator |  "vg_name": "ceph-1f36c880-548c-5a66-856f-2c4e799d94fc"
2026-02-08 03:43:34.828163 | orchestrator |  },
2026-02-08 03:43:34.828175 | orchestrator |  {
2026-02-08 03:43:34.828185 | orchestrator |  "lv_name": "osd-block-98a4cb59-dd7a-5ec9-b94d-174a40339046",
2026-02-08 03:43:34.828196 | orchestrator |  "vg_name": "ceph-98a4cb59-dd7a-5ec9-b94d-174a40339046"
2026-02-08 03:43:34.828207 | orchestrator |  }
2026-02-08 03:43:34.828218 | orchestrator |  ],
2026-02-08 03:43:34.828228 | orchestrator |  "pv": [
2026-02-08 03:43:34.828240 | orchestrator |  {
2026-02-08 03:43:34.828253 | orchestrator |  "pv_name": "/dev/sdb",
2026-02-08 03:43:34.828266 | orchestrator |  "vg_name": "ceph-1f36c880-548c-5a66-856f-2c4e799d94fc"
2026-02-08 03:43:34.828279 | orchestrator |  },
2026-02-08 03:43:34.828292 | orchestrator |  {
2026-02-08 03:43:34.828304 | orchestrator |  "pv_name": "/dev/sdc",
2026-02-08 03:43:34.828317 | orchestrator |  "vg_name": "ceph-98a4cb59-dd7a-5ec9-b94d-174a40339046"
2026-02-08 03:43:34.828331 | orchestrator |  }
2026-02-08 03:43:34.828343 | orchestrator |  ]
2026-02-08 03:43:34.828356 | orchestrator |  }
2026-02-08 03:43:34.828369 | orchestrator | }
2026-02-08 03:43:34.828381 | orchestrator |
2026-02-08 03:43:34.828394 | orchestrator | PLAY [Ceph create LVM devices] *************************************************
2026-02-08 03:43:34.828407 | orchestrator |
2026-02-08 03:43:34.828420 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2026-02-08 03:43:34.828433 | orchestrator | Sunday 08 February 2026 03:43:28 +0000 (0:00:00.300) 0:00:53.288 *******
2026-02-08 03:43:34.828446 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)]
2026-02-08 03:43:34.828459 | orchestrator |
2026-02-08 03:43:34.828472 | orchestrator | TASK [Get initial list of available block devices] *****************************
2026-02-08 03:43:34.828485 | orchestrator | Sunday 08 February 2026 03:43:29 +0000 (0:00:00.245) 0:00:53.557 *******
2026-02-08 03:43:34.828498 | orchestrator | ok: [testbed-node-5]
2026-02-08 03:43:34.828510 | orchestrator |
2026-02-08 03:43:34.828524 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-08 03:43:34.828537 | orchestrator | Sunday 08 February 2026 03:43:29 +0000 (0:00:00.245) 0:00:53.803 *******
2026-02-08 03:43:34.828550 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0)
2026-02-08 03:43:34.828563 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1)
2026-02-08 03:43:34.828575 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2)
2026-02-08 03:43:34.828588 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3)
2026-02-08 03:43:34.828601 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4)
2026-02-08 03:43:34.828614 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5)
2026-02-08 03:43:34.828624 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6)
2026-02-08 03:43:34.828635 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7)
2026-02-08 03:43:34.828646 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda)
2026-02-08 03:43:34.828657 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb)
2026-02-08 03:43:34.828668 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc)
2026-02-08 03:43:34.828678 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd)
2026-02-08 03:43:34.828694 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0)
2026-02-08 03:43:34.828726 | orchestrator |
2026-02-08 03:43:34.828744 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-08 03:43:34.828764 | orchestrator | Sunday 08 February 2026 03:43:29 +0000 (0:00:00.421) 0:00:54.224 *******
2026-02-08 03:43:34.828783 | orchestrator | skipping: [testbed-node-5]
2026-02-08 03:43:34.828802 | orchestrator |
2026-02-08 03:43:34.828820 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-08 03:43:34.828837 | orchestrator | Sunday 08 February 2026 03:43:30 +0000 (0:00:00.217) 0:00:54.442 *******
2026-02-08 03:43:34.828856 | orchestrator | skipping: [testbed-node-5]
2026-02-08 03:43:34.828875 | orchestrator |
2026-02-08 03:43:34.828894 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-08 03:43:34.828938 | orchestrator | Sunday 08 February 2026 03:43:30 +0000 (0:00:00.221) 0:00:54.663 *******
2026-02-08 03:43:34.828951 | orchestrator | skipping: [testbed-node-5]
2026-02-08 03:43:34.828962 | orchestrator |
2026-02-08 03:43:34.828972 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-08 03:43:34.828983 | orchestrator | Sunday 08 February 2026 03:43:30 +0000 (0:00:00.208) 0:00:54.871 *******
2026-02-08 03:43:34.829017 | orchestrator | skipping: [testbed-node-5]
2026-02-08 03:43:34.829029 | orchestrator |
2026-02-08 03:43:34.829040 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-08 03:43:34.829051 | orchestrator | Sunday 08 February 2026 03:43:31 +0000 (0:00:00.531) 0:00:55.403 *******
2026-02-08 03:43:34.829062 | orchestrator | skipping: [testbed-node-5]
2026-02-08 03:43:34.829073 | orchestrator |
2026-02-08 03:43:34.829084 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-08 03:43:34.829095 | orchestrator | Sunday 08 February 2026 03:43:31 +0000 (0:00:00.197) 0:00:55.600 *******
2026-02-08 03:43:34.829106 | orchestrator | skipping: [testbed-node-5]
2026-02-08 03:43:34.829117 | orchestrator |
2026-02-08 03:43:34.829127 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-08 03:43:34.829138 | orchestrator | Sunday 08 February 2026 03:43:31 +0000 (0:00:00.222) 0:00:55.823 *******
2026-02-08 03:43:34.829149 | orchestrator | skipping: [testbed-node-5]
2026-02-08 03:43:34.829160 | orchestrator |
2026-02-08 03:43:34.829171 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-08 03:43:34.829182 | orchestrator | Sunday 08 February 2026 03:43:31 +0000 (0:00:00.187) 0:00:56.010 *******
2026-02-08 03:43:34.829193 | orchestrator | skipping: [testbed-node-5]
2026-02-08 03:43:34.829204 | orchestrator |
2026-02-08 03:43:34.829215 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-08 03:43:34.829225 | orchestrator | Sunday 08 February 2026 03:43:31 +0000 (0:00:00.220) 0:00:56.231 *******
2026-02-08 03:43:34.829236 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_0b6d2541-fe07-44e8-aadf-a529695f9c1d)
2026-02-08 03:43:34.829249 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_0b6d2541-fe07-44e8-aadf-a529695f9c1d)
2026-02-08 03:43:34.829260 | orchestrator |
2026-02-08 03:43:34.829271 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-08 03:43:34.829282 | orchestrator | Sunday 08 February 2026 03:43:32 +0000 (0:00:00.409) 0:00:56.641 *******
2026-02-08 03:43:34.829293 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_fd096023-3e18-4205-a743-fc49c7d9ed02)
2026-02-08 03:43:34.829304 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_fd096023-3e18-4205-a743-fc49c7d9ed02)
2026-02-08 03:43:34.829315 | orchestrator |
2026-02-08 03:43:34.829326 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-08 03:43:34.829336 | orchestrator | Sunday 08 February 2026 03:43:32 +0000 (0:00:00.485) 0:00:57.127 *******
2026-02-08 03:43:34.829347 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_88e353e1-d5f5-455b-9174-972f0fde258a)
2026-02-08 03:43:34.829358 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_88e353e1-d5f5-455b-9174-972f0fde258a)
2026-02-08 03:43:34.829377 | orchestrator |
2026-02-08 03:43:34.829388 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-08 03:43:34.829399 | orchestrator | Sunday 08 February 2026 03:43:33 +0000 (0:00:00.470) 0:00:57.597 *******
2026-02-08 03:43:34.829410 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_380fccde-fc16-4afd-8581-e221e230c62f)
2026-02-08 03:43:34.829421 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_380fccde-fc16-4afd-8581-e221e230c62f)
2026-02-08 03:43:34.829432 | orchestrator |
2026-02-08 03:43:34.829443 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-08 03:43:34.829454 | orchestrator | Sunday 08 February 2026 03:43:33 +0000 (0:00:00.488) 0:00:58.086 *******
2026-02-08 03:43:34.829465 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001)
2026-02-08 03:43:34.829476 | orchestrator |
2026-02-08 03:43:34.829487 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-08 03:43:34.829497 | orchestrator | Sunday 08 February 2026 03:43:34 +0000 (0:00:00.359) 0:00:58.446 *******
2026-02-08 03:43:34.829508 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0)
2026-02-08 03:43:34.829616 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1)
2026-02-08 03:43:34.829636 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2)
2026-02-08 03:43:34.829706 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3)
2026-02-08 03:43:34.829720 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4)
2026-02-08 03:43:34.829731 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5)
2026-02-08 03:43:34.829742 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6)
2026-02-08 03:43:34.829753 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7)
2026-02-08 03:43:34.829764 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda)
2026-02-08 03:43:34.829774 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb)
2026-02-08 03:43:34.829786 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc)
2026-02-08 03:43:34.829846 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd)
2026-02-08 03:43:44.733889 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0)
2026-02-08 03:43:44.733972 | orchestrator |
2026-02-08 03:43:44.733983 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-08 03:43:44.733990 | orchestrator | Sunday 08 February 2026 03:43:34 +0000 (0:00:00.673) 0:00:59.119 *******
2026-02-08 03:43:44.733998 | orchestrator | skipping: [testbed-node-5]
2026-02-08 03:43:44.734069 | orchestrator |
2026-02-08 03:43:44.734082 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-08 03:43:44.734094 | orchestrator | Sunday 08 February 2026 03:43:35 +0000 (0:00:00.221) 0:00:59.340 *******
2026-02-08 03:43:44.734104 | orchestrator | skipping: [testbed-node-5]
2026-02-08 03:43:44.734115 | orchestrator |
2026-02-08 03:43:44.734125 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-08 03:43:44.734136 | orchestrator | Sunday 08 February 2026 03:43:35 +0000 (0:00:00.228) 0:00:59.569 *******
2026-02-08 03:43:44.734147 | orchestrator | skipping: [testbed-node-5]
2026-02-08 03:43:44.734159 | orchestrator |
2026-02-08 03:43:44.734170 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-08 03:43:44.734182 | orchestrator | Sunday 08 February 2026 03:43:35 +0000 (0:00:00.257) 0:00:59.827 *******
2026-02-08 03:43:44.734193 | orchestrator | skipping: [testbed-node-5]
2026-02-08 03:43:44.734228 | orchestrator |
2026-02-08 03:43:44.734254 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-08 03:43:44.734268 | orchestrator | Sunday 08 February 2026 03:43:35 +0000 (0:00:00.237) 0:01:00.064 *******
2026-02-08 03:43:44.734281 | orchestrator | skipping: [testbed-node-5]
2026-02-08 03:43:44.734294 | orchestrator |
2026-02-08 03:43:44.734307 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-08 03:43:44.734316 | orchestrator | Sunday 08 February 2026 03:43:36 +0000 (0:00:00.269) 0:01:00.333 *******
2026-02-08 03:43:44.734322 | orchestrator | skipping: [testbed-node-5]
2026-02-08 03:43:44.734329 | orchestrator |
2026-02-08 03:43:44.734335 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-08 03:43:44.734342 | orchestrator | Sunday 08 February 2026 03:43:36 +0000 (0:00:00.240) 0:01:00.574 *******
2026-02-08 03:43:44.734349 | orchestrator | skipping: [testbed-node-5]
2026-02-08 03:43:44.734376 | orchestrator |
2026-02-08 03:43:44.734383 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-08 03:43:44.734390 | orchestrator | Sunday 08 February 2026 03:43:36 +0000 (0:00:00.232) 0:01:00.806 *******
2026-02-08 03:43:44.734397 | orchestrator | skipping: [testbed-node-5]
2026-02-08 03:43:44.734403 | orchestrator |
2026-02-08 03:43:44.734410 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-08 03:43:44.734417 | orchestrator | Sunday 08 February 2026 03:43:36 +0000 (0:00:00.248) 0:01:01.054 *******
2026-02-08 03:43:44.734424 | orchestrator | ok: [testbed-node-5] => (item=sda1)
2026-02-08 03:43:44.734431 | orchestrator | ok: [testbed-node-5] => (item=sda14)
2026-02-08 03:43:44.734438 | orchestrator | ok: [testbed-node-5] => (item=sda15)
2026-02-08 03:43:44.734445 | orchestrator | ok: [testbed-node-5] => (item=sda16)
2026-02-08 03:43:44.734452 | orchestrator |
2026-02-08 03:43:44.734459 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-08 03:43:44.734465 | orchestrator | Sunday 08 February 2026 03:43:37 +0000 (0:00:01.003) 0:01:02.058 *******
2026-02-08 03:43:44.734472 | orchestrator | skipping: [testbed-node-5]
2026-02-08 03:43:44.734479 | orchestrator |
2026-02-08 03:43:44.734485 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-08 03:43:44.734492 | orchestrator | Sunday 08 February 2026 03:43:38 +0000 (0:00:00.796) 0:01:02.854 *******
2026-02-08 03:43:44.734499 | orchestrator | skipping: [testbed-node-5]
2026-02-08 03:43:44.734505 | orchestrator |
2026-02-08 03:43:44.734512 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-08 03:43:44.734519 | orchestrator | Sunday 08 February 2026 03:43:38 +0000 (0:00:00.248) 0:01:03.102 *******
2026-02-08 03:43:44.734525 | orchestrator | skipping: [testbed-node-5]
2026-02-08 03:43:44.734532 | orchestrator |
2026-02-08 03:43:44.734539 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-08 03:43:44.734545 | orchestrator | Sunday 08 February 2026 03:43:39 +0000 (0:00:00.237) 0:01:03.340 *******
2026-02-08 03:43:44.734552 | orchestrator | skipping: [testbed-node-5]
2026-02-08 03:43:44.734558 | orchestrator |
2026-02-08 03:43:44.734565 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] *******************
2026-02-08 03:43:44.734572 | orchestrator | Sunday 08 February 2026 03:43:39 +0000 (0:00:00.242) 0:01:03.583 *******
2026-02-08 03:43:44.734578 | orchestrator | skipping: [testbed-node-5]
2026-02-08 03:43:44.734585 | orchestrator |
2026-02-08 03:43:44.734592 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] *******************
2026-02-08 03:43:44.734598 | orchestrator | Sunday 08 February 2026 03:43:39 +0000 (0:00:00.151) 0:01:03.734 *******
2026-02-08 03:43:44.734605 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '7ad89cb8-326d-5a7d-8045-6e04c12be05a'}})
2026-02-08 03:43:44.734613 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'b3e05e81-e469-5668-9a53-5e8f92025307'}})
2026-02-08 03:43:44.734619 | orchestrator |
2026-02-08 03:43:44.734626 | orchestrator | TASK [Create block VGs] ********************************************************
2026-02-08 03:43:44.734640 | orchestrator | Sunday 08 February 2026 03:43:39 +0000 (0:00:00.205) 0:01:03.939 *******
2026-02-08 03:43:44.734649 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-7ad89cb8-326d-5a7d-8045-6e04c12be05a', 'data_vg': 'ceph-7ad89cb8-326d-5a7d-8045-6e04c12be05a'})
2026-02-08 03:43:44.734662 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-b3e05e81-e469-5668-9a53-5e8f92025307', 'data_vg': 'ceph-b3e05e81-e469-5668-9a53-5e8f92025307'})
2026-02-08 03:43:44.734672 | orchestrator |
2026-02-08 03:43:44.734682 | orchestrator | TASK [Print 'Create block VGs'] ************************************************
2026-02-08 03:43:44.734712 | orchestrator | Sunday 08 February 2026 03:43:41 +0000 (0:00:01.833) 0:01:05.773 *******
2026-02-08 03:43:44.734724 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-7ad89cb8-326d-5a7d-8045-6e04c12be05a', 'data_vg': 'ceph-7ad89cb8-326d-5a7d-8045-6e04c12be05a'})
2026-02-08 03:43:44.734735 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-b3e05e81-e469-5668-9a53-5e8f92025307', 'data_vg': 'ceph-b3e05e81-e469-5668-9a53-5e8f92025307'})
2026-02-08 03:43:44.734747 | orchestrator | skipping: [testbed-node-5]
2026-02-08 03:43:44.734759 | orchestrator |
2026-02-08 03:43:44.734770 | orchestrator | TASK [Create block LVs] ********************************************************
2026-02-08 03:43:44.734780 | orchestrator | Sunday 08 February 2026 03:43:41 +0000 (0:00:00.172) 0:01:05.945 *******
2026-02-08 03:43:44.734791 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-7ad89cb8-326d-5a7d-8045-6e04c12be05a', 'data_vg': 'ceph-7ad89cb8-326d-5a7d-8045-6e04c12be05a'})
2026-02-08 03:43:44.734801 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-b3e05e81-e469-5668-9a53-5e8f92025307', 'data_vg': 'ceph-b3e05e81-e469-5668-9a53-5e8f92025307'})
2026-02-08 03:43:44.734813 | orchestrator |
2026-02-08 03:43:44.734824 | orchestrator | TASK [Print 'Create block LVs'] ************************************************
2026-02-08 03:43:44.734843 | orchestrator | Sunday 08 February 2026 03:43:42 +0000 (0:00:01.350) 0:01:07.296 *******
2026-02-08 03:43:44.734855 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-7ad89cb8-326d-5a7d-8045-6e04c12be05a', 'data_vg': 'ceph-7ad89cb8-326d-5a7d-8045-6e04c12be05a'})
2026-02-08 03:43:44.734868 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-b3e05e81-e469-5668-9a53-5e8f92025307', 'data_vg': 'ceph-b3e05e81-e469-5668-9a53-5e8f92025307'})
2026-02-08 03:43:44.734882 | orchestrator | skipping: [testbed-node-5]
2026-02-08 03:43:44.734894 | orchestrator |
2026-02-08 03:43:44.734901 | orchestrator | TASK [Create DB VGs] ***********************************************************
2026-02-08 03:43:44.734908 | orchestrator | Sunday 08 February 2026 03:43:43 +0000 (0:00:00.178) 0:01:07.474 *******
2026-02-08 03:43:44.734915 | orchestrator | skipping: [testbed-node-5]
2026-02-08 03:43:44.734921 | orchestrator |
2026-02-08 03:43:44.734928 | orchestrator | TASK [Print 'Create DB VGs'] ***************************************************
2026-02-08 03:43:44.734934 | orchestrator | Sunday 08 February 2026 03:43:43 +0000 (0:00:00.146) 0:01:07.621 *******
2026-02-08 03:43:44.734941 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-7ad89cb8-326d-5a7d-8045-6e04c12be05a', 'data_vg': 'ceph-7ad89cb8-326d-5a7d-8045-6e04c12be05a'})
2026-02-08 03:43:44.734948 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-b3e05e81-e469-5668-9a53-5e8f92025307', 'data_vg': 'ceph-b3e05e81-e469-5668-9a53-5e8f92025307'})
2026-02-08 03:43:44.734954 | orchestrator | skipping: [testbed-node-5]
2026-02-08 03:43:44.734961 | orchestrator |
2026-02-08 03:43:44.734967 | orchestrator | TASK [Create WAL VGs] **********************************************************
2026-02-08 03:43:44.734974 | orchestrator | Sunday 08 February 2026 03:43:43 +0000 (0:00:00.406) 0:01:08.027 *******
2026-02-08 03:43:44.734981 | orchestrator | skipping: [testbed-node-5]
2026-02-08 03:43:44.734988 | orchestrator |
2026-02-08 03:43:44.734994 | orchestrator | TASK [Print 'Create WAL VGs'] **************************************************
2026-02-08 03:43:44.735021 | orchestrator | Sunday 08 February 2026 03:43:43 +0000 (0:00:00.167) 0:01:08.195 *******
2026-02-08 03:43:44.735039 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-7ad89cb8-326d-5a7d-8045-6e04c12be05a', 'data_vg': 'ceph-7ad89cb8-326d-5a7d-8045-6e04c12be05a'})
2026-02-08 03:43:44.735046 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-b3e05e81-e469-5668-9a53-5e8f92025307', 'data_vg': 'ceph-b3e05e81-e469-5668-9a53-5e8f92025307'})
2026-02-08 03:43:44.735053 | orchestrator | skipping: [testbed-node-5]
2026-02-08 03:43:44.735059 | orchestrator |
2026-02-08 03:43:44.735066 | orchestrator | TASK [Create DB+WAL VGs] *******************************************************
2026-02-08 03:43:44.735073 | orchestrator | Sunday 08 February 2026 03:43:44 +0000 (0:00:00.190) 0:01:08.386 *******
2026-02-08 03:43:44.735079 | orchestrator | skipping: [testbed-node-5]
2026-02-08 03:43:44.735086 | orchestrator |
2026-02-08 03:43:44.735092 | orchestrator | TASK [Print 'Create DB+WAL VGs'] ***********************************************
2026-02-08 03:43:44.735099 | orchestrator | Sunday 08 February 2026 03:43:44 +0000 (0:00:00.146) 0:01:08.533 *******
2026-02-08 03:43:44.735106 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-7ad89cb8-326d-5a7d-8045-6e04c12be05a', 'data_vg': 'ceph-7ad89cb8-326d-5a7d-8045-6e04c12be05a'})
2026-02-08 03:43:44.735112 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-b3e05e81-e469-5668-9a53-5e8f92025307', 'data_vg': 'ceph-b3e05e81-e469-5668-9a53-5e8f92025307'})
2026-02-08 03:43:44.735119 | orchestrator | skipping: [testbed-node-5]
2026-02-08 03:43:44.735126 | orchestrator |
2026-02-08 03:43:44.735132 | orchestrator | TASK [Prepare variables for OSD count check] ***********************************
2026-02-08 03:43:44.735139 | orchestrator | Sunday 08 February 2026 03:43:44 +0000 (0:00:00.171) 0:01:08.704 *******
2026-02-08 03:43:44.735146 | orchestrator | ok: [testbed-node-5]
2026-02-08 03:43:44.735153 | orchestrator |
2026-02-08 03:43:44.735160 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] ****************
2026-02-08 03:43:44.735166 | orchestrator | Sunday 08 February 2026 03:43:44 +0000 (0:00:00.160) 0:01:08.865 *******
2026-02-08 03:43:44.735181 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-7ad89cb8-326d-5a7d-8045-6e04c12be05a', 'data_vg': 'ceph-7ad89cb8-326d-5a7d-8045-6e04c12be05a'})
2026-02-08 03:43:51.486486 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-b3e05e81-e469-5668-9a53-5e8f92025307', 'data_vg': 'ceph-b3e05e81-e469-5668-9a53-5e8f92025307'})
2026-02-08 03:43:51.486590 | orchestrator | skipping: [testbed-node-5]
2026-02-08 03:43:51.486604 | orchestrator |
2026-02-08 03:43:51.486613 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] ***************
2026-02-08 03:43:51.486623 | orchestrator | Sunday 08 February 2026 03:43:44 +0000 (0:00:00.168) 0:01:09.034 *******
2026-02-08 03:43:51.486632 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-7ad89cb8-326d-5a7d-8045-6e04c12be05a', 'data_vg': 'ceph-7ad89cb8-326d-5a7d-8045-6e04c12be05a'})
2026-02-08 03:43:51.486655 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-b3e05e81-e469-5668-9a53-5e8f92025307', 'data_vg': 'ceph-b3e05e81-e469-5668-9a53-5e8f92025307'})
2026-02-08 03:43:51.486672 | orchestrator | skipping: [testbed-node-5]
2026-02-08 03:43:51.486682 | orchestrator |
2026-02-08 03:43:51.486691 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************
2026-02-08 03:43:51.486700 | orchestrator | Sunday 08 February 2026 03:43:44 +0000 (0:00:00.185) 0:01:09.219 *******
2026-02-08 03:43:51.486720 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-7ad89cb8-326d-5a7d-8045-6e04c12be05a', 'data_vg': 'ceph-7ad89cb8-326d-5a7d-8045-6e04c12be05a'})
2026-02-08 03:43:51.486730 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-b3e05e81-e469-5668-9a53-5e8f92025307', 'data_vg': 'ceph-b3e05e81-e469-5668-9a53-5e8f92025307'})
2026-02-08 03:43:51.486738 | orchestrator | skipping: [testbed-node-5]
2026-02-08 03:43:51.486747 | orchestrator |
2026-02-08 03:43:51.486762 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] *********************
2026-02-08 03:43:51.486777 | orchestrator | Sunday 08 February 2026 03:43:45 +0000 (0:00:00.188) 0:01:09.408 *******
2026-02-08 03:43:51.486818 | orchestrator | skipping: [testbed-node-5]
2026-02-08 03:43:51.486832 | orchestrator |
2026-02-08 03:43:51.486845 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ********************
2026-02-08 03:43:51.486858 | orchestrator | Sunday 08 February 2026 03:43:45 +0000 (0:00:00.146) 0:01:09.555 *******
2026-02-08 03:43:51.486870 | orchestrator | skipping: [testbed-node-5]
2026-02-08 03:43:51.486883 | orchestrator |
2026-02-08 03:43:51.486896 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] *****************
2026-02-08 03:43:51.486908 | orchestrator | Sunday 08 February 2026 03:43:45 +0000 (0:00:00.146) 0:01:09.701 *******
2026-02-08 03:43:51.486920 | orchestrator | skipping: [testbed-node-5]
2026-02-08 03:43:51.486934 | orchestrator |
2026-02-08 03:43:51.486950 | orchestrator | TASK [Print number of OSDs wanted per DB VG] ***********************************
2026-02-08 03:43:51.486963 | orchestrator | Sunday 08 February 2026 03:43:45 +0000 (0:00:00.412) 0:01:10.113 *******
2026-02-08 03:43:51.486975 | orchestrator | ok: [testbed-node-5] => {
2026-02-08 03:43:51.486989 | orchestrator |  "_num_osds_wanted_per_db_vg": {}
2026-02-08 03:43:51.487003 | orchestrator | }
2026-02-08 03:43:51.487104 | orchestrator |
2026-02-08 03:43:51.487122 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] **********************************
2026-02-08 03:43:51.487135 | orchestrator | Sunday 08 February 2026 03:43:45 +0000 (0:00:00.169) 0:01:10.283 *******
2026-02-08 03:43:51.487145 | orchestrator | ok: [testbed-node-5] => {
2026-02-08 03:43:51.487156 | orchestrator |  "_num_osds_wanted_per_wal_vg": {}
2026-02-08 03:43:51.487166 | orchestrator | }
2026-02-08 03:43:51.487174 | orchestrator |
2026-02-08 03:43:51.487183 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] *******************************
2026-02-08 03:43:51.487192 | orchestrator | Sunday 08 February 2026 03:43:46 +0000 (0:00:00.161) 0:01:10.444 *******
2026-02-08 03:43:51.487200 | orchestrator | ok: [testbed-node-5] => {
2026-02-08 03:43:51.487209 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {}
2026-02-08 03:43:51.487217 | orchestrator | }
2026-02-08 03:43:51.487226 | orchestrator |
2026-02-08 03:43:51.487234 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ********************
2026-02-08 03:43:51.487243 | orchestrator | Sunday 08 February 2026 03:43:46 +0000 (0:00:00.176) 0:01:10.620 *******
2026-02-08 03:43:51.487251 | orchestrator | ok: [testbed-node-5]
2026-02-08 03:43:51.487260 | orchestrator |
2026-02-08 03:43:51.487268 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] *******************
2026-02-08 03:43:51.487277 | orchestrator | Sunday 08 February 2026 03:43:46 +0000 (0:00:00.559) 0:01:11.180 *******
2026-02-08 03:43:51.487285 | orchestrator | ok: [testbed-node-5]
2026-02-08 03:43:51.487293 | orchestrator |
2026-02-08 03:43:51.487302 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] ****************
2026-02-08 03:43:51.487310 | orchestrator | Sunday 08 February 2026 03:43:47 +0000 (0:00:00.517) 0:01:11.697 *******
2026-02-08 03:43:51.487319 | orchestrator | ok: [testbed-node-5]
2026-02-08 03:43:51.487327 | orchestrator |
2026-02-08 03:43:51.487336 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] *************************
2026-02-08 03:43:51.487344 | orchestrator | Sunday 08 February 2026 03:43:47 +0000 (0:00:00.572) 0:01:12.269 *******
2026-02-08 03:43:51.487355 | orchestrator | ok: [testbed-node-5]
2026-02-08 03:43:51.487370 | orchestrator |
2026-02-08 03:43:51.487383 | orchestrator | TASK [Calculate VG sizes (without buffer)] *************************************
2026-02-08 03:43:51.487397 | orchestrator | Sunday 08 February 2026 03:43:48 +0000 (0:00:00.163) 0:01:12.433 *******
2026-02-08 03:43:51.487411 | orchestrator | skipping: [testbed-node-5]
2026-02-08 03:43:51.487426 | orchestrator |
2026-02-08 03:43:51.487440 | orchestrator | TASK [Calculate VG sizes (with buffer)] ****************************************
2026-02-08 03:43:51.487454 | orchestrator | Sunday 08 February 2026 03:43:48 +0000 (0:00:00.143) 0:01:12.576 *******
2026-02-08 03:43:51.487467 | orchestrator | skipping: [testbed-node-5]
2026-02-08 03:43:51.487480 | orchestrator |
2026-02-08 03:43:51.487493 | orchestrator | TASK [Print LVM VGs report data] ***********************************************
2026-02-08 03:43:51.487524 | orchestrator | Sunday 08 February 2026 03:43:48 +0000 (0:00:00.123) 0:01:12.699 *******
2026-02-08 03:43:51.487540 | orchestrator | ok: [testbed-node-5] => {
2026-02-08 03:43:51.487556 | orchestrator |  "vgs_report": {
2026-02-08 03:43:51.487566 | orchestrator |  "vg": []
2026-02-08 03:43:51.487596 | orchestrator |  }
2026-02-08 03:43:51.487612 | orchestrator | }
2026-02-08 03:43:51.487626 | orchestrator |
2026-02-08 03:43:51.487639 | orchestrator | TASK [Print LVM VG sizes] ******************************************************
2026-02-08 03:43:51.487653 | orchestrator | Sunday 08 February 2026 03:43:48 +0000 (0:00:00.152) 0:01:12.851 *******
2026-02-08 03:43:51.487669 | orchestrator | skipping: [testbed-node-5]
2026-02-08 03:43:51.487685 | orchestrator |
2026-02-08 03:43:51.487699 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************
2026-02-08 03:43:51.487712 | orchestrator | Sunday 08 February 2026 03:43:48 +0000 (0:00:00.164) 0:01:13.016 *******
2026-02-08 03:43:51.487721 | orchestrator | skipping: [testbed-node-5]
2026-02-08 03:43:51.487730 | orchestrator |
2026-02-08 03:43:51.487738 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] ****************************
2026-02-08 03:43:51.487747 | orchestrator | Sunday 08 February 2026 03:43:49 +0000 (0:00:00.377) 0:01:13.393 *******
2026-02-08 03:43:51.487755 | orchestrator | skipping: [testbed-node-5]
2026-02-08 03:43:51.487764 | orchestrator |
2026-02-08 03:43:51.487772 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] *******************
2026-02-08 03:43:51.487781 | orchestrator | Sunday 08 February 2026 03:43:49 +0000 (0:00:00.145) 0:01:13.539 *******
2026-02-08 03:43:51.487790 | orchestrator | skipping: [testbed-node-5]
2026-02-08 03:43:51.487798 | orchestrator |
2026-02-08 03:43:51.487807 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] ***********************
2026-02-08 03:43:51.487823 | orchestrator | Sunday 08 February 2026 03:43:49 +0000 (0:00:00.152) 0:01:13.691 *******
2026-02-08 03:43:51.487832 | orchestrator | skipping: [testbed-node-5]
2026-02-08 03:43:51.487841 | orchestrator |
2026-02-08 03:43:51.487849 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] ***************************
2026-02-08 03:43:51.487858 | orchestrator | Sunday 08 February 2026 03:43:49 +0000 (0:00:00.176) 0:01:13.868 *******
2026-02-08 03:43:51.487866 | orchestrator | skipping: [testbed-node-5]
2026-02-08 03:43:51.487875 | orchestrator |
2026-02-08 03:43:51.487883 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] *****************
2026-02-08 03:43:51.487892 | orchestrator | Sunday 08 February 2026 03:43:49 +0000 (0:00:00.149) 0:01:14.018 *******
2026-02-08 03:43:51.487900 | orchestrator | skipping: [testbed-node-5]
2026-02-08 03:43:51.487909 | orchestrator |
2026-02-08 03:43:51.487917 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] ****************
2026-02-08 03:43:51.487926 | orchestrator | Sunday 08 February 2026 03:43:49 +0000 (0:00:00.145) 0:01:14.164 *******
2026-02-08 03:43:51.487935 | orchestrator | skipping: [testbed-node-5]
2026-02-08 03:43:51.487943 | orchestrator |
2026-02-08 03:43:51.487952 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ********************
2026-02-08 03:43:51.487960 | orchestrator | Sunday 08 February 2026 03:43:49 +0000 (0:00:00.142) 0:01:14.307 *******
2026-02-08 03:43:51.487969 | orchestrator | skipping: [testbed-node-5]
2026-02-08 03:43:51.487978 | orchestrator |
2026-02-08 03:43:51.487986 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] *****************
2026-02-08
03:43:51.487995 | orchestrator | Sunday 08 February 2026 03:43:50 +0000 (0:00:00.132) 0:01:14.440 ******* 2026-02-08 03:43:51.488003 | orchestrator | skipping: [testbed-node-5] 2026-02-08 03:43:51.488042 | orchestrator | 2026-02-08 03:43:51.488056 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2026-02-08 03:43:51.488065 | orchestrator | Sunday 08 February 2026 03:43:50 +0000 (0:00:00.139) 0:01:14.579 ******* 2026-02-08 03:43:51.488073 | orchestrator | skipping: [testbed-node-5] 2026-02-08 03:43:51.488082 | orchestrator | 2026-02-08 03:43:51.488090 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2026-02-08 03:43:51.488099 | orchestrator | Sunday 08 February 2026 03:43:50 +0000 (0:00:00.139) 0:01:14.719 ******* 2026-02-08 03:43:51.488115 | orchestrator | skipping: [testbed-node-5] 2026-02-08 03:43:51.488123 | orchestrator | 2026-02-08 03:43:51.488132 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2026-02-08 03:43:51.488140 | orchestrator | Sunday 08 February 2026 03:43:50 +0000 (0:00:00.158) 0:01:14.878 ******* 2026-02-08 03:43:51.488149 | orchestrator | skipping: [testbed-node-5] 2026-02-08 03:43:51.488157 | orchestrator | 2026-02-08 03:43:51.488166 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2026-02-08 03:43:51.488174 | orchestrator | Sunday 08 February 2026 03:43:50 +0000 (0:00:00.303) 0:01:15.181 ******* 2026-02-08 03:43:51.488183 | orchestrator | skipping: [testbed-node-5] 2026-02-08 03:43:51.488191 | orchestrator | 2026-02-08 03:43:51.488200 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2026-02-08 03:43:51.488208 | orchestrator | Sunday 08 February 2026 03:43:51 +0000 (0:00:00.147) 0:01:15.329 ******* 2026-02-08 03:43:51.488217 | orchestrator | skipping: [testbed-node-5] => (item={'data': 
'osd-block-7ad89cb8-326d-5a7d-8045-6e04c12be05a', 'data_vg': 'ceph-7ad89cb8-326d-5a7d-8045-6e04c12be05a'})  2026-02-08 03:43:51.488227 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-b3e05e81-e469-5668-9a53-5e8f92025307', 'data_vg': 'ceph-b3e05e81-e469-5668-9a53-5e8f92025307'})  2026-02-08 03:43:51.488235 | orchestrator | skipping: [testbed-node-5] 2026-02-08 03:43:51.488243 | orchestrator | 2026-02-08 03:43:51.488252 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2026-02-08 03:43:51.488261 | orchestrator | Sunday 08 February 2026 03:43:51 +0000 (0:00:00.154) 0:01:15.483 ******* 2026-02-08 03:43:51.488269 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-7ad89cb8-326d-5a7d-8045-6e04c12be05a', 'data_vg': 'ceph-7ad89cb8-326d-5a7d-8045-6e04c12be05a'})  2026-02-08 03:43:51.488278 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-b3e05e81-e469-5668-9a53-5e8f92025307', 'data_vg': 'ceph-b3e05e81-e469-5668-9a53-5e8f92025307'})  2026-02-08 03:43:51.488286 | orchestrator | skipping: [testbed-node-5] 2026-02-08 03:43:51.488295 | orchestrator | 2026-02-08 03:43:51.488303 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2026-02-08 03:43:51.488312 | orchestrator | Sunday 08 February 2026 03:43:51 +0000 (0:00:00.165) 0:01:15.648 ******* 2026-02-08 03:43:51.488328 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-7ad89cb8-326d-5a7d-8045-6e04c12be05a', 'data_vg': 'ceph-7ad89cb8-326d-5a7d-8045-6e04c12be05a'})  2026-02-08 03:43:54.769678 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-b3e05e81-e469-5668-9a53-5e8f92025307', 'data_vg': 'ceph-b3e05e81-e469-5668-9a53-5e8f92025307'})  2026-02-08 03:43:54.769782 | orchestrator | skipping: [testbed-node-5] 2026-02-08 03:43:54.769799 | orchestrator | 2026-02-08 03:43:54.769813 | orchestrator | TASK [Print 'Create WAL LVs for 
ceph_wal_devices'] ***************************** 2026-02-08 03:43:54.769834 | orchestrator | Sunday 08 February 2026 03:43:51 +0000 (0:00:00.139) 0:01:15.787 ******* 2026-02-08 03:43:54.769853 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-7ad89cb8-326d-5a7d-8045-6e04c12be05a', 'data_vg': 'ceph-7ad89cb8-326d-5a7d-8045-6e04c12be05a'})  2026-02-08 03:43:54.769873 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-b3e05e81-e469-5668-9a53-5e8f92025307', 'data_vg': 'ceph-b3e05e81-e469-5668-9a53-5e8f92025307'})  2026-02-08 03:43:54.769892 | orchestrator | skipping: [testbed-node-5] 2026-02-08 03:43:54.769911 | orchestrator | 2026-02-08 03:43:54.769929 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2026-02-08 03:43:54.769960 | orchestrator | Sunday 08 February 2026 03:43:51 +0000 (0:00:00.173) 0:01:15.960 ******* 2026-02-08 03:43:54.769971 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-7ad89cb8-326d-5a7d-8045-6e04c12be05a', 'data_vg': 'ceph-7ad89cb8-326d-5a7d-8045-6e04c12be05a'})  2026-02-08 03:43:54.769982 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-b3e05e81-e469-5668-9a53-5e8f92025307', 'data_vg': 'ceph-b3e05e81-e469-5668-9a53-5e8f92025307'})  2026-02-08 03:43:54.770193 | orchestrator | skipping: [testbed-node-5] 2026-02-08 03:43:54.770221 | orchestrator | 2026-02-08 03:43:54.770239 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2026-02-08 03:43:54.770257 | orchestrator | Sunday 08 February 2026 03:43:51 +0000 (0:00:00.185) 0:01:16.146 ******* 2026-02-08 03:43:54.770275 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-7ad89cb8-326d-5a7d-8045-6e04c12be05a', 'data_vg': 'ceph-7ad89cb8-326d-5a7d-8045-6e04c12be05a'})  2026-02-08 03:43:54.770294 | orchestrator | skipping: [testbed-node-5] => (item={'data': 
'osd-block-b3e05e81-e469-5668-9a53-5e8f92025307', 'data_vg': 'ceph-b3e05e81-e469-5668-9a53-5e8f92025307'})  2026-02-08 03:43:54.770312 | orchestrator | skipping: [testbed-node-5] 2026-02-08 03:43:54.770332 | orchestrator | 2026-02-08 03:43:54.770351 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] *********************************** 2026-02-08 03:43:54.770370 | orchestrator | Sunday 08 February 2026 03:43:51 +0000 (0:00:00.160) 0:01:16.306 ******* 2026-02-08 03:43:54.770389 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-7ad89cb8-326d-5a7d-8045-6e04c12be05a', 'data_vg': 'ceph-7ad89cb8-326d-5a7d-8045-6e04c12be05a'})  2026-02-08 03:43:54.770403 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-b3e05e81-e469-5668-9a53-5e8f92025307', 'data_vg': 'ceph-b3e05e81-e469-5668-9a53-5e8f92025307'})  2026-02-08 03:43:54.770413 | orchestrator | skipping: [testbed-node-5] 2026-02-08 03:43:54.770424 | orchestrator | 2026-02-08 03:43:54.770435 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2026-02-08 03:43:54.770446 | orchestrator | Sunday 08 February 2026 03:43:52 +0000 (0:00:00.158) 0:01:16.465 ******* 2026-02-08 03:43:54.770457 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-7ad89cb8-326d-5a7d-8045-6e04c12be05a', 'data_vg': 'ceph-7ad89cb8-326d-5a7d-8045-6e04c12be05a'})  2026-02-08 03:43:54.770467 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-b3e05e81-e469-5668-9a53-5e8f92025307', 'data_vg': 'ceph-b3e05e81-e469-5668-9a53-5e8f92025307'})  2026-02-08 03:43:54.770478 | orchestrator | skipping: [testbed-node-5] 2026-02-08 03:43:54.770489 | orchestrator | 2026-02-08 03:43:54.770499 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2026-02-08 03:43:54.770510 | orchestrator | Sunday 08 February 2026 03:43:52 +0000 (0:00:00.167) 0:01:16.633 ******* 2026-02-08 03:43:54.770521 | 
orchestrator | ok: [testbed-node-5] 2026-02-08 03:43:54.770533 | orchestrator | 2026-02-08 03:43:54.770544 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2026-02-08 03:43:54.770554 | orchestrator | Sunday 08 February 2026 03:43:52 +0000 (0:00:00.512) 0:01:17.145 ******* 2026-02-08 03:43:54.770565 | orchestrator | ok: [testbed-node-5] 2026-02-08 03:43:54.770575 | orchestrator | 2026-02-08 03:43:54.770586 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2026-02-08 03:43:54.770597 | orchestrator | Sunday 08 February 2026 03:43:53 +0000 (0:00:00.770) 0:01:17.916 ******* 2026-02-08 03:43:54.770607 | orchestrator | ok: [testbed-node-5] 2026-02-08 03:43:54.770618 | orchestrator | 2026-02-08 03:43:54.770629 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2026-02-08 03:43:54.770640 | orchestrator | Sunday 08 February 2026 03:43:53 +0000 (0:00:00.173) 0:01:18.090 ******* 2026-02-08 03:43:54.770651 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-7ad89cb8-326d-5a7d-8045-6e04c12be05a', 'vg_name': 'ceph-7ad89cb8-326d-5a7d-8045-6e04c12be05a'}) 2026-02-08 03:43:54.770663 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-b3e05e81-e469-5668-9a53-5e8f92025307', 'vg_name': 'ceph-b3e05e81-e469-5668-9a53-5e8f92025307'}) 2026-02-08 03:43:54.770674 | orchestrator | 2026-02-08 03:43:54.770685 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2026-02-08 03:43:54.770696 | orchestrator | Sunday 08 February 2026 03:43:53 +0000 (0:00:00.196) 0:01:18.286 ******* 2026-02-08 03:43:54.770728 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-7ad89cb8-326d-5a7d-8045-6e04c12be05a', 'data_vg': 'ceph-7ad89cb8-326d-5a7d-8045-6e04c12be05a'})  2026-02-08 03:43:54.770752 | orchestrator | skipping: [testbed-node-5] => (item={'data': 
'osd-block-b3e05e81-e469-5668-9a53-5e8f92025307', 'data_vg': 'ceph-b3e05e81-e469-5668-9a53-5e8f92025307'})  2026-02-08 03:43:54.770763 | orchestrator | skipping: [testbed-node-5] 2026-02-08 03:43:54.770774 | orchestrator | 2026-02-08 03:43:54.770785 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2026-02-08 03:43:54.770796 | orchestrator | Sunday 08 February 2026 03:43:54 +0000 (0:00:00.211) 0:01:18.498 ******* 2026-02-08 03:43:54.770806 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-7ad89cb8-326d-5a7d-8045-6e04c12be05a', 'data_vg': 'ceph-7ad89cb8-326d-5a7d-8045-6e04c12be05a'})  2026-02-08 03:43:54.770826 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-b3e05e81-e469-5668-9a53-5e8f92025307', 'data_vg': 'ceph-b3e05e81-e469-5668-9a53-5e8f92025307'})  2026-02-08 03:43:54.770837 | orchestrator | skipping: [testbed-node-5] 2026-02-08 03:43:54.770848 | orchestrator | 2026-02-08 03:43:54.770858 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2026-02-08 03:43:54.770869 | orchestrator | Sunday 08 February 2026 03:43:54 +0000 (0:00:00.186) 0:01:18.685 ******* 2026-02-08 03:43:54.770880 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-7ad89cb8-326d-5a7d-8045-6e04c12be05a', 'data_vg': 'ceph-7ad89cb8-326d-5a7d-8045-6e04c12be05a'})  2026-02-08 03:43:54.770891 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-b3e05e81-e469-5668-9a53-5e8f92025307', 'data_vg': 'ceph-b3e05e81-e469-5668-9a53-5e8f92025307'})  2026-02-08 03:43:54.770902 | orchestrator | skipping: [testbed-node-5] 2026-02-08 03:43:54.770912 | orchestrator | 2026-02-08 03:43:54.770923 | orchestrator | TASK [Print LVM report data] *************************************************** 2026-02-08 03:43:54.770934 | orchestrator | Sunday 08 February 2026 03:43:54 +0000 (0:00:00.191) 0:01:18.876 ******* 2026-02-08 03:43:54.770944 | 
orchestrator | ok: [testbed-node-5] => { 2026-02-08 03:43:54.770955 | orchestrator |  "lvm_report": { 2026-02-08 03:43:54.770966 | orchestrator |  "lv": [ 2026-02-08 03:43:54.770976 | orchestrator |  { 2026-02-08 03:43:54.770987 | orchestrator |  "lv_name": "osd-block-7ad89cb8-326d-5a7d-8045-6e04c12be05a", 2026-02-08 03:43:54.770999 | orchestrator |  "vg_name": "ceph-7ad89cb8-326d-5a7d-8045-6e04c12be05a" 2026-02-08 03:43:54.771054 | orchestrator |  }, 2026-02-08 03:43:54.771075 | orchestrator |  { 2026-02-08 03:43:54.771094 | orchestrator |  "lv_name": "osd-block-b3e05e81-e469-5668-9a53-5e8f92025307", 2026-02-08 03:43:54.771113 | orchestrator |  "vg_name": "ceph-b3e05e81-e469-5668-9a53-5e8f92025307" 2026-02-08 03:43:54.771127 | orchestrator |  } 2026-02-08 03:43:54.771138 | orchestrator |  ], 2026-02-08 03:43:54.771149 | orchestrator |  "pv": [ 2026-02-08 03:43:54.771159 | orchestrator |  { 2026-02-08 03:43:54.771170 | orchestrator |  "pv_name": "/dev/sdb", 2026-02-08 03:43:54.771181 | orchestrator |  "vg_name": "ceph-7ad89cb8-326d-5a7d-8045-6e04c12be05a" 2026-02-08 03:43:54.771192 | orchestrator |  }, 2026-02-08 03:43:54.771202 | orchestrator |  { 2026-02-08 03:43:54.771213 | orchestrator |  "pv_name": "/dev/sdc", 2026-02-08 03:43:54.771224 | orchestrator |  "vg_name": "ceph-b3e05e81-e469-5668-9a53-5e8f92025307" 2026-02-08 03:43:54.771235 | orchestrator |  } 2026-02-08 03:43:54.771245 | orchestrator |  ] 2026-02-08 03:43:54.771256 | orchestrator |  } 2026-02-08 03:43:54.771267 | orchestrator | } 2026-02-08 03:43:54.771278 | orchestrator | 2026-02-08 03:43:54.771288 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-08 03:43:54.771299 | orchestrator | testbed-node-3 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2026-02-08 03:43:54.771323 | orchestrator | testbed-node-4 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2026-02-08 03:43:54.771334 | 
orchestrator | testbed-node-5 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2026-02-08 03:43:54.771345 | orchestrator | 2026-02-08 03:43:54.771356 | orchestrator | 2026-02-08 03:43:54.771366 | orchestrator | 2026-02-08 03:43:54.771377 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-08 03:43:54.771388 | orchestrator | Sunday 08 February 2026 03:43:54 +0000 (0:00:00.167) 0:01:19.044 ******* 2026-02-08 03:43:54.771399 | orchestrator | =============================================================================== 2026-02-08 03:43:54.771410 | orchestrator | Create block VGs -------------------------------------------------------- 5.56s 2026-02-08 03:43:54.771420 | orchestrator | Create block LVs -------------------------------------------------------- 4.19s 2026-02-08 03:43:54.771431 | orchestrator | Add known partitions to the list of available block devices ------------- 2.12s 2026-02-08 03:43:54.771442 | orchestrator | Get list of Ceph PVs with associated VGs -------------------------------- 1.85s 2026-02-08 03:43:54.771452 | orchestrator | Gather DB VGs with total and available size in bytes -------------------- 1.78s 2026-02-08 03:43:54.771463 | orchestrator | Gather DB+WAL VGs with total and available size in bytes ---------------- 1.62s 2026-02-08 03:43:54.771474 | orchestrator | Get list of Ceph LVs with associated VGs -------------------------------- 1.61s 2026-02-08 03:43:54.771484 | orchestrator | Gather WAL VGs with total and available size in bytes ------------------- 1.55s 2026-02-08 03:43:54.771504 | orchestrator | Add known links to the list of available block devices ------------------ 1.40s 2026-02-08 03:43:55.234651 | orchestrator | Add known partitions to the list of available block devices ------------- 1.00s 2026-02-08 03:43:55.234747 | orchestrator | Fail if number of OSDs exceeds num_osds for a DB+WAL VG ----------------- 0.94s 2026-02-08 03:43:55.234756 | 
orchestrator | Add known partitions to the list of available block devices ------------- 0.92s 2026-02-08 03:43:55.234763 | orchestrator | Add known links to the list of available block devices ------------------ 0.91s 2026-02-08 03:43:55.234770 | orchestrator | Calculate size needed for LVs on ceph_db_devices ------------------------ 0.91s 2026-02-08 03:43:55.234776 | orchestrator | Print LVM report data --------------------------------------------------- 0.84s 2026-02-08 03:43:55.234783 | orchestrator | Fail if block LV defined in lvm_volumes is missing ---------------------- 0.83s 2026-02-08 03:43:55.234789 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.80s 2026-02-08 03:43:55.234796 | orchestrator | Add known partitions to the list of available block devices ------------- 0.80s 2026-02-08 03:43:55.234802 | orchestrator | Print 'Create WAL VGs' -------------------------------------------------- 0.77s 2026-02-08 03:43:55.234827 | orchestrator | Add known links to the list of available block devices ------------------ 0.76s 2026-02-08 03:44:07.850768 | orchestrator | 2026-02-08 03:44:07 | INFO  | Task f5f49d28-344d-49ea-b5b9-e047a6951ece (facts) was prepared for execution. 2026-02-08 03:44:07.850907 | orchestrator | 2026-02-08 03:44:07 | INFO  | It takes a moment until task f5f49d28-344d-49ea-b5b9-e047a6951ece (facts) has been started and output is visible here. 
2026-02-08 03:44:21.701327 | orchestrator |
2026-02-08 03:44:21.701433 | orchestrator | PLAY [Apply role facts] ********************************************************
2026-02-08 03:44:21.701447 | orchestrator |
2026-02-08 03:44:21.701455 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] *********************
2026-02-08 03:44:21.701462 | orchestrator | Sunday 08 February 2026 03:44:12 +0000 (0:00:00.291) 0:00:00.291 *******
2026-02-08 03:44:21.701469 | orchestrator | ok: [testbed-manager]
2026-02-08 03:44:21.701477 | orchestrator | ok: [testbed-node-1]
2026-02-08 03:44:21.701484 | orchestrator | ok: [testbed-node-0]
2026-02-08 03:44:21.701490 | orchestrator | ok: [testbed-node-2]
2026-02-08 03:44:21.701497 | orchestrator | ok: [testbed-node-3]
2026-02-08 03:44:21.701504 | orchestrator | ok: [testbed-node-4]
2026-02-08 03:44:21.701534 | orchestrator | ok: [testbed-node-5]
2026-02-08 03:44:21.701541 | orchestrator |
2026-02-08 03:44:21.701547 | orchestrator | TASK [osism.commons.facts : Copy fact files] ***********************************
2026-02-08 03:44:21.701554 | orchestrator | Sunday 08 February 2026 03:44:13 +0000 (0:00:01.242) 0:00:01.534 *******
2026-02-08 03:44:21.701561 | orchestrator | skipping: [testbed-manager]
2026-02-08 03:44:21.701569 | orchestrator | skipping: [testbed-node-0]
2026-02-08 03:44:21.701575 | orchestrator | skipping: [testbed-node-1]
2026-02-08 03:44:21.701582 | orchestrator | skipping: [testbed-node-2]
2026-02-08 03:44:21.701588 | orchestrator | skipping: [testbed-node-3]
2026-02-08 03:44:21.701595 | orchestrator | skipping: [testbed-node-4]
2026-02-08 03:44:21.701603 | orchestrator | skipping: [testbed-node-5]
2026-02-08 03:44:21.701609 | orchestrator |
2026-02-08 03:44:21.701615 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2026-02-08 03:44:21.701622 | orchestrator |
2026-02-08 03:44:21.701628 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-02-08 03:44:21.701634 | orchestrator | Sunday 08 February 2026 03:44:14 +0000 (0:00:01.440) 0:00:02.974 *******
2026-02-08 03:44:21.701641 | orchestrator | ok: [testbed-node-1]
2026-02-08 03:44:21.701647 | orchestrator | ok: [testbed-node-2]
2026-02-08 03:44:21.701654 | orchestrator | ok: [testbed-manager]
2026-02-08 03:44:21.701660 | orchestrator | ok: [testbed-node-0]
2026-02-08 03:44:21.701667 | orchestrator | ok: [testbed-node-3]
2026-02-08 03:44:21.701674 | orchestrator | ok: [testbed-node-4]
2026-02-08 03:44:21.701680 | orchestrator | ok: [testbed-node-5]
2026-02-08 03:44:21.701687 | orchestrator |
2026-02-08 03:44:21.701694 | orchestrator | PLAY [Gather facts for all hosts if using --limit] *****************************
2026-02-08 03:44:21.701700 | orchestrator |
2026-02-08 03:44:21.701706 | orchestrator | TASK [Gather facts for all hosts] **********************************************
2026-02-08 03:44:21.701712 | orchestrator | Sunday 08 February 2026 03:44:20 +0000 (0:00:05.638) 0:00:08.613 *******
2026-02-08 03:44:21.701719 | orchestrator | skipping: [testbed-manager]
2026-02-08 03:44:21.701725 | orchestrator | skipping: [testbed-node-0]
2026-02-08 03:44:21.701731 | orchestrator | skipping: [testbed-node-1]
2026-02-08 03:44:21.701738 | orchestrator | skipping: [testbed-node-2]
2026-02-08 03:44:21.701745 | orchestrator | skipping: [testbed-node-3]
2026-02-08 03:44:21.701751 | orchestrator | skipping: [testbed-node-4]
2026-02-08 03:44:21.701757 | orchestrator | skipping: [testbed-node-5]
2026-02-08 03:44:21.701764 | orchestrator |
2026-02-08 03:44:21.701770 | orchestrator | PLAY RECAP *********************************************************************
2026-02-08 03:44:21.701777 | orchestrator | testbed-manager : ok=2 changed=0 unreachable=0 failed=0 skipped=2 rescued=0 ignored=0
2026-02-08 03:44:21.701786 | orchestrator | testbed-node-0 : ok=2 changed=0 unreachable=0 failed=0 skipped=2 rescued=0 ignored=0
2026-02-08 03:44:21.701792 | orchestrator | testbed-node-1 : ok=2 changed=0 unreachable=0 failed=0 skipped=2 rescued=0 ignored=0
2026-02-08 03:44:21.701799 | orchestrator | testbed-node-2 : ok=2 changed=0 unreachable=0 failed=0 skipped=2 rescued=0 ignored=0
2026-02-08 03:44:21.701805 | orchestrator | testbed-node-3 : ok=2 changed=0 unreachable=0 failed=0 skipped=2 rescued=0 ignored=0
2026-02-08 03:44:21.701812 | orchestrator | testbed-node-4 : ok=2 changed=0 unreachable=0 failed=0 skipped=2 rescued=0 ignored=0
2026-02-08 03:44:21.701818 | orchestrator | testbed-node-5 : ok=2 changed=0 unreachable=0 failed=0 skipped=2 rescued=0 ignored=0
2026-02-08 03:44:21.701825 | orchestrator |
2026-02-08 03:44:21.701831 | orchestrator |
2026-02-08 03:44:21.701837 | orchestrator | TASKS RECAP ********************************************************************
2026-02-08 03:44:21.701844 | orchestrator | Sunday 08 February 2026 03:44:21 +0000 (0:00:00.616) 0:00:09.230 *******
2026-02-08 03:44:21.701860 | orchestrator | ===============================================================================
2026-02-08 03:44:21.701867 | orchestrator | Gathers facts about hosts ----------------------------------------------- 5.64s
2026-02-08 03:44:21.701874 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.44s
2026-02-08 03:44:21.701881 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.24s
2026-02-08 03:44:21.701887 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.62s
2026-02-08 03:44:24.185404 | orchestrator | 2026-02-08 03:44:24 | INFO  | Task 394520ae-e9ba-4ac9-95c1-923459c88443 (ceph) was prepared for execution.
2026-02-08 03:44:24.185503 | orchestrator | 2026-02-08 03:44:24 | INFO  | It takes a moment until task 394520ae-e9ba-4ac9-95c1-923459c88443 (ceph) has been started and output is visible here.
2026-02-08 03:44:43.818433 | orchestrator | [WARNING]: Collection community.general does not support Ansible version
2026-02-08 03:44:43.818526 | orchestrator | 2.16.14
2026-02-08 03:44:43.818548 | orchestrator |
2026-02-08 03:44:43.818563 | orchestrator | PLAY [Prepare deployment of Ceph services] *************************************
2026-02-08 03:44:43.818578 | orchestrator |
2026-02-08 03:44:43.818592 | orchestrator | TASK [ceph-facts : Include facts.yml] ******************************************
2026-02-08 03:44:43.818606 | orchestrator | Sunday 08 February 2026 03:44:29 +0000 (0:00:00.876) 0:00:00.876 *******
2026-02-08 03:44:43.818620 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-08 03:44:43.818634 | orchestrator |
2026-02-08 03:44:43.818647 | orchestrator | TASK [ceph-facts : Check if it is atomic host] *********************************
2026-02-08 03:44:43.818660 | orchestrator | Sunday 08 February 2026 03:44:31 +0000 (0:00:01.300) 0:00:02.177 *******
2026-02-08 03:44:43.818673 | orchestrator | ok: [testbed-node-4]
2026-02-08 03:44:43.818687 | orchestrator | ok: [testbed-node-3]
2026-02-08 03:44:43.818701 | orchestrator | ok: [testbed-node-5]
2026-02-08 03:44:43.818714 | orchestrator | ok: [testbed-node-0]
2026-02-08 03:44:43.818727 | orchestrator | ok: [testbed-node-1]
2026-02-08 03:44:43.818742 | orchestrator | ok: [testbed-node-2]
2026-02-08 03:44:43.818755 | orchestrator |
2026-02-08 03:44:43.818769 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] *****************************************
2026-02-08 03:44:43.818782 | orchestrator | Sunday 08 February 2026 03:44:32 +0000 (0:00:01.443) 0:00:03.620 *******
2026-02-08 03:44:43.818795 | orchestrator | ok: [testbed-node-3]
2026-02-08 03:44:43.818809 | orchestrator | ok: [testbed-node-4]
2026-02-08 03:44:43.818823 | orchestrator | ok: [testbed-node-5]
2026-02-08 03:44:43.818836 | orchestrator | ok: [testbed-node-0]
2026-02-08 03:44:43.818849 | orchestrator | ok: [testbed-node-1]
2026-02-08 03:44:43.818863 | orchestrator | ok: [testbed-node-2]
2026-02-08 03:44:43.818876 | orchestrator |
2026-02-08 03:44:43.818890 | orchestrator | TASK [ceph-facts : Check if podman binary is present] **************************
2026-02-08 03:44:43.818903 | orchestrator | Sunday 08 February 2026 03:44:33 +0000 (0:00:01.036) 0:00:04.657 *******
2026-02-08 03:44:43.818917 | orchestrator | ok: [testbed-node-3]
2026-02-08 03:44:43.818932 | orchestrator | ok: [testbed-node-4]
2026-02-08 03:44:43.818946 | orchestrator | ok: [testbed-node-5]
2026-02-08 03:44:43.818959 | orchestrator | ok: [testbed-node-0]
2026-02-08 03:44:43.818972 | orchestrator | ok: [testbed-node-1]
2026-02-08 03:44:43.818986 | orchestrator | ok: [testbed-node-2]
2026-02-08 03:44:43.819001 | orchestrator |
2026-02-08 03:44:43.819017 | orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
2026-02-08 03:44:43.819031 | orchestrator | Sunday 08 February 2026 03:44:34 +0000 (0:00:00.976) 0:00:05.633 *******
2026-02-08 03:44:43.819041 | orchestrator | ok: [testbed-node-3]
2026-02-08 03:44:43.819071 | orchestrator | ok: [testbed-node-4]
2026-02-08 03:44:43.819086 | orchestrator | ok: [testbed-node-5]
2026-02-08 03:44:43.819099 | orchestrator | ok: [testbed-node-0]
2026-02-08 03:44:43.819111 | orchestrator | ok: [testbed-node-1]
2026-02-08 03:44:43.819149 | orchestrator | ok: [testbed-node-2]
2026-02-08 03:44:43.819162 | orchestrator |
2026-02-08 03:44:43.819177 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ******************************************
2026-02-08 03:44:43.819191 | orchestrator | Sunday 08 February 2026 03:44:35 +0000 (0:00:00.870) 0:00:06.503 *******
2026-02-08 03:44:43.819203 | orchestrator | ok: [testbed-node-3]
2026-02-08 03:44:43.819213 | orchestrator | ok: [testbed-node-4]
2026-02-08 03:44:43.819222 | orchestrator | ok: [testbed-node-5]
2026-02-08 03:44:43.819230 | orchestrator | ok: [testbed-node-0]
2026-02-08 03:44:43.819238 | orchestrator | ok: [testbed-node-1]
2026-02-08 03:44:43.819246 | orchestrator | ok: [testbed-node-2]
2026-02-08 03:44:43.819253 | orchestrator |
2026-02-08 03:44:43.819261 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] *********************
2026-02-08 03:44:43.819269 | orchestrator | Sunday 08 February 2026 03:44:36 +0000 (0:00:00.608) 0:00:07.112 *******
2026-02-08 03:44:43.819277 | orchestrator | ok: [testbed-node-3]
2026-02-08 03:44:43.819285 | orchestrator | ok: [testbed-node-4]
2026-02-08 03:44:43.819293 | orchestrator | ok: [testbed-node-5]
2026-02-08 03:44:43.819300 | orchestrator | ok: [testbed-node-0]
2026-02-08 03:44:43.819308 | orchestrator | ok: [testbed-node-1]
2026-02-08 03:44:43.819316 | orchestrator | ok: [testbed-node-2]
2026-02-08 03:44:43.819323 | orchestrator |
2026-02-08 03:44:43.819331 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] ***
2026-02-08 03:44:43.819340 | orchestrator | Sunday 08 February 2026 03:44:36 +0000 (0:00:00.861) 0:00:07.974 *******
2026-02-08 03:44:43.819355 | orchestrator | skipping: [testbed-node-3]
2026-02-08 03:44:43.819369 | orchestrator | skipping: [testbed-node-4]
2026-02-08 03:44:43.819382 | orchestrator | skipping: [testbed-node-5]
2026-02-08 03:44:43.819396 | orchestrator | skipping: [testbed-node-0]
2026-02-08 03:44:43.819411 | orchestrator | skipping: [testbed-node-1]
2026-02-08 03:44:43.819424 | orchestrator | skipping: [testbed-node-2]
2026-02-08 03:44:43.819437 | orchestrator |
2026-02-08 03:44:43.819445 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ******************
2026-02-08 03:44:43.819452 | orchestrator | Sunday 08 February 2026 03:44:37 +0000 (0:00:00.645) 0:00:08.619 *******
2026-02-08 03:44:43.819458 | orchestrator | ok: [testbed-node-3]
2026-02-08 03:44:43.819465 | orchestrator | ok: [testbed-node-4]
2026-02-08 03:44:43.819471 | orchestrator | ok: [testbed-node-5]
2026-02-08 03:44:43.819478 | orchestrator | ok: [testbed-node-0]
2026-02-08 03:44:43.819484 | orchestrator | ok: [testbed-node-1]
2026-02-08 03:44:43.819491 | orchestrator | ok: [testbed-node-2]
2026-02-08 03:44:43.819497 | orchestrator |
2026-02-08 03:44:43.819504 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************
2026-02-08 03:44:43.819511 | orchestrator | Sunday 08 February 2026 03:44:38 +0000 (0:00:00.857) 0:00:09.477 *******
2026-02-08 03:44:43.819517 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-02-08 03:44:43.819524 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-08 03:44:43.819530 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-08 03:44:43.819537 | orchestrator |
2026-02-08 03:44:43.819543 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ********************************
2026-02-08 03:44:43.819560 | orchestrator | Sunday 08 February 2026 03:44:39 +0000 (0:00:00.715) 0:00:10.193 *******
2026-02-08 03:44:43.819567 | orchestrator | ok: [testbed-node-3]
2026-02-08 03:44:43.819573 | orchestrator | ok: [testbed-node-4]
2026-02-08 03:44:43.819580 | orchestrator | ok: [testbed-node-5]
2026-02-08 03:44:43.819599 | orchestrator | ok: [testbed-node-0]
2026-02-08 03:44:43.819608 | orchestrator | ok: [testbed-node-1]
2026-02-08 03:44:43.819619 | orchestrator | ok: [testbed-node-2]
2026-02-08 03:44:43.819630 | orchestrator |
2026-02-08 03:44:43.819641 | orchestrator | TASK [ceph-facts : Find a running mon container] *******************************
2026-02-08 03:44:43.819652 | orchestrator | Sunday 08 February 2026 03:44:39 +0000 (0:00:00.778) 0:00:10.972 *******
2026-02-08 03:44:43.819663 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] =>
(item=testbed-node-0) 2026-02-08 03:44:43.819682 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-08 03:44:43.819693 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-08 03:44:43.819705 | orchestrator | 2026-02-08 03:44:43.819717 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-02-08 03:44:43.819729 | orchestrator | Sunday 08 February 2026 03:44:42 +0000 (0:00:02.473) 0:00:13.446 ******* 2026-02-08 03:44:43.819740 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2026-02-08 03:44:43.819749 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2026-02-08 03:44:43.819756 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2026-02-08 03:44:43.819763 | orchestrator | skipping: [testbed-node-3] 2026-02-08 03:44:43.819770 | orchestrator | 2026-02-08 03:44:43.819776 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-02-08 03:44:43.819783 | orchestrator | Sunday 08 February 2026 03:44:42 +0000 (0:00:00.413) 0:00:13.859 ******* 2026-02-08 03:44:43.819790 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-02-08 03:44:43.819799 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-02-08 03:44:43.819806 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 
'testbed-node-2', 'ansible_loop_var': 'item'})  2026-02-08 03:44:43.819812 | orchestrator | skipping: [testbed-node-3] 2026-02-08 03:44:43.819819 | orchestrator | 2026-02-08 03:44:43.819826 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-02-08 03:44:43.819832 | orchestrator | Sunday 08 February 2026 03:44:43 +0000 (0:00:00.563) 0:00:14.423 ******* 2026-02-08 03:44:43.819840 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-08 03:44:43.819849 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-08 03:44:43.819856 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-08 03:44:43.819863 | orchestrator | skipping: [testbed-node-3] 2026-02-08 03:44:43.819869 | orchestrator | 2026-02-08 03:44:43.819876 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] 
*************************** 2026-02-08 03:44:43.819883 | orchestrator | Sunday 08 February 2026 03:44:43 +0000 (0:00:00.171) 0:00:14.595 ******* 2026-02-08 03:44:43.819903 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-02-08 03:44:40.917124', 'end': '2026-02-08 03:44:40.960683', 'delta': '0:00:00.043559', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-02-08 03:44:52.723979 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-02-08 03:44:41.475088', 'end': '2026-02-08 03:44:41.519474', 'delta': '0:00:00.044386', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-02-08 03:44:52.724142 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-02-08 03:44:42.022700', 'end': '2026-02-08 03:44:42.073836', 'delta': 
'0:00:00.051136', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-02-08 03:44:52.724174 | orchestrator | skipping: [testbed-node-3] 2026-02-08 03:44:52.724197 | orchestrator | 2026-02-08 03:44:52.724254 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-02-08 03:44:52.724279 | orchestrator | Sunday 08 February 2026 03:44:43 +0000 (0:00:00.210) 0:00:14.805 ******* 2026-02-08 03:44:52.724298 | orchestrator | ok: [testbed-node-3] 2026-02-08 03:44:52.724318 | orchestrator | ok: [testbed-node-4] 2026-02-08 03:44:52.724336 | orchestrator | ok: [testbed-node-5] 2026-02-08 03:44:52.724355 | orchestrator | ok: [testbed-node-0] 2026-02-08 03:44:52.724374 | orchestrator | ok: [testbed-node-1] 2026-02-08 03:44:52.724393 | orchestrator | ok: [testbed-node-2] 2026-02-08 03:44:52.724414 | orchestrator | 2026-02-08 03:44:52.724434 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-02-08 03:44:52.724453 | orchestrator | Sunday 08 February 2026 03:44:44 +0000 (0:00:00.721) 0:00:15.527 ******* 2026-02-08 03:44:52.724473 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-02-08 03:44:52.724492 | orchestrator | 2026-02-08 03:44:52.724511 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2026-02-08 03:44:52.724529 | orchestrator | Sunday 08 February 2026 03:44:45 +0000 (0:00:00.775) 0:00:16.302 ******* 2026-02-08 03:44:52.724548 | orchestrator | skipping: [testbed-node-3] 2026-02-08 03:44:52.724567 | 
orchestrator | skipping: [testbed-node-4] 2026-02-08 03:44:52.724587 | orchestrator | skipping: [testbed-node-5] 2026-02-08 03:44:52.724606 | orchestrator | skipping: [testbed-node-0] 2026-02-08 03:44:52.724625 | orchestrator | skipping: [testbed-node-1] 2026-02-08 03:44:52.724645 | orchestrator | skipping: [testbed-node-2] 2026-02-08 03:44:52.724665 | orchestrator | 2026-02-08 03:44:52.724683 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2026-02-08 03:44:52.724703 | orchestrator | Sunday 08 February 2026 03:44:46 +0000 (0:00:00.698) 0:00:17.000 ******* 2026-02-08 03:44:52.724758 | orchestrator | skipping: [testbed-node-3] 2026-02-08 03:44:52.724778 | orchestrator | skipping: [testbed-node-4] 2026-02-08 03:44:52.724797 | orchestrator | skipping: [testbed-node-5] 2026-02-08 03:44:52.724811 | orchestrator | skipping: [testbed-node-0] 2026-02-08 03:44:52.724822 | orchestrator | skipping: [testbed-node-1] 2026-02-08 03:44:52.724833 | orchestrator | skipping: [testbed-node-2] 2026-02-08 03:44:52.724844 | orchestrator | 2026-02-08 03:44:52.724855 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-02-08 03:44:52.724865 | orchestrator | Sunday 08 February 2026 03:44:46 +0000 (0:00:00.962) 0:00:17.962 ******* 2026-02-08 03:44:52.724876 | orchestrator | skipping: [testbed-node-3] 2026-02-08 03:44:52.724887 | orchestrator | skipping: [testbed-node-4] 2026-02-08 03:44:52.724897 | orchestrator | skipping: [testbed-node-5] 2026-02-08 03:44:52.724908 | orchestrator | skipping: [testbed-node-0] 2026-02-08 03:44:52.724926 | orchestrator | skipping: [testbed-node-1] 2026-02-08 03:44:52.724944 | orchestrator | skipping: [testbed-node-2] 2026-02-08 03:44:52.724962 | orchestrator | 2026-02-08 03:44:52.724980 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-02-08 03:44:52.724999 | orchestrator | Sunday 08 February 2026 03:44:47 
+0000 (0:00:00.693) 0:00:18.656 ******* 2026-02-08 03:44:52.725018 | orchestrator | skipping: [testbed-node-3] 2026-02-08 03:44:52.725037 | orchestrator | 2026-02-08 03:44:52.725055 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-02-08 03:44:52.725107 | orchestrator | Sunday 08 February 2026 03:44:47 +0000 (0:00:00.119) 0:00:18.775 ******* 2026-02-08 03:44:52.725118 | orchestrator | skipping: [testbed-node-3] 2026-02-08 03:44:52.725129 | orchestrator | 2026-02-08 03:44:52.725140 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-02-08 03:44:52.725172 | orchestrator | Sunday 08 February 2026 03:44:47 +0000 (0:00:00.220) 0:00:18.995 ******* 2026-02-08 03:44:52.725189 | orchestrator | skipping: [testbed-node-3] 2026-02-08 03:44:52.725200 | orchestrator | skipping: [testbed-node-4] 2026-02-08 03:44:52.725211 | orchestrator | skipping: [testbed-node-5] 2026-02-08 03:44:52.725222 | orchestrator | skipping: [testbed-node-0] 2026-02-08 03:44:52.725233 | orchestrator | skipping: [testbed-node-1] 2026-02-08 03:44:52.725244 | orchestrator | skipping: [testbed-node-2] 2026-02-08 03:44:52.725255 | orchestrator | 2026-02-08 03:44:52.725290 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-02-08 03:44:52.725310 | orchestrator | Sunday 08 February 2026 03:44:48 +0000 (0:00:00.667) 0:00:19.662 ******* 2026-02-08 03:44:52.725328 | orchestrator | skipping: [testbed-node-3] 2026-02-08 03:44:52.725347 | orchestrator | skipping: [testbed-node-4] 2026-02-08 03:44:52.725364 | orchestrator | skipping: [testbed-node-5] 2026-02-08 03:44:52.725381 | orchestrator | skipping: [testbed-node-0] 2026-02-08 03:44:52.725399 | orchestrator | skipping: [testbed-node-1] 2026-02-08 03:44:52.725417 | orchestrator | skipping: [testbed-node-2] 2026-02-08 03:44:52.725434 | orchestrator | 2026-02-08 03:44:52.725452 | orchestrator | TASK [ceph-facts : 
Set_fact build devices from resolved symlinks] ************** 2026-02-08 03:44:52.725470 | orchestrator | Sunday 08 February 2026 03:44:49 +0000 (0:00:00.536) 0:00:20.199 ******* 2026-02-08 03:44:52.725488 | orchestrator | skipping: [testbed-node-3] 2026-02-08 03:44:52.725507 | orchestrator | skipping: [testbed-node-4] 2026-02-08 03:44:52.725525 | orchestrator | skipping: [testbed-node-5] 2026-02-08 03:44:52.725542 | orchestrator | skipping: [testbed-node-0] 2026-02-08 03:44:52.725560 | orchestrator | skipping: [testbed-node-1] 2026-02-08 03:44:52.725577 | orchestrator | skipping: [testbed-node-2] 2026-02-08 03:44:52.725593 | orchestrator | 2026-02-08 03:44:52.725604 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-02-08 03:44:52.725615 | orchestrator | Sunday 08 February 2026 03:44:49 +0000 (0:00:00.649) 0:00:20.849 ******* 2026-02-08 03:44:52.725626 | orchestrator | skipping: [testbed-node-3] 2026-02-08 03:44:52.725637 | orchestrator | skipping: [testbed-node-4] 2026-02-08 03:44:52.725647 | orchestrator | skipping: [testbed-node-5] 2026-02-08 03:44:52.725673 | orchestrator | skipping: [testbed-node-0] 2026-02-08 03:44:52.725692 | orchestrator | skipping: [testbed-node-1] 2026-02-08 03:44:52.725703 | orchestrator | skipping: [testbed-node-2] 2026-02-08 03:44:52.725717 | orchestrator | 2026-02-08 03:44:52.725735 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-02-08 03:44:52.725752 | orchestrator | Sunday 08 February 2026 03:44:50 +0000 (0:00:00.557) 0:00:21.407 ******* 2026-02-08 03:44:52.725770 | orchestrator | skipping: [testbed-node-3] 2026-02-08 03:44:52.725789 | orchestrator | skipping: [testbed-node-4] 2026-02-08 03:44:52.725806 | orchestrator | skipping: [testbed-node-5] 2026-02-08 03:44:52.725826 | orchestrator | skipping: [testbed-node-0] 2026-02-08 03:44:52.725845 | orchestrator | skipping: [testbed-node-1] 2026-02-08 03:44:52.725865 | orchestrator 
| skipping: [testbed-node-2] 2026-02-08 03:44:52.725883 | orchestrator | 2026-02-08 03:44:52.725901 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-02-08 03:44:52.725919 | orchestrator | Sunday 08 February 2026 03:44:51 +0000 (0:00:00.708) 0:00:22.115 ******* 2026-02-08 03:44:52.725930 | orchestrator | skipping: [testbed-node-3] 2026-02-08 03:44:52.725941 | orchestrator | skipping: [testbed-node-4] 2026-02-08 03:44:52.725951 | orchestrator | skipping: [testbed-node-5] 2026-02-08 03:44:52.725962 | orchestrator | skipping: [testbed-node-0] 2026-02-08 03:44:52.725972 | orchestrator | skipping: [testbed-node-1] 2026-02-08 03:44:52.725983 | orchestrator | skipping: [testbed-node-2] 2026-02-08 03:44:52.725994 | orchestrator | 2026-02-08 03:44:52.726004 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-02-08 03:44:52.726107 | orchestrator | Sunday 08 February 2026 03:44:51 +0000 (0:00:00.617) 0:00:22.733 ******* 2026-02-08 03:44:52.726131 | orchestrator | skipping: [testbed-node-3] 2026-02-08 03:44:52.726149 | orchestrator | skipping: [testbed-node-4] 2026-02-08 03:44:52.726166 | orchestrator | skipping: [testbed-node-5] 2026-02-08 03:44:52.726184 | orchestrator | skipping: [testbed-node-0] 2026-02-08 03:44:52.726203 | orchestrator | skipping: [testbed-node-1] 2026-02-08 03:44:52.726220 | orchestrator | skipping: [testbed-node-2] 2026-02-08 03:44:52.726238 | orchestrator | 2026-02-08 03:44:52.726257 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-02-08 03:44:52.726322 | orchestrator | Sunday 08 February 2026 03:44:52 +0000 (0:00:00.862) 0:00:23.595 ******* 2026-02-08 03:44:52.726345 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': 
['dm-name-ceph--658e9559--2696--538a--a0a4--811fe95d0be4-osd--block--658e9559--2696--538a--a0a4--811fe95d0be4', 'dm-uuid-LVM-0Xks5ejJzKHIGgn8kHKR683njlf26z09cwGJpvXAPY9denpvHRscKicxVHCDOUqc'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-02-08 03:44:52.726366 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--edf9913e--48af--595a--836b--515c584cb757-osd--block--edf9913e--48af--595a--836b--515c584cb757', 'dm-uuid-LVM-L5RzS25dNAwEfaxQtT2dCZejWp1FAHTXCd6gt7yIXasPp3uR45a3LLQBdDhLIQVH'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-02-08 03:44:52.726416 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-08 03:44:52.856048 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 
Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-08 03:44:52.856147 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-08 03:44:52.856156 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-08 03:44:52.856162 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-08 03:44:52.856168 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-08 03:44:52.856173 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': 
{'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-08 03:44:52.856178 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-08 03:44:52.856216 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8eb95c7e-79eb-481c-a9c3-b8351915337f', 'scsi-SQEMU_QEMU_HARDDISK_8eb95c7e-79eb-481c-a9c3-b8351915337f'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8eb95c7e-79eb-481c-a9c3-b8351915337f-part1', 'scsi-SQEMU_QEMU_HARDDISK_8eb95c7e-79eb-481c-a9c3-b8351915337f-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8eb95c7e-79eb-481c-a9c3-b8351915337f-part14', 'scsi-SQEMU_QEMU_HARDDISK_8eb95c7e-79eb-481c-a9c3-b8351915337f-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_8eb95c7e-79eb-481c-a9c3-b8351915337f-part15', 'scsi-SQEMU_QEMU_HARDDISK_8eb95c7e-79eb-481c-a9c3-b8351915337f-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8eb95c7e-79eb-481c-a9c3-b8351915337f-part16', 'scsi-SQEMU_QEMU_HARDDISK_8eb95c7e-79eb-481c-a9c3-b8351915337f-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-08 03:44:52.856241 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--658e9559--2696--538a--a0a4--811fe95d0be4-osd--block--658e9559--2696--538a--a0a4--811fe95d0be4'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-a30P2b-igV6-wMzf-UnqL-xNhF-mfyy-hPRy0q', 'scsi-0QEMU_QEMU_HARDDISK_f936cccd-0c4c-4cd7-b507-1bacbfb024c1', 'scsi-SQEMU_QEMU_HARDDISK_f936cccd-0c4c-4cd7-b507-1bacbfb024c1'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-08 03:44:52.856248 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--edf9913e--48af--595a--836b--515c584cb757-osd--block--edf9913e--48af--595a--836b--515c584cb757'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-loHc10-mddm-V6c9-PmX5-I7TN-7pE7-Bu9UYi', 'scsi-0QEMU_QEMU_HARDDISK_f64e84f9-05a0-4abf-b38a-86e604a2541e', 'scsi-SQEMU_QEMU_HARDDISK_f64e84f9-05a0-4abf-b38a-86e604a2541e'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-08 03:44:52.856255 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--1f36c880--548c--5a66--856f--2c4e799d94fc-osd--block--1f36c880--548c--5a66--856f--2c4e799d94fc', 'dm-uuid-LVM-EUBg1b33eC5p7VqB18wXpct56RBS9v7i4yMbT242rfLMHwz72rUJ4I0Vm8FBkphH'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 
'virtual': 1}})  2026-02-08 03:44:52.856265 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1b3b2ead-9b22-4b4d-a30d-f81b3b57c055', 'scsi-SQEMU_QEMU_HARDDISK_1b3b2ead-9b22-4b4d-a30d-f81b3b57c055'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-08 03:44:52.856280 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-08-02-32-43-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-08 03:44:53.028777 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--98a4cb59--dd7a--5ec9--b94d--174a40339046-osd--block--98a4cb59--dd7a--5ec9--b94d--174a40339046', 'dm-uuid-LVM-yHxwYWjXM5rCMy03I7W1d35MN3jq2cawRH7KXMBBm3D5PVB6eMaUeZw4tW6dlYUm'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-02-08 03:44:53.028847 | orchestrator | skipping: [testbed-node-4] => 
(item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-08 03:44:53.028855 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-08 03:44:53.028860 | orchestrator | skipping: [testbed-node-3] 2026-02-08 03:44:53.028866 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-08 03:44:53.028870 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-08 03:44:53.028874 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 
'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-08 03:44:53.028878 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-08 03:44:53.028909 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-08 03:44:53.028913 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-08 03:44:53.028928 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--7ad89cb8--326d--5a7d--8045--6e04c12be05a-osd--block--7ad89cb8--326d--5a7d--8045--6e04c12be05a', 'dm-uuid-LVM-rcT9E5PUfbc9LNMW28fCAUBBadlLkYxWUV2PuYXg8SGZaHizGQ3g7wDUilCwJsOM'], 
'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-02-08 03:44:53.028935 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6152c601-f22c-4ab1-825c-0b7a8c2f9bf8', 'scsi-SQEMU_QEMU_HARDDISK_6152c601-f22c-4ab1-825c-0b7a8c2f9bf8'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6152c601-f22c-4ab1-825c-0b7a8c2f9bf8-part1', 'scsi-SQEMU_QEMU_HARDDISK_6152c601-f22c-4ab1-825c-0b7a8c2f9bf8-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6152c601-f22c-4ab1-825c-0b7a8c2f9bf8-part14', 'scsi-SQEMU_QEMU_HARDDISK_6152c601-f22c-4ab1-825c-0b7a8c2f9bf8-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6152c601-f22c-4ab1-825c-0b7a8c2f9bf8-part15', 'scsi-SQEMU_QEMU_HARDDISK_6152c601-f22c-4ab1-825c-0b7a8c2f9bf8-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6152c601-f22c-4ab1-825c-0b7a8c2f9bf8-part16', 'scsi-SQEMU_QEMU_HARDDISK_6152c601-f22c-4ab1-825c-0b7a8c2f9bf8-part16'], 'labels': 
['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-08 03:44:53.028941 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--b3e05e81--e469--5668--9a53--5e8f92025307-osd--block--b3e05e81--e469--5668--9a53--5e8f92025307', 'dm-uuid-LVM-aSc4wxc22lxsBl3bsZYV01tD6GZC6hnhTrIfEw7ihzB97cb6a1fBLoGV7FMIqFCx'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-02-08 03:44:53.028952 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--1f36c880--548c--5a66--856f--2c4e799d94fc-osd--block--1f36c880--548c--5a66--856f--2c4e799d94fc'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-98ZUfd-U8I1-l7ve-H02s-xtrR-VP1N-Q4DCna', 'scsi-0QEMU_QEMU_HARDDISK_2c937877-c8d8-449b-a5f6-0239aca924e2', 'scsi-SQEMU_QEMU_HARDDISK_2c937877-c8d8-449b-a5f6-0239aca924e2'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-08 03:44:53.028962 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-08 03:44:53.126405 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-08 03:44:53.126503 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--98a4cb59--dd7a--5ec9--b94d--174a40339046-osd--block--98a4cb59--dd7a--5ec9--b94d--174a40339046'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-HUiqZb-DVo1-dI2F-bvgQ-kR6m-4ift-nZXZfU', 'scsi-0QEMU_QEMU_HARDDISK_e630d271-3aac-4ce5-a41f-fdcd87f60fea', 'scsi-SQEMU_QEMU_HARDDISK_e630d271-3aac-4ce5-a41f-fdcd87f60fea'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-08 03:44:53.126521 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-08 03:44:53.126533 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-08 03:44:53.126544 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_33bf36ec-77e2-4563-8915-2d028f665133', 'scsi-SQEMU_QEMU_HARDDISK_33bf36ec-77e2-4563-8915-2d028f665133'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-08 03:44:53.126582 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-08 03:44:53.126626 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-08 03:44:53.126644 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-08-02-32-46-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}) 
 2026-02-08 03:44:53.126683 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-08 03:44:53.126704 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-08 03:44:53.126725 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0b6d2541-fe07-44e8-aadf-a529695f9c1d', 'scsi-SQEMU_QEMU_HARDDISK_0b6d2541-fe07-44e8-aadf-a529695f9c1d'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0b6d2541-fe07-44e8-aadf-a529695f9c1d-part1', 'scsi-SQEMU_QEMU_HARDDISK_0b6d2541-fe07-44e8-aadf-a529695f9c1d-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0b6d2541-fe07-44e8-aadf-a529695f9c1d-part14', 'scsi-SQEMU_QEMU_HARDDISK_0b6d2541-fe07-44e8-aadf-a529695f9c1d-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0b6d2541-fe07-44e8-aadf-a529695f9c1d-part15', 'scsi-SQEMU_QEMU_HARDDISK_0b6d2541-fe07-44e8-aadf-a529695f9c1d-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0b6d2541-fe07-44e8-aadf-a529695f9c1d-part16', 'scsi-SQEMU_QEMU_HARDDISK_0b6d2541-fe07-44e8-aadf-a529695f9c1d-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-08 03:44:53.126754 | 
orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--7ad89cb8--326d--5a7d--8045--6e04c12be05a-osd--block--7ad89cb8--326d--5a7d--8045--6e04c12be05a'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-rTZdUO-b7Sr-jZr2-VH77-tbu4-uLLV-wifIFG', 'scsi-0QEMU_QEMU_HARDDISK_fd096023-3e18-4205-a743-fc49c7d9ed02', 'scsi-SQEMU_QEMU_HARDDISK_fd096023-3e18-4205-a743-fc49c7d9ed02'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-08 03:44:53.126773 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--b3e05e81--e469--5668--9a53--5e8f92025307-osd--block--b3e05e81--e469--5668--9a53--5e8f92025307'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-0tVmcc-zy2m-2uFM-aZrn-CnbJ-B058-J5TU15', 'scsi-0QEMU_QEMU_HARDDISK_88e353e1-d5f5-455b-9174-972f0fde258a', 'scsi-SQEMU_QEMU_HARDDISK_88e353e1-d5f5-455b-9174-972f0fde258a'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-08 03:44:53.467070 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_380fccde-fc16-4afd-8581-e221e230c62f', 'scsi-SQEMU_QEMU_HARDDISK_380fccde-fc16-4afd-8581-e221e230c62f'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-08 03:44:53.467144 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-08-02-32-42-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-08 03:44:53.467151 | orchestrator | skipping: [testbed-node-4] 2026-02-08 03:44:53.467158 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-08 03:44:53.467163 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 
'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-08 03:44:53.467181 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-08 03:44:53.467195 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-08 03:44:53.467199 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-08 03:44:53.467203 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-08 03:44:53.467217 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop6', 
'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-08 03:44:53.467221 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-08 03:44:53.467227 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3e566a5b-bb0c-4ece-9641-6f7efc673353', 'scsi-SQEMU_QEMU_HARDDISK_3e566a5b-bb0c-4ece-9641-6f7efc673353'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3e566a5b-bb0c-4ece-9641-6f7efc673353-part1', 'scsi-SQEMU_QEMU_HARDDISK_3e566a5b-bb0c-4ece-9641-6f7efc673353-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3e566a5b-bb0c-4ece-9641-6f7efc673353-part14', 'scsi-SQEMU_QEMU_HARDDISK_3e566a5b-bb0c-4ece-9641-6f7efc673353-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': 
{'ids': ['scsi-0QEMU_QEMU_HARDDISK_3e566a5b-bb0c-4ece-9641-6f7efc673353-part15', 'scsi-SQEMU_QEMU_HARDDISK_3e566a5b-bb0c-4ece-9641-6f7efc673353-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3e566a5b-bb0c-4ece-9641-6f7efc673353-part16', 'scsi-SQEMU_QEMU_HARDDISK_3e566a5b-bb0c-4ece-9641-6f7efc673353-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-08 03:44:53.467238 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-08-02-32-50-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-08 03:44:53.467243 | orchestrator | skipping: [testbed-node-5] 2026-02-08 03:44:53.467247 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': 
'512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-08 03:44:53.467251 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-08 03:44:53.467258 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-08 03:44:53.745170 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-08 03:44:53.745245 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-08 03:44:53.745269 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-08 03:44:53.745274 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-08 03:44:53.745280 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-08 03:44:53.745310 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bd3944a6-94a2-4419-9995-37d6054a2669', 'scsi-SQEMU_QEMU_HARDDISK_bd3944a6-94a2-4419-9995-37d6054a2669'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bd3944a6-94a2-4419-9995-37d6054a2669-part1', 'scsi-SQEMU_QEMU_HARDDISK_bd3944a6-94a2-4419-9995-37d6054a2669-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bd3944a6-94a2-4419-9995-37d6054a2669-part14', 'scsi-SQEMU_QEMU_HARDDISK_bd3944a6-94a2-4419-9995-37d6054a2669-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bd3944a6-94a2-4419-9995-37d6054a2669-part15', 'scsi-SQEMU_QEMU_HARDDISK_bd3944a6-94a2-4419-9995-37d6054a2669-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bd3944a6-94a2-4419-9995-37d6054a2669-part16', 'scsi-SQEMU_QEMU_HARDDISK_bd3944a6-94a2-4419-9995-37d6054a2669-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-08 03:44:53.745319 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-08-02-32-44-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-08 03:44:53.745326 | orchestrator | skipping: [testbed-node-0] 2026-02-08 03:44:53.745336 | orchestrator | skipping: [testbed-node-1] 2026-02-08 03:44:53.745342 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-08 03:44:53.745347 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-08 03:44:53.745352 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': 
'512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-02-08 03:44:53.745360 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-02-08 03:44:53.745365 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-02-08 03:44:53.745370 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-02-08 03:44:53.745375 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-02-08 03:44:53.745385 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-02-08 03:44:54.011243 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7f0c6f27-797f-46da-82fd-067f06c1f72b', 'scsi-SQEMU_QEMU_HARDDISK_7f0c6f27-797f-46da-82fd-067f06c1f72b'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7f0c6f27-797f-46da-82fd-067f06c1f72b-part1', 'scsi-SQEMU_QEMU_HARDDISK_7f0c6f27-797f-46da-82fd-067f06c1f72b-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7f0c6f27-797f-46da-82fd-067f06c1f72b-part14', 'scsi-SQEMU_QEMU_HARDDISK_7f0c6f27-797f-46da-82fd-067f06c1f72b-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7f0c6f27-797f-46da-82fd-067f06c1f72b-part15', 'scsi-SQEMU_QEMU_HARDDISK_7f0c6f27-797f-46da-82fd-067f06c1f72b-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7f0c6f27-797f-46da-82fd-067f06c1f72b-part16', 'scsi-SQEMU_QEMU_HARDDISK_7f0c6f27-797f-46da-82fd-067f06c1f72b-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-02-08 03:44:54.011369 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-08-02-32-47-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})
2026-02-08 03:44:54.011382 | orchestrator | skipping: [testbed-node-2]
2026-02-08 03:44:54.011391 | orchestrator |
2026-02-08 03:44:54.011400 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] ***
2026-02-08 03:44:54.011409 | orchestrator | Sunday 08 February 2026 03:44:53 +0000 (0:00:01.133) 0:00:24.729 *******
2026-02-08 03:44:54.011418 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--658e9559--2696--538a--a0a4--811fe95d0be4-osd--block--658e9559--2696--538a--a0a4--811fe95d0be4', 'dm-uuid-LVM-0Xks5ejJzKHIGgn8kHKR683njlf26z09cwGJpvXAPY9denpvHRscKicxVHCDOUqc'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-08 03:44:54.011445 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--edf9913e--48af--595a--836b--515c584cb757-osd--block--edf9913e--48af--595a--836b--515c584cb757', 'dm-uuid-LVM-L5RzS25dNAwEfaxQtT2dCZejWp1FAHTXCd6gt7yIXasPp3uR45a3LLQBdDhLIQVH'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-08 03:44:54.011486 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-08 03:44:54.011496 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-08 03:44:54.011504 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-08 03:44:54.011516 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-08 03:44:54.011524 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-08 03:44:54.011537 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--1f36c880--548c--5a66--856f--2c4e799d94fc-osd--block--1f36c880--548c--5a66--856f--2c4e799d94fc', 'dm-uuid-LVM-EUBg1b33eC5p7VqB18wXpct56RBS9v7i4yMbT242rfLMHwz72rUJ4I0Vm8FBkphH'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-08 03:44:54.366697 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-08 03:44:54.366782 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-08 03:44:54.366794 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--98a4cb59--dd7a--5ec9--b94d--174a40339046-osd--block--98a4cb59--dd7a--5ec9--b94d--174a40339046', 'dm-uuid-LVM-yHxwYWjXM5rCMy03I7W1d35MN3jq2cawRH7KXMBBm3D5PVB6eMaUeZw4tW6dlYUm'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-08 03:44:54.366811 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-08 03:44:54.366816 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-08 03:44:54.366832 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8eb95c7e-79eb-481c-a9c3-b8351915337f', 'scsi-SQEMU_QEMU_HARDDISK_8eb95c7e-79eb-481c-a9c3-b8351915337f'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8eb95c7e-79eb-481c-a9c3-b8351915337f-part1', 'scsi-SQEMU_QEMU_HARDDISK_8eb95c7e-79eb-481c-a9c3-b8351915337f-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8eb95c7e-79eb-481c-a9c3-b8351915337f-part14', 'scsi-SQEMU_QEMU_HARDDISK_8eb95c7e-79eb-481c-a9c3-b8351915337f-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8eb95c7e-79eb-481c-a9c3-b8351915337f-part15', 'scsi-SQEMU_QEMU_HARDDISK_8eb95c7e-79eb-481c-a9c3-b8351915337f-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8eb95c7e-79eb-481c-a9c3-b8351915337f-part16', 'scsi-SQEMU_QEMU_HARDDISK_8eb95c7e-79eb-481c-a9c3-b8351915337f-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-08 03:44:54.366855 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-08 03:44:54.366863 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--658e9559--2696--538a--a0a4--811fe95d0be4-osd--block--658e9559--2696--538a--a0a4--811fe95d0be4'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-a30P2b-igV6-wMzf-UnqL-xNhF-mfyy-hPRy0q', 'scsi-0QEMU_QEMU_HARDDISK_f936cccd-0c4c-4cd7-b507-1bacbfb024c1', 'scsi-SQEMU_QEMU_HARDDISK_f936cccd-0c4c-4cd7-b507-1bacbfb024c1'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-08 03:44:54.366869 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-08 03:44:54.366878 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--edf9913e--48af--595a--836b--515c584cb757-osd--block--edf9913e--48af--595a--836b--515c584cb757'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-loHc10-mddm-V6c9-PmX5-I7TN-7pE7-Bu9UYi', 'scsi-0QEMU_QEMU_HARDDISK_f64e84f9-05a0-4abf-b38a-86e604a2541e', 'scsi-SQEMU_QEMU_HARDDISK_f64e84f9-05a0-4abf-b38a-86e604a2541e'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-08 03:44:54.422417 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-08 03:44:54.422512 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1b3b2ead-9b22-4b4d-a30d-f81b3b57c055', 'scsi-SQEMU_QEMU_HARDDISK_1b3b2ead-9b22-4b4d-a30d-f81b3b57c055'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-08 03:44:54.422541 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-08 03:44:54.422551 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-08-02-32-43-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-08 03:44:54.422560 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-08 03:44:54.422589 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-08 03:44:54.422616 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-08 03:44:54.422634 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6152c601-f22c-4ab1-825c-0b7a8c2f9bf8', 'scsi-SQEMU_QEMU_HARDDISK_6152c601-f22c-4ab1-825c-0b7a8c2f9bf8'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6152c601-f22c-4ab1-825c-0b7a8c2f9bf8-part1', 'scsi-SQEMU_QEMU_HARDDISK_6152c601-f22c-4ab1-825c-0b7a8c2f9bf8-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6152c601-f22c-4ab1-825c-0b7a8c2f9bf8-part14', 'scsi-SQEMU_QEMU_HARDDISK_6152c601-f22c-4ab1-825c-0b7a8c2f9bf8-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6152c601-f22c-4ab1-825c-0b7a8c2f9bf8-part15', 'scsi-SQEMU_QEMU_HARDDISK_6152c601-f22c-4ab1-825c-0b7a8c2f9bf8-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6152c601-f22c-4ab1-825c-0b7a8c2f9bf8-part16', 'scsi-SQEMU_QEMU_HARDDISK_6152c601-f22c-4ab1-825c-0b7a8c2f9bf8-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-08 03:44:54.422645 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--7ad89cb8--326d--5a7d--8045--6e04c12be05a-osd--block--7ad89cb8--326d--5a7d--8045--6e04c12be05a', 'dm-uuid-LVM-rcT9E5PUfbc9LNMW28fCAUBBadlLkYxWUV2PuYXg8SGZaHizGQ3g7wDUilCwJsOM'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-08 03:44:54.422669 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--1f36c880--548c--5a66--856f--2c4e799d94fc-osd--block--1f36c880--548c--5a66--856f--2c4e799d94fc'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-98ZUfd-U8I1-l7ve-H02s-xtrR-VP1N-Q4DCna', 'scsi-0QEMU_QEMU_HARDDISK_2c937877-c8d8-449b-a5f6-0239aca924e2', 'scsi-SQEMU_QEMU_HARDDISK_2c937877-c8d8-449b-a5f6-0239aca924e2'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-08 03:44:54.688707 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--b3e05e81--e469--5668--9a53--5e8f92025307-osd--block--b3e05e81--e469--5668--9a53--5e8f92025307', 'dm-uuid-LVM-aSc4wxc22lxsBl3bsZYV01tD6GZC6hnhTrIfEw7ihzB97cb6a1fBLoGV7FMIqFCx'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-08 03:44:54.688804 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-08 03:44:54.688816 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--98a4cb59--dd7a--5ec9--b94d--174a40339046-osd--block--98a4cb59--dd7a--5ec9--b94d--174a40339046'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-HUiqZb-DVo1-dI2F-bvgQ-kR6m-4ift-nZXZfU', 'scsi-0QEMU_QEMU_HARDDISK_e630d271-3aac-4ce5-a41f-fdcd87f60fea', 'scsi-SQEMU_QEMU_HARDDISK_e630d271-3aac-4ce5-a41f-fdcd87f60fea'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-08 03:44:54.688847 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-08 03:44:54.688855 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_33bf36ec-77e2-4563-8915-2d028f665133', 'scsi-SQEMU_QEMU_HARDDISK_33bf36ec-77e2-4563-8915-2d028f665133'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-08 03:44:54.688876 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-08 03:44:54.688884 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-08-02-32-46-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-08 03:44:54.688892 | orchestrator | skipping: [testbed-node-3]
2026-02-08 03:44:54.688905 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-08 03:44:54.688913 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-08 03:44:54.688925 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-08 03:44:54.688932 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-08 03:44:54.688945 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-08 03:44:54.797190 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-08 03:44:54.797337 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-08 03:44:54.797367 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-08 03:44:54.797517 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc.
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3e566a5b-bb0c-4ece-9641-6f7efc673353', 'scsi-SQEMU_QEMU_HARDDISK_3e566a5b-bb0c-4ece-9641-6f7efc673353'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3e566a5b-bb0c-4ece-9641-6f7efc673353-part1', 'scsi-SQEMU_QEMU_HARDDISK_3e566a5b-bb0c-4ece-9641-6f7efc673353-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3e566a5b-bb0c-4ece-9641-6f7efc673353-part14', 'scsi-SQEMU_QEMU_HARDDISK_3e566a5b-bb0c-4ece-9641-6f7efc673353-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3e566a5b-bb0c-4ece-9641-6f7efc673353-part15', 'scsi-SQEMU_QEMU_HARDDISK_3e566a5b-bb0c-4ece-9641-6f7efc673353-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3e566a5b-bb0c-4ece-9641-6f7efc673353-part16', 'scsi-SQEMU_QEMU_HARDDISK_3e566a5b-bb0c-4ece-9641-6f7efc673353-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2026-02-08 03:44:54.797581 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-08-02-32-50-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-08 03:44:54.797613 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-08 03:44:54.797634 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 
'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-08 03:44:54.797666 | orchestrator | skipping: [testbed-node-4] 2026-02-08 03:44:54.797691 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-08 03:44:54.797713 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-08 03:44:54.797735 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 
'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-08 03:44:54.797774 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0b6d2541-fe07-44e8-aadf-a529695f9c1d', 'scsi-SQEMU_QEMU_HARDDISK_0b6d2541-fe07-44e8-aadf-a529695f9c1d'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0b6d2541-fe07-44e8-aadf-a529695f9c1d-part1', 'scsi-SQEMU_QEMU_HARDDISK_0b6d2541-fe07-44e8-aadf-a529695f9c1d-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0b6d2541-fe07-44e8-aadf-a529695f9c1d-part14', 'scsi-SQEMU_QEMU_HARDDISK_0b6d2541-fe07-44e8-aadf-a529695f9c1d-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0b6d2541-fe07-44e8-aadf-a529695f9c1d-part15', 'scsi-SQEMU_QEMU_HARDDISK_0b6d2541-fe07-44e8-aadf-a529695f9c1d-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0b6d2541-fe07-44e8-aadf-a529695f9c1d-part16', 'scsi-SQEMU_QEMU_HARDDISK_0b6d2541-fe07-44e8-aadf-a529695f9c1d-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': 
['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-08 03:44:54.920280 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-08 03:44:54.920389 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--7ad89cb8--326d--5a7d--8045--6e04c12be05a-osd--block--7ad89cb8--326d--5a7d--8045--6e04c12be05a'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-rTZdUO-b7Sr-jZr2-VH77-tbu4-uLLV-wifIFG', 'scsi-0QEMU_QEMU_HARDDISK_fd096023-3e18-4205-a743-fc49c7d9ed02', 'scsi-SQEMU_QEMU_HARDDISK_fd096023-3e18-4205-a743-fc49c7d9ed02'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-08 03:44:54.920408 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-08 03:44:54.920431 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--b3e05e81--e469--5668--9a53--5e8f92025307-osd--block--b3e05e81--e469--5668--9a53--5e8f92025307'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-0tVmcc-zy2m-2uFM-aZrn-CnbJ-B058-J5TU15', 'scsi-0QEMU_QEMU_HARDDISK_88e353e1-d5f5-455b-9174-972f0fde258a', 'scsi-SQEMU_QEMU_HARDDISK_88e353e1-d5f5-455b-9174-972f0fde258a'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-08 03:44:54.920448 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-08 03:44:54.920511 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-08 03:44:54.920530 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 
'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_380fccde-fc16-4afd-8581-e221e230c62f', 'scsi-SQEMU_QEMU_HARDDISK_380fccde-fc16-4afd-8581-e221e230c62f'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-08 03:44:54.920552 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-08 03:44:54.920569 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 
'ansible_loop_var': 'item'})  2026-02-08 03:44:54.920585 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-08-02-32-42-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-08 03:44:54.920608 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-08 03:44:54.920645 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 
'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-08 03:44:55.155332 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bd3944a6-94a2-4419-9995-37d6054a2669', 'scsi-SQEMU_QEMU_HARDDISK_bd3944a6-94a2-4419-9995-37d6054a2669'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bd3944a6-94a2-4419-9995-37d6054a2669-part1', 'scsi-SQEMU_QEMU_HARDDISK_bd3944a6-94a2-4419-9995-37d6054a2669-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bd3944a6-94a2-4419-9995-37d6054a2669-part14', 'scsi-SQEMU_QEMU_HARDDISK_bd3944a6-94a2-4419-9995-37d6054a2669-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bd3944a6-94a2-4419-9995-37d6054a2669-part15', 'scsi-SQEMU_QEMU_HARDDISK_bd3944a6-94a2-4419-9995-37d6054a2669-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bd3944a6-94a2-4419-9995-37d6054a2669-part16', 'scsi-SQEMU_QEMU_HARDDISK_bd3944a6-94a2-4419-9995-37d6054a2669-part16'], 'labels': ['BOOT'], 
'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-08 03:44:55.155446 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-08-02-32-44-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-08 03:44:55.155486 | orchestrator | skipping: [testbed-node-0] 2026-02-08 03:44:55.155499 | orchestrator | skipping: [testbed-node-5] 2026-02-08 03:44:55.155509 | orchestrator | skipping: [testbed-node-1] 2026-02-08 03:44:55.155519 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 
'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-08 03:44:55.155546 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-08 03:44:55.155557 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-08 03:44:55.155567 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 
'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-08 03:44:55.155576 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-08 03:44:55.155584 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-08 03:44:55.155612 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 
'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-08 03:44:55.155622 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-08 03:44:55.155640 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7f0c6f27-797f-46da-82fd-067f06c1f72b', 'scsi-SQEMU_QEMU_HARDDISK_7f0c6f27-797f-46da-82fd-067f06c1f72b'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7f0c6f27-797f-46da-82fd-067f06c1f72b-part1', 'scsi-SQEMU_QEMU_HARDDISK_7f0c6f27-797f-46da-82fd-067f06c1f72b-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7f0c6f27-797f-46da-82fd-067f06c1f72b-part14', 'scsi-SQEMU_QEMU_HARDDISK_7f0c6f27-797f-46da-82fd-067f06c1f72b-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7f0c6f27-797f-46da-82fd-067f06c1f72b-part15', 'scsi-SQEMU_QEMU_HARDDISK_7f0c6f27-797f-46da-82fd-067f06c1f72b-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7f0c6f27-797f-46da-82fd-067f06c1f72b-part16', 'scsi-SQEMU_QEMU_HARDDISK_7f0c6f27-797f-46da-82fd-067f06c1f72b-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2026-02-08 03:45:07.184512 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-08-02-32-47-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-08 03:45:07.184641 | orchestrator | skipping: [testbed-node-2]
2026-02-08 03:45:07.184659 | orchestrator |
2026-02-08 03:45:07.184672 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ******************************
2026-02-08 03:45:07.184686 | orchestrator | Sunday 08 February 2026 03:44:55 +0000 (0:00:01.412) 0:00:26.142 *******
2026-02-08 03:45:07.184697 | orchestrator | ok: [testbed-node-3]
2026-02-08 03:45:07.184705 | orchestrator | ok: [testbed-node-4]
2026-02-08 03:45:07.184711 | orchestrator | ok: [testbed-node-5]
2026-02-08 03:45:07.184717 | orchestrator | ok: [testbed-node-0]
2026-02-08 03:45:07.184723 | orchestrator | ok: [testbed-node-1]
2026-02-08 03:45:07.184729 | orchestrator | ok: [testbed-node-2]
2026-02-08 03:45:07.184735 | orchestrator |
2026-02-08 03:45:07.184742 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] ***************
2026-02-08 03:45:07.184748 | orchestrator | Sunday 08 February 2026 03:44:56 +0000 (0:00:01.017) 0:00:27.160 *******
2026-02-08 03:45:07.184754 | orchestrator | ok: [testbed-node-3]
2026-02-08 03:45:07.184761 | orchestrator | ok: [testbed-node-4]
2026-02-08 03:45:07.184767 | orchestrator | ok: [testbed-node-5]
2026-02-08 03:45:07.184773 | orchestrator | ok: [testbed-node-0]
2026-02-08 03:45:07.184779 | orchestrator | ok: [testbed-node-1]
2026-02-08 03:45:07.184785 | orchestrator | ok: [testbed-node-2]
2026-02-08 03:45:07.184791 | orchestrator |
2026-02-08 03:45:07.184797 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-02-08 03:45:07.184803 | orchestrator | Sunday 08 February 2026 03:44:57 +0000 (0:00:00.897) 0:00:28.057 *******
2026-02-08 03:45:07.184810 | orchestrator | skipping: [testbed-node-3]
2026-02-08 03:45:07.184816 | orchestrator | skipping: [testbed-node-4]
2026-02-08 03:45:07.184822 | orchestrator | skipping: [testbed-node-5]
2026-02-08 03:45:07.184828 | orchestrator | skipping: [testbed-node-0]
2026-02-08 03:45:07.184834 | orchestrator | skipping: [testbed-node-1]
2026-02-08 03:45:07.184840 | orchestrator | skipping: [testbed-node-2]
2026-02-08 03:45:07.184846 | orchestrator |
2026-02-08 03:45:07.184853 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-02-08 03:45:07.184859 | orchestrator | Sunday 08 February 2026 03:44:57 +0000 (0:00:00.583) 0:00:28.641 *******
2026-02-08 03:45:07.184865 | orchestrator | skipping: [testbed-node-3]
2026-02-08 03:45:07.184871 | orchestrator | skipping: [testbed-node-4]
2026-02-08 03:45:07.184877 | orchestrator | skipping: [testbed-node-5]
2026-02-08 03:45:07.184884 | orchestrator | skipping: [testbed-node-0]
2026-02-08 03:45:07.184890 | orchestrator | skipping: [testbed-node-1]
2026-02-08 03:45:07.184896 | orchestrator | skipping: [testbed-node-2]
2026-02-08 03:45:07.184902 | orchestrator |
2026-02-08 03:45:07.184909 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-02-08 03:45:07.184915 | orchestrator | Sunday 08 February 2026 03:44:58 +0000 (0:00:00.899) 0:00:29.541 *******
2026-02-08 03:45:07.184921 | orchestrator | skipping: [testbed-node-3]
2026-02-08 03:45:07.184927 | orchestrator | skipping: [testbed-node-4]
2026-02-08 03:45:07.184933 | orchestrator | skipping: [testbed-node-5]
2026-02-08 03:45:07.184939 | orchestrator | skipping: [testbed-node-0]
2026-02-08 03:45:07.184946 | orchestrator | skipping: [testbed-node-1]
2026-02-08 03:45:07.184952 | orchestrator | skipping: [testbed-node-2]
2026-02-08 03:45:07.184958 | orchestrator |
2026-02-08 03:45:07.184964 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-02-08 03:45:07.184970 | orchestrator | Sunday 08 February 2026 03:44:59 +0000 (0:00:00.693) 0:00:30.234 *******
2026-02-08 03:45:07.184976 | orchestrator | skipping: [testbed-node-3]
2026-02-08 03:45:07.184982 | orchestrator | skipping: [testbed-node-4]
2026-02-08 03:45:07.184989 | orchestrator | skipping: [testbed-node-5]
2026-02-08 03:45:07.185000 | orchestrator | skipping: [testbed-node-0]
2026-02-08 03:45:07.185007 | orchestrator | skipping: [testbed-node-1]
2026-02-08 03:45:07.185013 | orchestrator | skipping: [testbed-node-2]
2026-02-08 03:45:07.185019 | orchestrator |
2026-02-08 03:45:07.185025 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] *************************
2026-02-08 03:45:07.185031 | orchestrator | Sunday 08 February 2026 03:45:00 +0000 (0:00:00.950) 0:00:31.185 *******
2026-02-08 03:45:07.185038 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0)
2026-02-08 03:45:07.185045 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0)
2026-02-08 03:45:07.185051 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1)
2026-02-08 03:45:07.185057 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0)
2026-02-08 03:45:07.185063 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2)
2026-02-08 03:45:07.185094 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1)
2026-02-08 03:45:07.185101 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-02-08 03:45:07.185107 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1)
2026-02-08 03:45:07.185113 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-0)
2026-02-08 03:45:07.185119 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2)
2026-02-08 03:45:07.185125 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-0)
2026-02-08 03:45:07.185131 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2)
2026-02-08 03:45:07.185137 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1)
2026-02-08 03:45:07.185144 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-1)
2026-02-08 03:45:07.185150 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1)
2026-02-08 03:45:07.185167 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2)
2026-02-08 03:45:07.185174 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2)
2026-02-08 03:45:07.185180 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-2)
2026-02-08 03:45:07.185186 | orchestrator |
2026-02-08 03:45:07.185192 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] *************************
2026-02-08 03:45:07.185198 | orchestrator | Sunday 08 February 2026 03:45:02 +0000 (0:00:02.003) 0:00:33.188 *******
2026-02-08 03:45:07.185205 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2026-02-08 03:45:07.185212 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2026-02-08 03:45:07.185218 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2026-02-08 03:45:07.185224 | orchestrator | skipping: [testbed-node-3]
2026-02-08 03:45:07.185230 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2026-02-08 03:45:07.185243 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2026-02-08 03:45:07.185249 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2026-02-08 03:45:07.185256 | orchestrator | skipping: [testbed-node-4]
2026-02-08 03:45:07.185262 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2026-02-08 03:45:07.185268 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2026-02-08 03:45:07.185274 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2026-02-08 03:45:07.185280 | orchestrator | skipping: [testbed-node-5]
2026-02-08 03:45:07.185287 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-02-08 03:45:07.185293 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-02-08 03:45:07.185299 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-02-08 03:45:07.185305 | orchestrator | skipping: [testbed-node-0]
2026-02-08 03:45:07.185311 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)
2026-02-08 03:45:07.185318 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)
2026-02-08 03:45:07.185324 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)
2026-02-08 03:45:07.185330 | orchestrator | skipping: [testbed-node-1]
2026-02-08 03:45:07.185336 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)
2026-02-08 03:45:07.185342 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)
2026-02-08 03:45:07.185357 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)
2026-02-08 03:45:07.185363 | orchestrator | skipping: [testbed-node-2]
2026-02-08 03:45:07.185369 | orchestrator |
2026-02-08 03:45:07.185375 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] ***********************
2026-02-08 03:45:07.185381 | orchestrator | Sunday 08 February 2026 03:45:03 +0000 (0:00:01.022) 0:00:34.211 *******
2026-02-08 03:45:07.185388 | orchestrator | skipping: [testbed-node-0]
2026-02-08 03:45:07.185394 | orchestrator | skipping: [testbed-node-1]
2026-02-08 03:45:07.185400 | orchestrator | skipping: [testbed-node-2]
2026-02-08 03:45:07.185407 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-02-08 03:45:07.185413 | orchestrator |
2026-02-08 03:45:07.185420 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-02-08 03:45:07.185427 | orchestrator | Sunday 08 February 2026 03:45:04 +0000 (0:00:01.109) 0:00:35.321 *******
2026-02-08 03:45:07.185434 | orchestrator | skipping: [testbed-node-3]
2026-02-08 03:45:07.185440 | orchestrator | skipping: [testbed-node-4]
2026-02-08 03:45:07.185446 | orchestrator | skipping: [testbed-node-5]
2026-02-08 03:45:07.185452 | orchestrator |
2026-02-08 03:45:07.185458 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-02-08 03:45:07.185464 | orchestrator | Sunday 08 February 2026 03:45:04 +0000 (0:00:00.370) 0:00:35.691 *******
2026-02-08 03:45:07.185471 | orchestrator | skipping: [testbed-node-3]
2026-02-08 03:45:07.185477 | orchestrator | skipping: [testbed-node-4]
2026-02-08 03:45:07.185483 | orchestrator | skipping: [testbed-node-5]
2026-02-08 03:45:07.185489 | orchestrator |
2026-02-08 03:45:07.185495 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-02-08 03:45:07.185501 | orchestrator | Sunday 08 February 2026 03:45:05 +0000 (0:00:00.419) 0:00:36.110 *******
2026-02-08 03:45:07.185507 | orchestrator | skipping: [testbed-node-3]
2026-02-08 03:45:07.185513 | orchestrator | skipping: [testbed-node-4]
2026-02-08 03:45:07.185520 | orchestrator | skipping: [testbed-node-5]
2026-02-08 03:45:07.185526 | orchestrator |
2026-02-08 03:45:07.185532 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-02-08 03:45:07.185538 | orchestrator | Sunday 08 February 2026 03:45:05 +0000 (0:00:00.364) 0:00:36.475 *******
2026-02-08 03:45:07.185544 | orchestrator | ok: [testbed-node-3]
2026-02-08 03:45:07.185550 | orchestrator | ok: [testbed-node-4]
2026-02-08 03:45:07.185557 | orchestrator | ok: [testbed-node-5]
2026-02-08 03:45:07.185563 | orchestrator |
2026-02-08 03:45:07.185569 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-02-08 03:45:07.185575 | orchestrator | Sunday 08 February 2026 03:45:06 +0000 (0:00:00.864) 0:00:37.340 *******
2026-02-08 03:45:07.185581 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-02-08 03:45:07.185587 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-02-08 03:45:07.185594 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-02-08 03:45:07.185600 | orchestrator | skipping: [testbed-node-3]
2026-02-08 03:45:07.185606 | orchestrator |
2026-02-08 03:45:07.185613 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-02-08 03:45:07.185619 | orchestrator | Sunday 08 February 2026 03:45:06 +0000 (0:00:00.413) 0:00:37.753 *******
2026-02-08 03:45:07.185625 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-02-08 03:45:07.185631 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-02-08 03:45:07.185637 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-02-08 03:45:07.185643 | orchestrator | skipping: [testbed-node-3]
2026-02-08 03:45:07.185649 | orchestrator |
2026-02-08 03:45:07.185660 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-02-08 03:45:27.802566 | orchestrator | Sunday 08 February 2026 03:45:07 +0000 (0:00:00.412) 0:00:38.165 *******
2026-02-08 03:45:27.802721 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-02-08 03:45:27.802760 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-02-08 03:45:27.802783 | orchestrator | skipping: [testbed-node-3]
=> (item=testbed-node-5)
2026-02-08 03:45:27.802795 | orchestrator | skipping: [testbed-node-3]
2026-02-08 03:45:27.802807 | orchestrator |
2026-02-08 03:45:27.802819 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-02-08 03:45:27.802830 | orchestrator | Sunday 08 February 2026 03:45:07 +0000 (0:00:00.424) 0:00:38.590 *******
2026-02-08 03:45:27.802841 | orchestrator | ok: [testbed-node-3]
2026-02-08 03:45:27.802853 | orchestrator | ok: [testbed-node-4]
2026-02-08 03:45:27.802864 | orchestrator | ok: [testbed-node-5]
2026-02-08 03:45:27.802875 | orchestrator |
2026-02-08 03:45:27.802902 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-02-08 03:45:27.802913 | orchestrator | Sunday 08 February 2026 03:45:07 +0000 (0:00:00.370) 0:00:38.960 *******
2026-02-08 03:45:27.802924 | orchestrator | ok: [testbed-node-3] => (item=0)
2026-02-08 03:45:27.802936 | orchestrator | ok: [testbed-node-4] => (item=0)
2026-02-08 03:45:27.802947 | orchestrator | ok: [testbed-node-5] => (item=0)
2026-02-08 03:45:27.802957 | orchestrator |
2026-02-08 03:45:27.802968 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] **************************************
2026-02-08 03:45:27.802979 | orchestrator | Sunday 08 February 2026 03:45:08 +0000 (0:00:00.818) 0:00:39.779 *******
2026-02-08 03:45:27.802991 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-02-08 03:45:27.803002 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-08 03:45:27.803013 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-08 03:45:27.803024 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3)
2026-02-08 03:45:27.803035 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-02-08 03:45:27.803046 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-02-08 03:45:27.803057 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-02-08 03:45:27.803068 | orchestrator |
2026-02-08 03:45:27.803079 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ********************************
2026-02-08 03:45:27.803129 | orchestrator | Sunday 08 February 2026 03:45:10 +0000 (0:00:01.371) 0:00:41.151 *******
2026-02-08 03:45:27.803144 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-02-08 03:45:27.803157 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-08 03:45:27.803170 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-08 03:45:27.803183 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3)
2026-02-08 03:45:27.803195 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-02-08 03:45:27.803208 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-02-08 03:45:27.803221 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-02-08 03:45:27.803234 | orchestrator |
2026-02-08 03:45:27.803246 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-02-08 03:45:27.803258 | orchestrator | Sunday 08 February 2026 03:45:12 +0000 (0:00:02.474) 0:00:43.625 *******
2026-02-08 03:45:27.803272 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-08 03:45:27.803286 | orchestrator |
2026-02-08 03:45:27.803298 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-02-08 03:45:27.803312 | orchestrator | Sunday 08 February 2026 03:45:14 +0000 (0:00:01.516) 0:00:45.142 *******
2026-02-08 03:45:27.803336 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-08 03:45:27.803347 | orchestrator |
2026-02-08 03:45:27.803358 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-02-08 03:45:27.803368 | orchestrator | Sunday 08 February 2026 03:45:15 +0000 (0:00:01.352) 0:00:46.494 *******
2026-02-08 03:45:27.803379 | orchestrator | skipping: [testbed-node-3]
2026-02-08 03:45:27.803390 | orchestrator | skipping: [testbed-node-4]
2026-02-08 03:45:27.803401 | orchestrator | skipping: [testbed-node-5]
2026-02-08 03:45:27.803412 | orchestrator | ok: [testbed-node-0]
2026-02-08 03:45:27.803423 | orchestrator | ok: [testbed-node-1]
2026-02-08 03:45:27.803434 | orchestrator | ok: [testbed-node-2]
2026-02-08 03:45:27.803445 | orchestrator |
2026-02-08 03:45:27.803455 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-02-08 03:45:27.803466 | orchestrator | Sunday 08 February 2026 03:45:16 +0000 (0:00:01.271) 0:00:47.766 *******
2026-02-08 03:45:27.803477 | orchestrator | skipping: [testbed-node-0]
2026-02-08 03:45:27.803488 | orchestrator | ok: [testbed-node-3]
2026-02-08 03:45:27.803499 | orchestrator | ok: [testbed-node-4]
2026-02-08 03:45:27.803509 | orchestrator | skipping: [testbed-node-1]
2026-02-08 03:45:27.803520 | orchestrator | skipping: [testbed-node-2]
2026-02-08 03:45:27.803531 | orchestrator | ok: [testbed-node-5]
2026-02-08 03:45:27.803542 | orchestrator |
2026-02-08 03:45:27.803552 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-02-08 03:45:27.803563 | orchestrator | Sunday 08 February 2026 03:45:17 +0000 (0:00:00.761) 0:00:48.527 *******
2026-02-08 03:45:27.803574 | orchestrator | ok: [testbed-node-3]
2026-02-08 03:45:27.803584 | orchestrator | ok: [testbed-node-4]
2026-02-08 03:45:27.803595 | orchestrator | skipping: [testbed-node-0]
2026-02-08 03:45:27.803625 | orchestrator | ok: [testbed-node-5]
2026-02-08 03:45:27.803637 | orchestrator | skipping: [testbed-node-1]
2026-02-08 03:45:27.803648 | orchestrator | skipping: [testbed-node-2]
2026-02-08 03:45:27.803658 | orchestrator |
2026-02-08 03:45:27.803669 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-02-08 03:45:27.803680 | orchestrator | Sunday 08 February 2026 03:45:18 +0000 (0:00:00.896) 0:00:49.424 *******
2026-02-08 03:45:27.803691 | orchestrator | skipping: [testbed-node-0]
2026-02-08 03:45:27.803702 | orchestrator | ok: [testbed-node-3]
2026-02-08 03:45:27.803713 | orchestrator | skipping: [testbed-node-1]
2026-02-08 03:45:27.803724 | orchestrator | ok: [testbed-node-4]
2026-02-08 03:45:27.803734 | orchestrator | skipping: [testbed-node-2]
2026-02-08 03:45:27.803745 | orchestrator | ok: [testbed-node-5]
2026-02-08 03:45:27.803756 | orchestrator |
2026-02-08 03:45:27.803767 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-02-08 03:45:27.803778 | orchestrator | Sunday 08 February 2026 03:45:19 +0000 (0:00:00.706) 0:00:50.131 *******
2026-02-08 03:45:27.803794 | orchestrator | skipping: [testbed-node-3]
2026-02-08 03:45:27.803806 | orchestrator | skipping: [testbed-node-4]
2026-02-08 03:45:27.803816 | orchestrator | skipping: [testbed-node-5]
2026-02-08 03:45:27.803827 | orchestrator | ok: [testbed-node-0]
2026-02-08 03:45:27.803838 | orchestrator | ok: [testbed-node-1]
2026-02-08 03:45:27.803849 | orchestrator | ok: [testbed-node-2]
2026-02-08 03:45:27.803859 | orchestrator |
2026-02-08 03:45:27.803870 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-02-08 03:45:27.803881 | orchestrator | Sunday 08 February 2026 03:45:20 +0000 (0:00:01.288) 0:00:51.419 *******
2026-02-08 03:45:27.803892 | orchestrator | skipping: [testbed-node-3]
2026-02-08 03:45:27.803903 | orchestrator | skipping: [testbed-node-4]
2026-02-08 03:45:27.803963 | orchestrator | skipping: [testbed-node-5]
2026-02-08 03:45:27.803975 | orchestrator | skipping: [testbed-node-0]
2026-02-08 03:45:27.803986 | orchestrator | skipping: [testbed-node-1]
2026-02-08 03:45:27.803997 | orchestrator | skipping: [testbed-node-2]
2026-02-08 03:45:27.804008 | orchestrator |
2026-02-08 03:45:27.804019 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-02-08 03:45:27.804038 | orchestrator | Sunday 08 February 2026 03:45:21 +0000 (0:00:00.639) 0:00:52.058 *******
2026-02-08 03:45:27.804049 | orchestrator | skipping: [testbed-node-3]
2026-02-08 03:45:27.804060 | orchestrator | skipping: [testbed-node-4]
2026-02-08 03:45:27.804071 | orchestrator | skipping: [testbed-node-5]
2026-02-08 03:45:27.804104 | orchestrator | skipping: [testbed-node-0]
2026-02-08 03:45:27.804117 | orchestrator | skipping: [testbed-node-1]
2026-02-08 03:45:27.804128 | orchestrator | skipping: [testbed-node-2]
2026-02-08 03:45:27.804138 | orchestrator |
2026-02-08 03:45:27.804150 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-02-08 03:45:27.804160 | orchestrator | Sunday 08 February 2026 03:45:21 +0000 (0:00:00.861) 0:00:52.920 *******
2026-02-08 03:45:27.804171 | orchestrator | ok: [testbed-node-3]
2026-02-08 03:45:27.804182 | orchestrator | ok: [testbed-node-4]
2026-02-08 03:45:27.804193 | orchestrator | ok: [testbed-node-5]
2026-02-08 03:45:27.804203 | orchestrator | ok: [testbed-node-0]
2026-02-08 03:45:27.804214 | orchestrator | ok: [testbed-node-1]
2026-02-08 03:45:27.804225 | orchestrator | ok: [testbed-node-2]
2026-02-08 03:45:27.804235 | orchestrator |
2026-02-08 03:45:27.804246 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-02-08 03:45:27.804257 | orchestrator | Sunday 08 February 2026 03:45:23 +0000 (0:00:01.100) 0:00:54.020 *******
2026-02-08 03:45:27.804268 | orchestrator | ok: [testbed-node-3]
2026-02-08 03:45:27.804278 | orchestrator | ok: [testbed-node-4]
2026-02-08 03:45:27.804289 | orchestrator | ok: [testbed-node-5]
2026-02-08 03:45:27.804300 | orchestrator | ok: [testbed-node-0]
2026-02-08 03:45:27.804310 | orchestrator | ok: [testbed-node-1]
2026-02-08 03:45:27.804321 | orchestrator | ok: [testbed-node-2]
2026-02-08 03:45:27.804331 | orchestrator |
2026-02-08 03:45:27.804342 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-02-08 03:45:27.804353 | orchestrator | Sunday 08 February 2026 03:45:24 +0000 (0:00:01.355) 0:00:55.376 *******
2026-02-08 03:45:27.804363 | orchestrator | skipping: [testbed-node-3]
2026-02-08 03:45:27.804374 | orchestrator | skipping: [testbed-node-4]
2026-02-08 03:45:27.804385 | orchestrator | skipping: [testbed-node-5]
2026-02-08 03:45:27.804396 | orchestrator | skipping: [testbed-node-0]
2026-02-08 03:45:27.804407 | orchestrator | skipping: [testbed-node-1]
2026-02-08 03:45:27.804417 | orchestrator | skipping: [testbed-node-2]
2026-02-08 03:45:27.804428 | orchestrator |
2026-02-08 03:45:27.804439 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-02-08 03:45:27.804450 | orchestrator | Sunday 08 February 2026 03:45:25 +0000 (0:00:00.646) 0:00:56.022 *******
2026-02-08 03:45:27.804461 | orchestrator | skipping: [testbed-node-3]
2026-02-08 03:45:27.804472 | orchestrator | skipping: [testbed-node-4]
2026-02-08 03:45:27.804482 | orchestrator | skipping: [testbed-node-5]
2026-02-08 03:45:27.804493 | orchestrator | ok: [testbed-node-0]
2026-02-08 03:45:27.804504 | orchestrator | ok: [testbed-node-1]
2026-02-08 03:45:27.804515 | orchestrator | ok: [testbed-node-2]
2026-02-08 03:45:27.804526 | orchestrator |
2026-02-08 03:45:27.804536 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-02-08 03:45:27.804550 | orchestrator | Sunday 08 February 2026 03:45:25 +0000 (0:00:00.858) 0:00:56.881 *******
2026-02-08 03:45:27.804569 | orchestrator | ok: [testbed-node-3]
2026-02-08 03:45:27.804584 | orchestrator | ok: [testbed-node-4]
2026-02-08 03:45:27.804594 | orchestrator | ok: [testbed-node-5]
2026-02-08 03:45:27.804605 | orchestrator | skipping: [testbed-node-0]
2026-02-08 03:45:27.804616 | orchestrator | skipping: [testbed-node-1]
2026-02-08 03:45:27.804627 | orchestrator | skipping: [testbed-node-2]
2026-02-08 03:45:27.804638 | orchestrator |
2026-02-08 03:45:27.804648 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-02-08 03:45:27.804659 | orchestrator | Sunday 08 February 2026 03:45:26 +0000 (0:00:00.632) 0:00:57.513 *******
2026-02-08 03:45:27.804670 | orchestrator | ok: [testbed-node-3]
2026-02-08 03:45:27.804681 | orchestrator | ok: [testbed-node-4]
2026-02-08 03:45:27.804699 | orchestrator | ok: [testbed-node-5]
2026-02-08 03:45:27.804710 | orchestrator | skipping: [testbed-node-0]
2026-02-08 03:45:27.804721 | orchestrator | skipping: [testbed-node-1]
2026-02-08 03:45:27.804732 | orchestrator | skipping: [testbed-node-2]
2026-02-08 03:45:27.804743 | orchestrator |
2026-02-08 03:45:27.804753 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-02-08 03:45:27.804764 | orchestrator | Sunday 08 February 2026 03:45:27 +0000 (0:00:00.882) 0:00:58.396 *******
2026-02-08 03:45:27.804775 | orchestrator | ok: [testbed-node-3]
2026-02-08 03:45:27.804786 | orchestrator | ok: [testbed-node-4]
2026-02-08 03:45:27.804805 | orchestrator | ok: [testbed-node-5]
2026-02-08 03:46:43.340370 | orchestrator | skipping: [testbed-node-0] 2026-02-08
03:46:43.340489 | orchestrator | skipping: [testbed-node-1]
2026-02-08 03:46:43.340510 | orchestrator | skipping: [testbed-node-2]
2026-02-08 03:46:43.340535 | orchestrator |
2026-02-08 03:46:43.340557 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-02-08 03:46:43.340575 | orchestrator | Sunday 08 February 2026 03:45:28 +0000 (0:00:00.617) 0:00:59.013 *******
2026-02-08 03:46:43.340590 | orchestrator | skipping: [testbed-node-3]
2026-02-08 03:46:43.340605 | orchestrator | skipping: [testbed-node-4]
2026-02-08 03:46:43.340622 | orchestrator | skipping: [testbed-node-5]
2026-02-08 03:46:43.340638 | orchestrator | skipping: [testbed-node-0]
2026-02-08 03:46:43.340654 | orchestrator | skipping: [testbed-node-1]
2026-02-08 03:46:43.340671 | orchestrator | skipping: [testbed-node-2]
2026-02-08 03:46:43.340687 | orchestrator |
2026-02-08 03:46:43.340704 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-02-08 03:46:43.340741 | orchestrator | Sunday 08 February 2026 03:45:28 +0000 (0:00:00.933) 0:00:59.946 *******
2026-02-08 03:46:43.340753 | orchestrator | skipping: [testbed-node-3]
2026-02-08 03:46:43.340763 | orchestrator | skipping: [testbed-node-4]
2026-02-08 03:46:43.340773 | orchestrator | skipping: [testbed-node-5]
2026-02-08 03:46:43.340782 | orchestrator | skipping: [testbed-node-0]
2026-02-08 03:46:43.340792 | orchestrator | skipping: [testbed-node-1]
2026-02-08 03:46:43.340802 | orchestrator | skipping: [testbed-node-2]
2026-02-08 03:46:43.340812 | orchestrator |
2026-02-08 03:46:43.340822 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-02-08 03:46:43.340835 | orchestrator | Sunday 08 February 2026 03:45:29 +0000 (0:00:00.688) 0:01:00.635 *******
2026-02-08 03:46:43.340851 | orchestrator | skipping: [testbed-node-3]
2026-02-08 03:46:43.340866 | orchestrator | skipping: [testbed-node-4]
2026-02-08 03:46:43.340880 | orchestrator | skipping: [testbed-node-5]
2026-02-08 03:46:43.340896 | orchestrator | ok: [testbed-node-0]
2026-02-08 03:46:43.340914 | orchestrator | ok: [testbed-node-1]
2026-02-08 03:46:43.340930 | orchestrator | ok: [testbed-node-2]
2026-02-08 03:46:43.340947 | orchestrator |
2026-02-08 03:46:43.340963 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-02-08 03:46:43.340979 | orchestrator | Sunday 08 February 2026 03:45:30 +0000 (0:00:00.933) 0:01:01.568 *******
2026-02-08 03:46:43.340997 | orchestrator | ok: [testbed-node-3]
2026-02-08 03:46:43.341014 | orchestrator | ok: [testbed-node-4]
2026-02-08 03:46:43.341033 | orchestrator | ok: [testbed-node-5]
2026-02-08 03:46:43.341051 | orchestrator | ok: [testbed-node-0]
2026-02-08 03:46:43.341070 | orchestrator | ok: [testbed-node-1]
2026-02-08 03:46:43.341081 | orchestrator | ok: [testbed-node-2]
2026-02-08 03:46:43.341092 | orchestrator |
2026-02-08 03:46:43.341104 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-02-08 03:46:43.341145 | orchestrator | Sunday 08 February 2026 03:45:31 +0000 (0:00:00.759) 0:01:02.328 *******
2026-02-08 03:46:43.341158 | orchestrator | ok: [testbed-node-3]
2026-02-08 03:46:43.341170 | orchestrator | ok: [testbed-node-4]
2026-02-08 03:46:43.341181 | orchestrator | ok: [testbed-node-5]
2026-02-08 03:46:43.341192 | orchestrator | ok: [testbed-node-0]
2026-02-08 03:46:43.341203 | orchestrator | ok: [testbed-node-1]
2026-02-08 03:46:43.341215 | orchestrator | ok: [testbed-node-2]
2026-02-08 03:46:43.341226 | orchestrator |
2026-02-08 03:46:43.341261 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] ***************
2026-02-08 03:46:43.341271 | orchestrator | Sunday 08 February 2026 03:45:32 +0000 (0:00:01.445) 0:01:03.773 *******
2026-02-08 03:46:43.341288 | orchestrator | changed: [testbed-node-3]
2026-02-08 03:46:43.341306 | orchestrator | changed: [testbed-node-4]
2026-02-08 03:46:43.341332 | orchestrator | changed: [testbed-node-5]
2026-02-08 03:46:43.341349 | orchestrator | changed: [testbed-node-0]
2026-02-08 03:46:43.341364 | orchestrator | changed: [testbed-node-1]
2026-02-08 03:46:43.341381 | orchestrator | changed: [testbed-node-2]
2026-02-08 03:46:43.341397 | orchestrator |
2026-02-08 03:46:43.341415 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ******************************
2026-02-08 03:46:43.341432 | orchestrator | Sunday 08 February 2026 03:45:34 +0000 (0:00:01.851) 0:01:05.625 *******
2026-02-08 03:46:43.341449 | orchestrator | changed: [testbed-node-3]
2026-02-08 03:46:43.341466 | orchestrator | changed: [testbed-node-4]
2026-02-08 03:46:43.341485 | orchestrator | changed: [testbed-node-1]
2026-02-08 03:46:43.341501 | orchestrator | changed: [testbed-node-0]
2026-02-08 03:46:43.341520 | orchestrator | changed: [testbed-node-5]
2026-02-08 03:46:43.341539 | orchestrator | changed: [testbed-node-2]
2026-02-08 03:46:43.341557 | orchestrator |
2026-02-08 03:46:43.341570 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] ***********************
2026-02-08 03:46:43.341579 | orchestrator | Sunday 08 February 2026 03:45:36 +0000 (0:00:02.052) 0:01:07.678 *******
2026-02-08 03:46:43.341591 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-08 03:46:43.341603 | orchestrator |
2026-02-08 03:46:43.341612 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************
2026-02-08 03:46:43.341622 | orchestrator | Sunday 08 February 2026 03:45:38 +0000 (0:00:01.608) 0:01:09.286 *******
2026-02-08 03:46:43.341632 | orchestrator | skipping: [testbed-node-3]
2026-02-08 03:46:43.341641 | orchestrator | skipping: [testbed-node-4]
2026-02-08 03:46:43.341651 | orchestrator |
skipping: [testbed-node-5]
2026-02-08 03:46:43.341660 | orchestrator | skipping: [testbed-node-0]
2026-02-08 03:46:43.341669 | orchestrator | skipping: [testbed-node-1]
2026-02-08 03:46:43.341679 | orchestrator | skipping: [testbed-node-2]
2026-02-08 03:46:43.341688 | orchestrator |
2026-02-08 03:46:43.341698 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] ****************
2026-02-08 03:46:43.341707 | orchestrator | Sunday 08 February 2026 03:45:38 +0000 (0:00:00.690) 0:01:09.977 *******
2026-02-08 03:46:43.341717 | orchestrator | skipping: [testbed-node-3]
2026-02-08 03:46:43.341726 | orchestrator | skipping: [testbed-node-4]
2026-02-08 03:46:43.341736 | orchestrator | skipping: [testbed-node-5]
2026-02-08 03:46:43.341745 | orchestrator | skipping: [testbed-node-0]
2026-02-08 03:46:43.341755 | orchestrator | skipping: [testbed-node-1]
2026-02-08 03:46:43.341764 | orchestrator | skipping: [testbed-node-2]
2026-02-08 03:46:43.341774 | orchestrator |
2026-02-08 03:46:43.341783 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] **************************
2026-02-08 03:46:43.341793 | orchestrator | Sunday 08 February 2026 03:45:39 +0000 (0:00:00.972) 0:01:10.949 *******
2026-02-08 03:46:43.341823 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-02-08 03:46:43.341833 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-02-08 03:46:43.341843 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-02-08 03:46:43.341852 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-02-08 03:46:43.341862 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-02-08 03:46:43.341872 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-02-08 03:46:43.341882 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-02-08 03:46:43.341903 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-02-08 03:46:43.341922 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-02-08 03:46:43.341932 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-02-08 03:46:43.341942 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-02-08 03:46:43.341952 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-02-08 03:46:43.341961 | orchestrator |
2026-02-08 03:46:43.341971 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ********************
2026-02-08 03:46:43.341981 | orchestrator | Sunday 08 February 2026 03:45:41 +0000 (0:00:01.379) 0:01:12.329 *******
2026-02-08 03:46:43.341990 | orchestrator | changed: [testbed-node-3]
2026-02-08 03:46:43.342000 | orchestrator | changed: [testbed-node-4]
2026-02-08 03:46:43.342009 | orchestrator | changed: [testbed-node-5]
2026-02-08 03:46:43.342086 | orchestrator | changed: [testbed-node-0]
2026-02-08 03:46:43.342096 | orchestrator | changed: [testbed-node-1]
2026-02-08 03:46:43.342106 | orchestrator | changed: [testbed-node-2]
2026-02-08 03:46:43.342192 | orchestrator |
2026-02-08 03:46:43.342205 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************
2026-02-08 03:46:43.342215 | orchestrator | Sunday 08 February 2026 03:45:42 +0000 (0:00:01.255) 0:01:13.585 *******
2026-02-08 03:46:43.342224 | orchestrator | skipping: [testbed-node-3]
2026-02-08 03:46:43.342234 | orchestrator | skipping: [testbed-node-4]
2026-02-08 03:46:43.342243 | orchestrator | skipping: [testbed-node-5]
2026-02-08 03:46:43.342253 | orchestrator | skipping: [testbed-node-0]
2026-02-08 03:46:43.342262 | orchestrator | skipping: [testbed-node-1]
2026-02-08 03:46:43.342272 | orchestrator | skipping: [testbed-node-2]
2026-02-08 03:46:43.342281 | orchestrator |
2026-02-08 03:46:43.342291 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ********************
2026-02-08 03:46:43.342300 | orchestrator | Sunday 08 February 2026 03:45:43 +0000 (0:00:00.703) 0:01:14.289 *******
2026-02-08 03:46:43.342310 | orchestrator | skipping: [testbed-node-3]
2026-02-08 03:46:43.342319 | orchestrator | skipping: [testbed-node-4]
2026-02-08 03:46:43.342329 | orchestrator | skipping: [testbed-node-5]
2026-02-08 03:46:43.342338 | orchestrator | skipping: [testbed-node-0]
2026-02-08 03:46:43.342348 | orchestrator | skipping: [testbed-node-1]
2026-02-08 03:46:43.342357 | orchestrator | skipping: [testbed-node-2]
2026-02-08 03:46:43.342366 | orchestrator |
2026-02-08 03:46:43.342376 | orchestrator | TASK [ceph-container-common : Include registry.yml] ****************************
2026-02-08 03:46:43.342386 | orchestrator | Sunday 08 February 2026 03:45:44 +0000 (0:00:00.839) 0:01:15.129 *******
2026-02-08 03:46:43.342395 | orchestrator | skipping: [testbed-node-3]
2026-02-08 03:46:43.342405 | orchestrator | skipping: [testbed-node-4]
2026-02-08 03:46:43.342414 | orchestrator | skipping: [testbed-node-5]
2026-02-08 03:46:43.342424 | orchestrator | skipping: [testbed-node-0]
2026-02-08 03:46:43.342433 | orchestrator | skipping: [testbed-node-1]
2026-02-08 03:46:43.342443 | orchestrator | skipping: [testbed-node-2]
2026-02-08 03:46:43.342452 | orchestrator |
2026-02-08 03:46:43.342462 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] *************************
2026-02-08 03:46:43.342471 | orchestrator | Sunday 08 February 2026 03:45:44 +0000 (0:00:00.671) 0:01:15.800 *******
2026-02-08 03:46:43.342481 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-3, testbed-node-4,
testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-08 03:46:43.342491 | orchestrator | 2026-02-08 03:46:43.342500 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ******************** 2026-02-08 03:46:43.342510 | orchestrator | Sunday 08 February 2026 03:45:45 +0000 (0:00:01.104) 0:01:16.905 ******* 2026-02-08 03:46:43.342520 | orchestrator | ok: [testbed-node-4] 2026-02-08 03:46:43.342529 | orchestrator | ok: [testbed-node-3] 2026-02-08 03:46:43.342539 | orchestrator | ok: [testbed-node-1] 2026-02-08 03:46:43.342557 | orchestrator | ok: [testbed-node-2] 2026-02-08 03:46:43.342568 | orchestrator | ok: [testbed-node-5] 2026-02-08 03:46:43.342585 | orchestrator | ok: [testbed-node-0] 2026-02-08 03:46:43.342609 | orchestrator | 2026-02-08 03:46:43.342627 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] *** 2026-02-08 03:46:43.342644 | orchestrator | Sunday 08 February 2026 03:46:42 +0000 (0:00:57.058) 0:02:13.964 ******* 2026-02-08 03:46:43.342660 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-02-08 03:46:43.342675 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/prometheus:v2.7.2)  2026-02-08 03:46:43.342689 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/grafana/grafana:6.7.4)  2026-02-08 03:46:43.342704 | orchestrator | skipping: [testbed-node-3] 2026-02-08 03:46:43.342719 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-02-08 03:46:43.342734 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/prometheus:v2.7.2)  2026-02-08 03:46:43.342749 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/grafana/grafana:6.7.4)  2026-02-08 03:46:43.342762 | orchestrator | skipping: [testbed-node-4] 2026-02-08 03:46:43.342774 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/alertmanager:v0.16.2)  
2026-02-08 03:46:43.342796 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/prometheus:v2.7.2)  2026-02-08 03:47:08.282295 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/grafana/grafana:6.7.4)  2026-02-08 03:47:08.282449 | orchestrator | skipping: [testbed-node-5] 2026-02-08 03:47:08.282467 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-02-08 03:47:08.282480 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/prometheus:v2.7.2)  2026-02-08 03:47:08.282490 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/grafana/grafana:6.7.4)  2026-02-08 03:47:08.282501 | orchestrator | skipping: [testbed-node-0] 2026-02-08 03:47:08.282512 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-02-08 03:47:08.282544 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/prometheus:v2.7.2)  2026-02-08 03:47:08.282557 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/grafana/grafana:6.7.4)  2026-02-08 03:47:08.282567 | orchestrator | skipping: [testbed-node-1] 2026-02-08 03:47:08.282578 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-02-08 03:47:08.282587 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/prometheus:v2.7.2)  2026-02-08 03:47:08.282596 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/grafana/grafana:6.7.4)  2026-02-08 03:47:08.282603 | orchestrator | skipping: [testbed-node-2] 2026-02-08 03:47:08.282612 | orchestrator | 2026-02-08 03:47:08.282622 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] *********** 2026-02-08 03:47:08.282631 | orchestrator | Sunday 08 February 2026 03:46:43 +0000 (0:00:00.763) 0:02:14.727 ******* 2026-02-08 03:47:08.282638 | orchestrator | skipping: [testbed-node-3] 2026-02-08 03:47:08.282647 | orchestrator | skipping: [testbed-node-4] 2026-02-08 
03:47:08.282656 | orchestrator | skipping: [testbed-node-5] 2026-02-08 03:47:08.282664 | orchestrator | skipping: [testbed-node-0] 2026-02-08 03:47:08.282673 | orchestrator | skipping: [testbed-node-1] 2026-02-08 03:47:08.282682 | orchestrator | skipping: [testbed-node-2] 2026-02-08 03:47:08.282692 | orchestrator | 2026-02-08 03:47:08.282702 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] ********************* 2026-02-08 03:47:08.282710 | orchestrator | Sunday 08 February 2026 03:46:44 +0000 (0:00:00.919) 0:02:15.646 ******* 2026-02-08 03:47:08.282718 | orchestrator | skipping: [testbed-node-3] 2026-02-08 03:47:08.282727 | orchestrator | 2026-02-08 03:47:08.282736 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************ 2026-02-08 03:47:08.282746 | orchestrator | Sunday 08 February 2026 03:46:44 +0000 (0:00:00.176) 0:02:15.823 ******* 2026-02-08 03:47:08.282755 | orchestrator | skipping: [testbed-node-3] 2026-02-08 03:47:08.282793 | orchestrator | skipping: [testbed-node-4] 2026-02-08 03:47:08.282803 | orchestrator | skipping: [testbed-node-5] 2026-02-08 03:47:08.282810 | orchestrator | skipping: [testbed-node-0] 2026-02-08 03:47:08.282818 | orchestrator | skipping: [testbed-node-1] 2026-02-08 03:47:08.282826 | orchestrator | skipping: [testbed-node-2] 2026-02-08 03:47:08.282834 | orchestrator | 2026-02-08 03:47:08.282842 | orchestrator | TASK [ceph-container-common : Load ceph dev image] ***************************** 2026-02-08 03:47:08.282850 | orchestrator | Sunday 08 February 2026 03:46:45 +0000 (0:00:00.682) 0:02:16.505 ******* 2026-02-08 03:47:08.282862 | orchestrator | skipping: [testbed-node-3] 2026-02-08 03:47:08.282874 | orchestrator | skipping: [testbed-node-4] 2026-02-08 03:47:08.282883 | orchestrator | skipping: [testbed-node-5] 2026-02-08 03:47:08.282891 | orchestrator | skipping: [testbed-node-0] 2026-02-08 03:47:08.282904 | orchestrator | skipping: [testbed-node-1] 2026-02-08 
03:47:08.282915 | orchestrator | skipping: [testbed-node-2] 2026-02-08 03:47:08.282924 | orchestrator | 2026-02-08 03:47:08.282932 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ****************** 2026-02-08 03:47:08.282941 | orchestrator | Sunday 08 February 2026 03:46:46 +0000 (0:00:00.931) 0:02:17.436 ******* 2026-02-08 03:47:08.282949 | orchestrator | skipping: [testbed-node-3] 2026-02-08 03:47:08.282958 | orchestrator | skipping: [testbed-node-4] 2026-02-08 03:47:08.282967 | orchestrator | skipping: [testbed-node-5] 2026-02-08 03:47:08.282975 | orchestrator | skipping: [testbed-node-0] 2026-02-08 03:47:08.282984 | orchestrator | skipping: [testbed-node-1] 2026-02-08 03:47:08.282993 | orchestrator | skipping: [testbed-node-2] 2026-02-08 03:47:08.283005 | orchestrator | 2026-02-08 03:47:08.283015 | orchestrator | TASK [ceph-container-common : Get ceph version] ******************************** 2026-02-08 03:47:08.283024 | orchestrator | Sunday 08 February 2026 03:46:47 +0000 (0:00:00.697) 0:02:18.134 ******* 2026-02-08 03:47:08.283032 | orchestrator | ok: [testbed-node-3] 2026-02-08 03:47:08.283043 | orchestrator | ok: [testbed-node-4] 2026-02-08 03:47:08.283051 | orchestrator | ok: [testbed-node-5] 2026-02-08 03:47:08.283059 | orchestrator | ok: [testbed-node-0] 2026-02-08 03:47:08.283068 | orchestrator | ok: [testbed-node-1] 2026-02-08 03:47:08.283076 | orchestrator | ok: [testbed-node-2] 2026-02-08 03:47:08.283084 | orchestrator | 2026-02-08 03:47:08.283094 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] *** 2026-02-08 03:47:08.283103 | orchestrator | Sunday 08 February 2026 03:46:50 +0000 (0:00:03.520) 0:02:21.654 ******* 2026-02-08 03:47:08.283111 | orchestrator | ok: [testbed-node-3] 2026-02-08 03:47:08.283120 | orchestrator | ok: [testbed-node-4] 2026-02-08 03:47:08.283157 | orchestrator | ok: [testbed-node-5] 2026-02-08 03:47:08.283165 | orchestrator | ok: [testbed-node-0] 
2026-02-08 03:47:08.283174 | orchestrator | ok: [testbed-node-1] 2026-02-08 03:47:08.283182 | orchestrator | ok: [testbed-node-2] 2026-02-08 03:47:08.283190 | orchestrator | 2026-02-08 03:47:08.283200 | orchestrator | TASK [ceph-container-common : Include release.yml] ***************************** 2026-02-08 03:47:08.283209 | orchestrator | Sunday 08 February 2026 03:46:51 +0000 (0:00:00.624) 0:02:22.279 ******* 2026-02-08 03:47:08.283220 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-08 03:47:08.283232 | orchestrator | 2026-02-08 03:47:08.283241 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] ********************* 2026-02-08 03:47:08.283249 | orchestrator | Sunday 08 February 2026 03:46:52 +0000 (0:00:01.432) 0:02:23.711 ******* 2026-02-08 03:47:08.283259 | orchestrator | skipping: [testbed-node-3] 2026-02-08 03:47:08.283268 | orchestrator | skipping: [testbed-node-4] 2026-02-08 03:47:08.283277 | orchestrator | skipping: [testbed-node-5] 2026-02-08 03:47:08.283315 | orchestrator | skipping: [testbed-node-0] 2026-02-08 03:47:08.283325 | orchestrator | skipping: [testbed-node-1] 2026-02-08 03:47:08.283334 | orchestrator | skipping: [testbed-node-2] 2026-02-08 03:47:08.283342 | orchestrator | 2026-02-08 03:47:08.283351 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ******************** 2026-02-08 03:47:08.283373 | orchestrator | Sunday 08 February 2026 03:46:53 +0000 (0:00:00.903) 0:02:24.614 ******* 2026-02-08 03:47:08.283379 | orchestrator | skipping: [testbed-node-3] 2026-02-08 03:47:08.283385 | orchestrator | skipping: [testbed-node-4] 2026-02-08 03:47:08.283390 | orchestrator | skipping: [testbed-node-5] 2026-02-08 03:47:08.283395 | orchestrator | skipping: [testbed-node-0] 2026-02-08 03:47:08.283401 | orchestrator | skipping: [testbed-node-1] 2026-02-08 
03:47:08.283406 | orchestrator | skipping: [testbed-node-2] 2026-02-08 03:47:08.283411 | orchestrator | 2026-02-08 03:47:08.283417 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ****************** 2026-02-08 03:47:08.283432 | orchestrator | Sunday 08 February 2026 03:46:54 +0000 (0:00:00.679) 0:02:25.293 ******* 2026-02-08 03:47:08.283437 | orchestrator | skipping: [testbed-node-3] 2026-02-08 03:47:08.283443 | orchestrator | skipping: [testbed-node-4] 2026-02-08 03:47:08.283448 | orchestrator | skipping: [testbed-node-5] 2026-02-08 03:47:08.283454 | orchestrator | skipping: [testbed-node-0] 2026-02-08 03:47:08.283459 | orchestrator | skipping: [testbed-node-1] 2026-02-08 03:47:08.283464 | orchestrator | skipping: [testbed-node-2] 2026-02-08 03:47:08.283470 | orchestrator | 2026-02-08 03:47:08.283475 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] ********************* 2026-02-08 03:47:08.283481 | orchestrator | Sunday 08 February 2026 03:46:55 +0000 (0:00:00.904) 0:02:26.198 ******* 2026-02-08 03:47:08.283486 | orchestrator | skipping: [testbed-node-3] 2026-02-08 03:47:08.283491 | orchestrator | skipping: [testbed-node-4] 2026-02-08 03:47:08.283497 | orchestrator | skipping: [testbed-node-5] 2026-02-08 03:47:08.283502 | orchestrator | skipping: [testbed-node-0] 2026-02-08 03:47:08.283508 | orchestrator | skipping: [testbed-node-1] 2026-02-08 03:47:08.283513 | orchestrator | skipping: [testbed-node-2] 2026-02-08 03:47:08.283518 | orchestrator | 2026-02-08 03:47:08.283524 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ****************** 2026-02-08 03:47:08.283529 | orchestrator | Sunday 08 February 2026 03:46:55 +0000 (0:00:00.654) 0:02:26.853 ******* 2026-02-08 03:47:08.283534 | orchestrator | skipping: [testbed-node-3] 2026-02-08 03:47:08.283540 | orchestrator | skipping: [testbed-node-4] 2026-02-08 03:47:08.283545 | orchestrator | skipping: [testbed-node-5] 2026-02-08 
03:47:08.283551 | orchestrator | skipping: [testbed-node-0] 2026-02-08 03:47:08.283556 | orchestrator | skipping: [testbed-node-1] 2026-02-08 03:47:08.283561 | orchestrator | skipping: [testbed-node-2] 2026-02-08 03:47:08.283567 | orchestrator | 2026-02-08 03:47:08.283575 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] ******************* 2026-02-08 03:47:08.283584 | orchestrator | Sunday 08 February 2026 03:46:56 +0000 (0:00:00.960) 0:02:27.813 ******* 2026-02-08 03:47:08.283592 | orchestrator | skipping: [testbed-node-3] 2026-02-08 03:47:08.283601 | orchestrator | skipping: [testbed-node-4] 2026-02-08 03:47:08.283609 | orchestrator | skipping: [testbed-node-5] 2026-02-08 03:47:08.283618 | orchestrator | skipping: [testbed-node-0] 2026-02-08 03:47:08.283626 | orchestrator | skipping: [testbed-node-1] 2026-02-08 03:47:08.283634 | orchestrator | skipping: [testbed-node-2] 2026-02-08 03:47:08.283643 | orchestrator | 2026-02-08 03:47:08.283652 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] ******************* 2026-02-08 03:47:08.283660 | orchestrator | Sunday 08 February 2026 03:46:57 +0000 (0:00:00.686) 0:02:28.500 ******* 2026-02-08 03:47:08.283670 | orchestrator | skipping: [testbed-node-3] 2026-02-08 03:47:08.283678 | orchestrator | skipping: [testbed-node-4] 2026-02-08 03:47:08.283687 | orchestrator | skipping: [testbed-node-5] 2026-02-08 03:47:08.283696 | orchestrator | skipping: [testbed-node-0] 2026-02-08 03:47:08.283705 | orchestrator | skipping: [testbed-node-1] 2026-02-08 03:47:08.283713 | orchestrator | skipping: [testbed-node-2] 2026-02-08 03:47:08.283721 | orchestrator | 2026-02-08 03:47:08.283729 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ******************** 2026-02-08 03:47:08.283737 | orchestrator | Sunday 08 February 2026 03:46:58 +0000 (0:00:00.940) 0:02:29.441 ******* 2026-02-08 03:47:08.283754 | orchestrator | skipping: [testbed-node-3] 2026-02-08 
03:47:08.283762 | orchestrator | skipping: [testbed-node-4] 2026-02-08 03:47:08.283770 | orchestrator | skipping: [testbed-node-5] 2026-02-08 03:47:08.283778 | orchestrator | skipping: [testbed-node-0] 2026-02-08 03:47:08.283786 | orchestrator | skipping: [testbed-node-1] 2026-02-08 03:47:08.283795 | orchestrator | skipping: [testbed-node-2] 2026-02-08 03:47:08.283803 | orchestrator | 2026-02-08 03:47:08.283811 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] ********************** 2026-02-08 03:47:08.283820 | orchestrator | Sunday 08 February 2026 03:46:59 +0000 (0:00:00.656) 0:02:30.097 ******* 2026-02-08 03:47:08.283829 | orchestrator | ok: [testbed-node-3] 2026-02-08 03:47:08.283838 | orchestrator | ok: [testbed-node-4] 2026-02-08 03:47:08.283847 | orchestrator | ok: [testbed-node-5] 2026-02-08 03:47:08.283856 | orchestrator | ok: [testbed-node-0] 2026-02-08 03:47:08.283865 | orchestrator | ok: [testbed-node-1] 2026-02-08 03:47:08.283874 | orchestrator | ok: [testbed-node-2] 2026-02-08 03:47:08.283883 | orchestrator | 2026-02-08 03:47:08.283894 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] ********************** 2026-02-08 03:47:08.283900 | orchestrator | Sunday 08 February 2026 03:47:00 +0000 (0:00:01.340) 0:02:31.438 ******* 2026-02-08 03:47:08.283909 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-08 03:47:08.283918 | orchestrator | 2026-02-08 03:47:08.283924 | orchestrator | TASK [ceph-config : Create ceph initial directories] *************************** 2026-02-08 03:47:08.283930 | orchestrator | Sunday 08 February 2026 03:47:01 +0000 (0:00:01.413) 0:02:32.851 ******* 2026-02-08 03:47:08.283937 | orchestrator | changed: [testbed-node-3] => (item=/etc/ceph) 2026-02-08 03:47:08.283944 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/) 
2026-02-08 03:47:08.283950 | orchestrator | changed: [testbed-node-4] => (item=/etc/ceph) 2026-02-08 03:47:08.283957 | orchestrator | changed: [testbed-node-5] => (item=/etc/ceph) 2026-02-08 03:47:08.283966 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph) 2026-02-08 03:47:08.283975 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mon) 2026-02-08 03:47:08.283996 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/) 2026-02-08 03:47:12.222877 | orchestrator | changed: [testbed-node-1] => (item=/etc/ceph) 2026-02-08 03:47:12.223004 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/) 2026-02-08 03:47:12.223020 | orchestrator | changed: [testbed-node-2] => (item=/etc/ceph) 2026-02-08 03:47:12.223031 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/) 2026-02-08 03:47:12.223043 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mon) 2026-02-08 03:47:12.223055 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/osd) 2026-02-08 03:47:12.223066 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/) 2026-02-08 03:47:12.223116 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mon) 2026-02-08 03:47:12.223206 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/) 2026-02-08 03:47:12.223237 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds) 2026-02-08 03:47:12.223249 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/osd) 2026-02-08 03:47:12.223260 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mon) 2026-02-08 03:47:12.223271 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mon) 2026-02-08 03:47:12.223281 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/osd) 2026-02-08 03:47:12.223297 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mon) 2026-02-08 03:47:12.223315 | orchestrator | changed: [testbed-node-3] => 
(item=/var/lib/ceph/tmp) 2026-02-08 03:47:12.223334 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds) 2026-02-08 03:47:12.223352 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/osd) 2026-02-08 03:47:12.223372 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/osd) 2026-02-08 03:47:12.223418 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds) 2026-02-08 03:47:12.223432 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/osd) 2026-02-08 03:47:12.223445 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/tmp) 2026-02-08 03:47:12.223458 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/crash) 2026-02-08 03:47:12.223472 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mds) 2026-02-08 03:47:12.223485 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mds) 2026-02-08 03:47:12.223498 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/tmp) 2026-02-08 03:47:12.223510 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mds) 2026-02-08 03:47:12.223524 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/radosgw) 2026-02-08 03:47:12.223536 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/crash) 2026-02-08 03:47:12.223549 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/tmp) 2026-02-08 03:47:12.223561 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/tmp) 2026-02-08 03:47:12.223575 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/crash) 2026-02-08 03:47:12.223587 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/tmp) 2026-02-08 03:47:12.223601 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rgw) 2026-02-08 03:47:12.223614 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/radosgw) 2026-02-08 03:47:12.223627 | orchestrator | changed: [testbed-node-0] => 
(item=/var/lib/ceph/crash) 2026-02-08 03:47:12.223640 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/crash) 2026-02-08 03:47:12.223653 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/radosgw) 2026-02-08 03:47:12.223666 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mgr) 2026-02-08 03:47:12.223683 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/crash) 2026-02-08 03:47:12.223705 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rgw) 2026-02-08 03:47:12.223733 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/radosgw) 2026-02-08 03:47:12.223751 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/radosgw) 2026-02-08 03:47:12.223769 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rgw) 2026-02-08 03:47:12.223786 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds) 2026-02-08 03:47:12.223806 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mgr) 2026-02-08 03:47:12.223825 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/radosgw) 2026-02-08 03:47:12.223843 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rgw) 2026-02-08 03:47:12.223861 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rgw) 2026-02-08 03:47:12.223872 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mgr) 2026-02-08 03:47:12.223883 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd) 2026-02-08 03:47:12.223894 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds) 2026-02-08 03:47:12.223906 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rgw) 2026-02-08 03:47:12.223916 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mgr) 2026-02-08 03:47:12.223927 | orchestrator | changed: 
[testbed-node-1] => (item=/var/lib/ceph/bootstrap-mgr) 2026-02-08 03:47:12.223938 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds) 2026-02-08 03:47:12.223949 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd) 2026-02-08 03:47:12.223959 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd) 2026-02-08 03:47:12.223970 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mgr) 2026-02-08 03:47:12.224000 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mds) 2026-02-08 03:47:12.224012 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd) 2026-02-08 03:47:12.224032 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mds) 2026-02-08 03:47:12.224043 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd) 2026-02-08 03:47:12.224054 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-02-08 03:47:12.224065 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mds) 2026-02-08 03:47:12.224076 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-osd) 2026-02-08 03:47:12.224087 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd) 2026-02-08 03:47:12.224105 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-02-08 03:47:12.224117 | orchestrator | changed: [testbed-node-3] => (item=/var/run/ceph) 2026-02-08 03:47:12.224159 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-osd) 2026-02-08 03:47:12.224172 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-osd) 2026-02-08 03:47:12.224182 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd) 2026-02-08 03:47:12.224193 | orchestrator | changed: [testbed-node-5] => 
(item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-02-08 03:47:12.224204 | orchestrator | changed: [testbed-node-4] => (item=/var/run/ceph) 2026-02-08 03:47:12.224215 | orchestrator | changed: [testbed-node-3] => (item=/var/log/ceph) 2026-02-08 03:47:12.224226 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd) 2026-02-08 03:47:12.224237 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd) 2026-02-08 03:47:12.224248 | orchestrator | changed: [testbed-node-5] => (item=/var/run/ceph) 2026-02-08 03:47:12.224259 | orchestrator | changed: [testbed-node-4] => (item=/var/log/ceph) 2026-02-08 03:47:12.224270 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-02-08 03:47:12.224281 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-02-08 03:47:12.224292 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-02-08 03:47:12.224303 | orchestrator | changed: [testbed-node-5] => (item=/var/log/ceph) 2026-02-08 03:47:12.224313 | orchestrator | changed: [testbed-node-1] => (item=/var/run/ceph) 2026-02-08 03:47:12.224324 | orchestrator | changed: [testbed-node-0] => (item=/var/run/ceph) 2026-02-08 03:47:12.224335 | orchestrator | changed: [testbed-node-2] => (item=/var/run/ceph) 2026-02-08 03:47:12.224346 | orchestrator | changed: [testbed-node-1] => (item=/var/log/ceph) 2026-02-08 03:47:12.224356 | orchestrator | changed: [testbed-node-0] => (item=/var/log/ceph) 2026-02-08 03:47:12.224367 | orchestrator | changed: [testbed-node-2] => (item=/var/log/ceph) 2026-02-08 03:47:12.224378 | orchestrator | 2026-02-08 03:47:12.224390 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************ 2026-02-08 03:47:12.224401 | orchestrator | Sunday 08 February 2026 03:47:08 +0000 (0:00:06.415) 0:02:39.266 ******* 2026-02-08 03:47:12.224412 | orchestrator | skipping: [testbed-node-0] 
2026-02-08 03:47:12.224423 | orchestrator | skipping: [testbed-node-1] 2026-02-08 03:47:12.224434 | orchestrator | skipping: [testbed-node-2] 2026-02-08 03:47:12.224445 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-08 03:47:12.224457 | orchestrator | 2026-02-08 03:47:12.224468 | orchestrator | TASK [ceph-config : Create rados gateway instance directories] ***************** 2026-02-08 03:47:12.224479 | orchestrator | Sunday 08 February 2026 03:47:09 +0000 (0:00:01.087) 0:02:40.354 ******* 2026-02-08 03:47:12.224490 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-02-08 03:47:12.224501 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-02-08 03:47:12.224512 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-02-08 03:47:12.224528 | orchestrator | 2026-02-08 03:47:12.224539 | orchestrator | TASK [ceph-config : Generate environment file] ********************************* 2026-02-08 03:47:12.224550 | orchestrator | Sunday 08 February 2026 03:47:10 +0000 (0:00:00.699) 0:02:41.053 ******* 2026-02-08 03:47:12.224561 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-02-08 03:47:12.224572 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-02-08 03:47:12.224583 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-02-08 03:47:12.224594 | orchestrator | 2026-02-08 03:47:12.224605 | 
orchestrator | TASK [ceph-config : Reset num_osds] ********************************************
2026-02-08 03:47:12.224616 | orchestrator | Sunday 08 February 2026 03:47:11 +0000 (0:00:01.237) 0:02:42.291 *******
2026-02-08 03:47:12.224627 | orchestrator | ok: [testbed-node-3]
2026-02-08 03:47:12.224638 | orchestrator | ok: [testbed-node-4]
2026-02-08 03:47:12.224649 | orchestrator | ok: [testbed-node-5]
2026-02-08 03:47:12.224660 | orchestrator | skipping: [testbed-node-0]
2026-02-08 03:47:12.224671 | orchestrator | skipping: [testbed-node-1]
2026-02-08 03:47:12.224682 | orchestrator | skipping: [testbed-node-2]
2026-02-08 03:47:12.224693 | orchestrator |
2026-02-08 03:47:12.224704 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] *********************
2026-02-08 03:47:12.224722 | orchestrator | Sunday 08 February 2026 03:47:12 +0000 (0:00:00.917) 0:02:43.209 *******
2026-02-08 03:47:26.348578 | orchestrator | ok: [testbed-node-3]
2026-02-08 03:47:26.348667 | orchestrator | ok: [testbed-node-4]
2026-02-08 03:47:26.348677 | orchestrator | ok: [testbed-node-5]
2026-02-08 03:47:26.348684 | orchestrator | skipping: [testbed-node-0]
2026-02-08 03:47:26.348692 | orchestrator | skipping: [testbed-node-1]
2026-02-08 03:47:26.348700 | orchestrator | skipping: [testbed-node-2]
2026-02-08 03:47:26.348707 | orchestrator |
2026-02-08 03:47:26.348715 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ******************
2026-02-08 03:47:26.348723 | orchestrator | Sunday 08 February 2026 03:47:12 +0000 (0:00:00.659) 0:02:43.868 *******
2026-02-08 03:47:26.348730 | orchestrator | skipping: [testbed-node-3]
2026-02-08 03:47:26.348737 | orchestrator | skipping: [testbed-node-4]
2026-02-08 03:47:26.348758 | orchestrator | skipping: [testbed-node-5]
2026-02-08 03:47:26.348765 | orchestrator | skipping: [testbed-node-0]
2026-02-08 03:47:26.348771 | orchestrator | skipping: [testbed-node-1]
2026-02-08 03:47:26.348778 | orchestrator | skipping: [testbed-node-2]
2026-02-08 03:47:26.348785 | orchestrator |
2026-02-08 03:47:26.348792 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] *********************************
2026-02-08 03:47:26.348799 | orchestrator | Sunday 08 February 2026 03:47:13 +0000 (0:00:00.912) 0:02:44.781 *******
2026-02-08 03:47:26.348806 | orchestrator | skipping: [testbed-node-3]
2026-02-08 03:47:26.348812 | orchestrator | skipping: [testbed-node-4]
2026-02-08 03:47:26.348819 | orchestrator | skipping: [testbed-node-5]
2026-02-08 03:47:26.348826 | orchestrator | skipping: [testbed-node-0]
2026-02-08 03:47:26.348832 | orchestrator | skipping: [testbed-node-1]
2026-02-08 03:47:26.348839 | orchestrator | skipping: [testbed-node-2]
2026-02-08 03:47:26.348845 | orchestrator |
2026-02-08 03:47:26.348852 | orchestrator | TASK [ceph-config : Set_fact _devices] *****************************************
2026-02-08 03:47:26.348859 | orchestrator | Sunday 08 February 2026 03:47:14 +0000 (0:00:00.616) 0:02:45.397 *******
2026-02-08 03:47:26.348865 | orchestrator | skipping: [testbed-node-3]
2026-02-08 03:47:26.348872 | orchestrator | skipping: [testbed-node-4]
2026-02-08 03:47:26.348879 | orchestrator | skipping: [testbed-node-5]
2026-02-08 03:47:26.348885 | orchestrator | skipping: [testbed-node-0]
2026-02-08 03:47:26.348892 | orchestrator | skipping: [testbed-node-1]
2026-02-08 03:47:26.348898 | orchestrator | skipping: [testbed-node-2]
2026-02-08 03:47:26.348923 | orchestrator |
2026-02-08 03:47:26.348930 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] ***
2026-02-08 03:47:26.348938 | orchestrator | Sunday 08 February 2026 03:47:15 +0000 (0:00:00.902) 0:02:46.300 *******
2026-02-08 03:47:26.348945 | orchestrator | skipping: [testbed-node-3]
2026-02-08 03:47:26.348952 | orchestrator | skipping: [testbed-node-4]
2026-02-08 03:47:26.348958 | orchestrator | skipping: [testbed-node-5]
2026-02-08 03:47:26.348965 | orchestrator | skipping: [testbed-node-0]
2026-02-08 03:47:26.348971 | orchestrator | skipping: [testbed-node-1]
2026-02-08 03:47:26.348978 | orchestrator | skipping: [testbed-node-2]
2026-02-08 03:47:26.348984 | orchestrator |
2026-02-08 03:47:26.348991 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] ***
2026-02-08 03:47:26.348998 | orchestrator | Sunday 08 February 2026 03:47:15 +0000 (0:00:00.627) 0:02:46.927 *******
2026-02-08 03:47:26.349004 | orchestrator | skipping: [testbed-node-3]
2026-02-08 03:47:26.349011 | orchestrator | skipping: [testbed-node-4]
2026-02-08 03:47:26.349018 | orchestrator | skipping: [testbed-node-5]
2026-02-08 03:47:26.349024 | orchestrator | skipping: [testbed-node-0]
2026-02-08 03:47:26.349031 | orchestrator | skipping: [testbed-node-1]
2026-02-08 03:47:26.349037 | orchestrator | skipping: [testbed-node-2]
2026-02-08 03:47:26.349044 | orchestrator |
2026-02-08 03:47:26.349050 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] ***
2026-02-08 03:47:26.349057 | orchestrator | Sunday 08 February 2026 03:47:16 +0000 (0:00:00.860) 0:02:47.788 *******
2026-02-08 03:47:26.349064 | orchestrator | skipping: [testbed-node-3]
2026-02-08 03:47:26.349070 | orchestrator | skipping: [testbed-node-4]
2026-02-08 03:47:26.349077 | orchestrator | skipping: [testbed-node-5]
2026-02-08 03:47:26.349083 | orchestrator | skipping: [testbed-node-0]
2026-02-08 03:47:26.349090 | orchestrator | skipping: [testbed-node-1]
2026-02-08 03:47:26.349096 | orchestrator | skipping: [testbed-node-2]
2026-02-08 03:47:26.349103 | orchestrator |
2026-02-08 03:47:26.349110 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] ***
2026-02-08 03:47:26.349116 | orchestrator | Sunday 08 February 2026 03:47:17 +0000 (0:00:00.645) 0:02:48.434 *******
2026-02-08 03:47:26.349123 | orchestrator | skipping: [testbed-node-0]
2026-02-08 03:47:26.349152 | orchestrator | skipping: [testbed-node-1]
2026-02-08 03:47:26.349161 | orchestrator | skipping: [testbed-node-2]
2026-02-08 03:47:26.349168 | orchestrator | ok: [testbed-node-4]
2026-02-08 03:47:26.349177 | orchestrator | ok: [testbed-node-3]
2026-02-08 03:47:26.349185 | orchestrator | ok: [testbed-node-5]
2026-02-08 03:47:26.349193 | orchestrator |
2026-02-08 03:47:26.349201 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] *********************
2026-02-08 03:47:26.349207 | orchestrator | Sunday 08 February 2026 03:47:20 +0000 (0:00:02.770) 0:02:51.205 *******
2026-02-08 03:47:26.349214 | orchestrator | ok: [testbed-node-3]
2026-02-08 03:47:26.349221 | orchestrator | ok: [testbed-node-4]
2026-02-08 03:47:26.349227 | orchestrator | ok: [testbed-node-5]
2026-02-08 03:47:26.349233 | orchestrator | skipping: [testbed-node-0]
2026-02-08 03:47:26.349240 | orchestrator | skipping: [testbed-node-1]
2026-02-08 03:47:26.349247 | orchestrator | skipping: [testbed-node-2]
2026-02-08 03:47:26.349253 | orchestrator |
2026-02-08 03:47:26.349260 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] *******************************
2026-02-08 03:47:26.349267 | orchestrator | Sunday 08 February 2026 03:47:20 +0000 (0:00:00.644) 0:02:51.849 *******
2026-02-08 03:47:26.349273 | orchestrator | ok: [testbed-node-3]
2026-02-08 03:47:26.349280 | orchestrator | ok: [testbed-node-4]
2026-02-08 03:47:26.349286 | orchestrator | ok: [testbed-node-5]
2026-02-08 03:47:26.349293 | orchestrator | skipping: [testbed-node-0]
2026-02-08 03:47:26.349299 | orchestrator | skipping: [testbed-node-1]
2026-02-08 03:47:26.349306 | orchestrator | skipping: [testbed-node-2]
2026-02-08 03:47:26.349312 | orchestrator |
2026-02-08 03:47:26.349319 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] **************
2026-02-08 03:47:26.349332 | orchestrator | Sunday 08 February 2026 03:47:21 +0000 (0:00:00.971) 0:02:52.821 *******
2026-02-08 03:47:26.349339 | orchestrator | skipping: [testbed-node-3]
2026-02-08 03:47:26.349345 | orchestrator | skipping: [testbed-node-4]
2026-02-08 03:47:26.349352 | orchestrator | skipping: [testbed-node-5]
2026-02-08 03:47:26.349358 | orchestrator | skipping: [testbed-node-0]
2026-02-08 03:47:26.349377 | orchestrator | skipping: [testbed-node-1]
2026-02-08 03:47:26.349384 | orchestrator | skipping: [testbed-node-2]
2026-02-08 03:47:26.349391 | orchestrator |
2026-02-08 03:47:26.349397 | orchestrator | TASK [ceph-config : Render rgw configs] ****************************************
2026-02-08 03:47:26.349404 | orchestrator | Sunday 08 February 2026 03:47:22 +0000 (0:00:00.640) 0:02:53.461 *******
2026-02-08 03:47:26.349411 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2026-02-08 03:47:26.349424 | orchestrator | ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2026-02-08 03:47:26.349431 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-02-08 03:47:26.349437 | orchestrator | skipping: [testbed-node-0]
2026-02-08 03:47:26.349444 | orchestrator | skipping: [testbed-node-1]
2026-02-08 03:47:26.349451 | orchestrator | skipping: [testbed-node-2]
2026-02-08 03:47:26.349457 | orchestrator |
2026-02-08 03:47:26.349464 | orchestrator | TASK [ceph-config : Set config to cluster] *************************************
2026-02-08 03:47:26.349471 | orchestrator | Sunday 08 February 2026 03:47:23 +0000 (0:00:00.981) 0:02:54.443 *******
2026-02-08 03:47:26.349479 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log'}])
2026-02-08 03:47:26.349489 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.13:8081'}])
2026-02-08 03:47:26.349497 | orchestrator | skipping: [testbed-node-3]
2026-02-08 03:47:26.349504 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log'}])
2026-02-08 03:47:26.349511 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.14:8081'}])
2026-02-08 03:47:26.349517 | orchestrator | skipping: [testbed-node-4]
2026-02-08 03:47:26.349524 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log'}])
2026-02-08 03:47:26.349531 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.15:8081'}])
2026-02-08 03:47:26.349538 | orchestrator | skipping: [testbed-node-5]
2026-02-08 03:47:26.349545 | orchestrator | skipping: [testbed-node-0]
2026-02-08 03:47:26.349551 | orchestrator | skipping: [testbed-node-1]
2026-02-08 03:47:26.349563 | orchestrator | skipping: [testbed-node-2]
2026-02-08 03:47:26.349570 | orchestrator |
2026-02-08 03:47:26.349577 | orchestrator | TASK [ceph-config : Set rgw configs to file] ***********************************
2026-02-08 03:47:26.349584 | orchestrator | Sunday 08 February 2026 03:47:24 +0000 (0:00:00.714) 0:02:55.158 *******
2026-02-08 03:47:26.349590 | orchestrator | skipping: [testbed-node-3]
2026-02-08 03:47:26.349597 | orchestrator | skipping: [testbed-node-4]
2026-02-08 03:47:26.349603 | orchestrator | skipping: [testbed-node-5]
2026-02-08 03:47:26.349610 | orchestrator | skipping: [testbed-node-0]
2026-02-08 03:47:26.349616 | orchestrator | skipping: [testbed-node-1]
2026-02-08 03:47:26.349623 | orchestrator | skipping: [testbed-node-2]
2026-02-08 03:47:26.349630 | orchestrator |
2026-02-08 03:47:26.349636 | orchestrator | TASK [ceph-config : Create ceph conf directory] ********************************
2026-02-08 03:47:26.349643 | orchestrator | Sunday 08 February 2026 03:47:25 +0000 (0:00:00.913) 0:02:56.072 *******
2026-02-08 03:47:26.349650 | orchestrator | skipping: [testbed-node-3]
2026-02-08 03:47:26.349656 | orchestrator | skipping: [testbed-node-4]
2026-02-08 03:47:26.349663 | orchestrator | skipping: [testbed-node-5]
2026-02-08 03:47:26.349669 | orchestrator | skipping: [testbed-node-0]
2026-02-08 03:47:26.349678 | orchestrator | skipping: [testbed-node-1]
2026-02-08 03:47:26.349689 | orchestrator | skipping: [testbed-node-2]
2026-02-08 03:47:26.349700 | orchestrator |
2026-02-08 03:47:26.349711 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-02-08 03:47:26.349725 | orchestrator | Sunday 08 February 2026 03:47:25 +0000 (0:00:00.948) 0:02:56.719 *******
2026-02-08 03:47:26.349741 | orchestrator | skipping: [testbed-node-3]
2026-02-08 03:47:26.349757 | orchestrator | skipping: [testbed-node-4]
2026-02-08 03:47:45.178739 | orchestrator | skipping: [testbed-node-5]
2026-02-08 03:47:45.178857 | orchestrator | skipping: [testbed-node-0]
2026-02-08 03:47:45.178879 | orchestrator | skipping: [testbed-node-1]
2026-02-08 03:47:45.178898 | orchestrator | skipping: [testbed-node-2]
2026-02-08 03:47:45.178916 | orchestrator |
2026-02-08 03:47:45.178934 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-02-08 03:47:45.178989 | orchestrator | Sunday 08 February 2026 03:47:26 +0000 (0:00:00.948) 0:02:57.668 *******
2026-02-08 03:47:45.179011 | orchestrator | skipping: [testbed-node-3]
2026-02-08 03:47:45.179030 | orchestrator | skipping: [testbed-node-4]
2026-02-08 03:47:45.179050 | orchestrator | skipping: [testbed-node-5]
2026-02-08 03:47:45.179069 | orchestrator | skipping: [testbed-node-0]
2026-02-08 03:47:45.179105 | orchestrator | skipping: [testbed-node-1]
2026-02-08 03:47:45.179116 | orchestrator | skipping: [testbed-node-2]
2026-02-08 03:47:45.179127 | orchestrator |
2026-02-08 03:47:45.179192 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-02-08 03:47:45.179206 | orchestrator | Sunday 08 February 2026 03:47:27 +0000 (0:00:00.640) 0:02:58.309 *******
2026-02-08 03:47:45.179217 | orchestrator | skipping: [testbed-node-3]
2026-02-08 03:47:45.179227 | orchestrator | skipping: [testbed-node-4]
2026-02-08 03:47:45.179238 | orchestrator | skipping: [testbed-node-5]
2026-02-08 03:47:45.179249 | orchestrator | skipping: [testbed-node-0]
2026-02-08 03:47:45.179259 | orchestrator | skipping: [testbed-node-1]
2026-02-08 03:47:45.179270 | orchestrator | skipping: [testbed-node-2]
2026-02-08 03:47:45.179283 | orchestrator |
2026-02-08 03:47:45.179295 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-02-08 03:47:45.179309 | orchestrator | Sunday 08 February 2026 03:47:28 +0000 (0:00:00.953) 0:02:59.263 *******
2026-02-08 03:47:45.179320 | orchestrator | ok: [testbed-node-3]
2026-02-08 03:47:45.179334 | orchestrator | ok: [testbed-node-4]
2026-02-08 03:47:45.179347 | orchestrator | skipping: [testbed-node-0]
2026-02-08 03:47:45.179359 | orchestrator | ok: [testbed-node-5]
2026-02-08 03:47:45.179372 | orchestrator | skipping: [testbed-node-1]
2026-02-08 03:47:45.179384 | orchestrator | skipping: [testbed-node-2]
2026-02-08 03:47:45.179396 | orchestrator |
2026-02-08 03:47:45.179434 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-02-08 03:47:45.179445 | orchestrator | Sunday 08 February 2026 03:47:29 +0000 (0:00:00.894) 0:03:00.158 *******
2026-02-08 03:47:45.179456 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-02-08 03:47:45.179467 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-02-08 03:47:45.179477 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-02-08 03:47:45.179488 | orchestrator | skipping: [testbed-node-3]
2026-02-08 03:47:45.179498 | orchestrator |
2026-02-08 03:47:45.179509 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-02-08 03:47:45.179519 | orchestrator | Sunday 08 February 2026 03:47:29 +0000 (0:00:00.482) 0:03:00.640 *******
2026-02-08 03:47:45.179531 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-02-08 03:47:45.179541 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-02-08 03:47:45.179552 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-02-08 03:47:45.179562 | orchestrator | skipping: [testbed-node-3]
2026-02-08 03:47:45.179573 | orchestrator |
2026-02-08 03:47:45.179584 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-02-08 03:47:45.179594 | orchestrator | Sunday 08 February 2026 03:47:30 +0000 (0:00:00.462) 0:03:01.102 *******
2026-02-08 03:47:45.179605 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-02-08 03:47:45.179615 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-02-08 03:47:45.179626 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-02-08 03:47:45.179636 | orchestrator | skipping: [testbed-node-3]
2026-02-08 03:47:45.179647 | orchestrator |
2026-02-08 03:47:45.179657 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-02-08 03:47:45.179668 | orchestrator | Sunday 08 February 2026 03:47:30 +0000 (0:00:00.434) 0:03:01.537 *******
2026-02-08 03:47:45.179679 | orchestrator | ok: [testbed-node-3]
2026-02-08 03:47:45.179689 | orchestrator | ok: [testbed-node-4]
2026-02-08 03:47:45.179700 | orchestrator | ok: [testbed-node-5]
2026-02-08 03:47:45.179710 | orchestrator | skipping: [testbed-node-0]
2026-02-08 03:47:45.179721 | orchestrator | skipping: [testbed-node-1]
2026-02-08 03:47:45.179731 | orchestrator | skipping: [testbed-node-2]
2026-02-08 03:47:45.179742 | orchestrator |
2026-02-08 03:47:45.179753 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-02-08 03:47:45.179763 | orchestrator | Sunday 08 February 2026 03:47:31 +0000 (0:00:00.650) 0:03:02.187 *******
2026-02-08 03:47:45.179774 | orchestrator | ok: [testbed-node-3] => (item=0)
2026-02-08 03:47:45.179785 | orchestrator | ok: [testbed-node-4] => (item=0)
2026-02-08 03:47:45.179795 | orchestrator | ok: [testbed-node-5] => (item=0)
2026-02-08 03:47:45.179806 | orchestrator | skipping: [testbed-node-0] => (item=0)
2026-02-08 03:47:45.179816 | orchestrator | skipping: [testbed-node-0]
2026-02-08 03:47:45.179827 | orchestrator | skipping: [testbed-node-1] => (item=0)
2026-02-08 03:47:45.179837 | orchestrator | skipping: [testbed-node-1]
2026-02-08 03:47:45.179847 | orchestrator | skipping: [testbed-node-2] => (item=0)
2026-02-08 03:47:45.179858 | orchestrator | skipping: [testbed-node-2]
2026-02-08 03:47:45.179869 | orchestrator |
2026-02-08 03:47:45.179879 | orchestrator | TASK [ceph-config : Generate Ceph file] ****************************************
2026-02-08 03:47:45.179890 | orchestrator | Sunday 08 February 2026 03:47:33 +0000 (0:00:02.025) 0:03:04.213 *******
2026-02-08 03:47:45.179901 | orchestrator | changed: [testbed-node-3]
2026-02-08 03:47:45.179911 | orchestrator | changed: [testbed-node-4]
2026-02-08 03:47:45.179921 | orchestrator | changed: [testbed-node-5]
2026-02-08 03:47:45.179932 | orchestrator | changed: [testbed-node-0]
2026-02-08 03:47:45.179942 | orchestrator | changed: [testbed-node-1]
2026-02-08 03:47:45.179953 | orchestrator | changed: [testbed-node-2]
2026-02-08 03:47:45.179963 | orchestrator |
2026-02-08 03:47:45.179974 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2026-02-08 03:47:45.179992 | orchestrator | Sunday 08 February 2026 03:47:36 +0000 (0:00:02.903) 0:03:07.116 *******
2026-02-08 03:47:45.180003 | orchestrator | changed: [testbed-node-3]
2026-02-08 03:47:45.180014 | orchestrator | changed: [testbed-node-4]
2026-02-08 03:47:45.180024 | orchestrator | changed: [testbed-node-5]
2026-02-08 03:47:45.180054 | orchestrator | changed: [testbed-node-0]
2026-02-08 03:47:45.180066 | orchestrator | changed: [testbed-node-1]
2026-02-08 03:47:45.180076 | orchestrator | changed: [testbed-node-2]
2026-02-08 03:47:45.180087 | orchestrator |
2026-02-08 03:47:45.180098 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] **********************************
2026-02-08 03:47:45.180109 | orchestrator | Sunday 08 February 2026 03:47:37 +0000 (0:00:01.000) 0:03:08.116 *******
2026-02-08 03:47:45.180120 | orchestrator | skipping: [testbed-node-3]
2026-02-08 03:47:45.180131 | orchestrator | skipping: [testbed-node-4]
2026-02-08 03:47:45.180164 | orchestrator | skipping: [testbed-node-5]
2026-02-08 03:47:45.180182 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-08 03:47:45.180194 | orchestrator |
2026-02-08 03:47:45.180205 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ********
2026-02-08 03:47:45.180215 | orchestrator | Sunday 08 February 2026 03:47:38 +0000 (0:00:01.180) 0:03:09.297 *******
2026-02-08 03:47:45.180226 | orchestrator | ok: [testbed-node-0]
2026-02-08 03:47:45.180237 | orchestrator | ok: [testbed-node-1]
2026-02-08 03:47:45.180247 | orchestrator | ok: [testbed-node-2]
2026-02-08 03:47:45.180258 | orchestrator |
2026-02-08 03:47:45.180269 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mon restart script] ***********************
2026-02-08 03:47:45.180280 | orchestrator | Sunday 08 February 2026 03:47:38 +0000 (0:00:00.343) 0:03:09.641 *******
2026-02-08 03:47:45.180290 | orchestrator | changed: [testbed-node-0]
2026-02-08 03:47:45.180301 | orchestrator | changed: [testbed-node-1]
2026-02-08 03:47:45.180312 | orchestrator | changed: [testbed-node-2]
2026-02-08 03:47:45.180322 | orchestrator |
2026-02-08 03:47:45.180333 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ********************
2026-02-08 03:47:45.180344 | orchestrator | Sunday 08 February 2026 03:47:40 +0000 (0:00:01.467) 0:03:11.108 *******
2026-02-08 03:47:45.180354 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-02-08 03:47:45.180365 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-02-08 03:47:45.180376 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-02-08 03:47:45.180386 | orchestrator | skipping: [testbed-node-0]
2026-02-08 03:47:45.180397 | orchestrator |
2026-02-08 03:47:45.180408 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] *********
2026-02-08 03:47:45.180418 | orchestrator | Sunday 08 February 2026 03:47:40 +0000 (0:00:00.676) 0:03:11.784 *******
2026-02-08 03:47:45.180429 | orchestrator | ok: [testbed-node-0]
2026-02-08 03:47:45.180440 | orchestrator | ok: [testbed-node-1]
2026-02-08 03:47:45.180450 | orchestrator | ok: [testbed-node-2]
2026-02-08 03:47:45.180461 | orchestrator |
2026-02-08 03:47:45.180472 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] **********************************
2026-02-08 03:47:45.180482 | orchestrator | Sunday 08 February 2026 03:47:41 +0000 (0:00:00.364) 0:03:12.149 *******
2026-02-08 03:47:45.180493 | orchestrator | skipping: [testbed-node-0]
2026-02-08 03:47:45.180504 | orchestrator | skipping: [testbed-node-1]
2026-02-08 03:47:45.180515 | orchestrator | skipping: [testbed-node-2]
2026-02-08 03:47:45.180526 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-02-08 03:47:45.180536 | orchestrator |
2026-02-08 03:47:45.180547 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] **********************
2026-02-08 03:47:45.180558 | orchestrator | Sunday 08 February 2026 03:47:42 +0000 (0:00:01.163) 0:03:13.313 *******
2026-02-08 03:47:45.180569 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-02-08 03:47:45.180579 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-02-08 03:47:45.180590 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-02-08 03:47:45.180609 | orchestrator | skipping: [testbed-node-3]
2026-02-08 03:47:45.180620 | orchestrator |
2026-02-08 03:47:45.180631 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ********
2026-02-08 03:47:45.180641 | orchestrator | Sunday 08 February 2026 03:47:42 +0000 (0:00:00.439) 0:03:13.753 *******
2026-02-08 03:47:45.180652 | orchestrator | skipping: [testbed-node-3]
2026-02-08 03:47:45.180663 | orchestrator | skipping: [testbed-node-4]
2026-02-08 03:47:45.180673 | orchestrator | skipping: [testbed-node-5]
2026-02-08 03:47:45.180684 | orchestrator |
2026-02-08 03:47:45.180695 | orchestrator | RUNNING HANDLER [ceph-handler : Unset noup flag] *******************************
2026-02-08 03:47:45.180705 | orchestrator | Sunday 08 February 2026 03:47:43 +0000 (0:00:00.351) 0:03:14.105 *******
2026-02-08 03:47:45.180716 | orchestrator | skipping: [testbed-node-3]
2026-02-08 03:47:45.180727 | orchestrator |
2026-02-08 03:47:45.180737 | orchestrator | RUNNING HANDLER [ceph-handler : Copy osd restart script] ***********************
2026-02-08 03:47:45.180748 | orchestrator | Sunday 08 February 2026 03:47:43 +0000 (0:00:00.253) 0:03:14.358 *******
2026-02-08 03:47:45.180759 | orchestrator | skipping: [testbed-node-3]
2026-02-08 03:47:45.180770 | orchestrator | skipping: [testbed-node-4]
2026-02-08 03:47:45.180780 | orchestrator | skipping: [testbed-node-5]
2026-02-08 03:47:45.180791 | orchestrator |
2026-02-08 03:47:45.180802 | orchestrator | RUNNING HANDLER [ceph-handler : Get pool list] *********************************
2026-02-08 03:47:45.180812 | orchestrator | Sunday 08 February 2026 03:47:43 +0000 (0:00:00.355) 0:03:14.713 *******
2026-02-08 03:47:45.180823 | orchestrator | skipping: [testbed-node-3]
2026-02-08 03:47:45.180833 | orchestrator |
2026-02-08 03:47:45.180844 | orchestrator | RUNNING HANDLER [ceph-handler : Get balancer module status] ********************
2026-02-08 03:47:45.180854 | orchestrator | Sunday 08 February 2026 03:47:44 +0000 (0:00:00.743) 0:03:15.457 *******
2026-02-08 03:47:45.180865 | orchestrator | skipping: [testbed-node-3]
2026-02-08 03:47:45.180876 | orchestrator |
2026-02-08 03:47:45.180886 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] **************
2026-02-08 03:47:45.180897 | orchestrator | Sunday 08 February 2026 03:47:44 +0000 (0:00:00.302) 0:03:15.760 *******
2026-02-08 03:47:45.180908 | orchestrator | skipping: [testbed-node-3]
2026-02-08 03:47:45.180918 | orchestrator |
2026-02-08 03:47:45.180929 | orchestrator | RUNNING HANDLER [ceph-handler : Disable balancer] ******************************
2026-02-08 03:47:45.180940 | orchestrator | Sunday 08 February 2026 03:47:44 +0000 (0:00:00.139) 0:03:15.899 *******
2026-02-08 03:47:45.180950 | orchestrator | skipping: [testbed-node-3]
2026-02-08 03:47:45.180961 | orchestrator |
2026-02-08 03:47:45.180979 | orchestrator | RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] *****************
2026-02-08 03:48:04.004227 | orchestrator | Sunday 08 February 2026 03:47:45 +0000 (0:00:00.261) 0:03:16.161 *******
2026-02-08 03:48:04.004359 | orchestrator | skipping: [testbed-node-3]
2026-02-08 03:48:04.004385 | orchestrator |
2026-02-08 03:48:04.004404 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] *******************
2026-02-08 03:48:04.004424 | orchestrator | Sunday 08 February 2026 03:47:45 +0000 (0:00:00.264) 0:03:16.426 *******
2026-02-08 03:48:04.004443 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-02-08 03:48:04.004463 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-02-08 03:48:04.004500 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-02-08 03:48:04.004520 | orchestrator | skipping: [testbed-node-3]
2026-02-08 03:48:04.004531 | orchestrator |
2026-02-08 03:48:04.004541 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] *********
2026-02-08 03:48:04.004552 | orchestrator | Sunday 08 February 2026 03:47:45 +0000 (0:00:00.412) 0:03:16.838 *******
2026-02-08 03:48:04.004569 | orchestrator | skipping: [testbed-node-3]
2026-02-08 03:48:04.004585 | orchestrator | skipping: [testbed-node-4]
2026-02-08 03:48:04.004605 | orchestrator | skipping: [testbed-node-5]
2026-02-08 03:48:04.004622 | orchestrator |
2026-02-08 03:48:04.004638 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] ***************
2026-02-08 03:48:04.004689 | orchestrator | Sunday 08 February 2026 03:47:46 +0000 (0:00:00.342) 0:03:17.180 *******
2026-02-08 03:48:04.004706 | orchestrator | skipping: [testbed-node-3]
2026-02-08 03:48:04.004722 | orchestrator |
2026-02-08 03:48:04.004734 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable balancer] ****************************
2026-02-08 03:48:04.004746 | orchestrator | Sunday 08 February 2026 03:47:46 +0000 (0:00:00.236) 0:03:17.416 *******
2026-02-08 03:48:04.004758 | orchestrator | skipping: [testbed-node-3]
2026-02-08 03:48:04.004769 | orchestrator |
2026-02-08 03:48:04.004781 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] **********************************
2026-02-08 03:48:04.004793 | orchestrator | Sunday 08 February 2026 03:47:46 +0000 (0:00:00.258) 0:03:17.675 *******
2026-02-08 03:48:04.004805 | orchestrator | skipping: [testbed-node-0]
2026-02-08 03:48:04.004816 | orchestrator | skipping: [testbed-node-1]
2026-02-08 03:48:04.004828 | orchestrator | skipping: [testbed-node-2]
2026-02-08 03:48:04.004841 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-02-08 03:48:04.004853 | orchestrator |
2026-02-08 03:48:04.004864 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called before restart] ********
2026-02-08 03:48:04.004876 | orchestrator | Sunday 08 February 2026 03:47:47 +0000 (0:00:01.160) 0:03:18.835 *******
2026-02-08 03:48:04.004887 | orchestrator | ok: [testbed-node-3]
2026-02-08 03:48:04.004898 | orchestrator | ok: [testbed-node-4]
2026-02-08 03:48:04.004907 | orchestrator | ok: [testbed-node-5]
2026-02-08 03:48:04.004917 | orchestrator |
2026-02-08 03:48:04.004926 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mds restart script] ***********************
2026-02-08 03:48:04.004937 | orchestrator | Sunday 08 February 2026 03:47:48 +0000 (0:00:00.330) 0:03:19.165 *******
2026-02-08 03:48:04.004946 | orchestrator | changed: [testbed-node-3]
2026-02-08 03:48:04.004956 | orchestrator | changed: [testbed-node-4]
2026-02-08 03:48:04.004966 | orchestrator | changed: [testbed-node-5]
2026-02-08 03:48:04.004975 | orchestrator |
2026-02-08 03:48:04.004985 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ********************
2026-02-08 03:48:04.004994 | orchestrator | Sunday 08 February 2026 03:47:49 +0000 (0:00:01.473) 0:03:20.639 *******
2026-02-08 03:48:04.005004 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-02-08 03:48:04.005014 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-02-08 03:48:04.005023 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-02-08 03:48:04.005033 | orchestrator | skipping: [testbed-node-3]
2026-02-08 03:48:04.005042 | orchestrator |
2026-02-08 03:48:04.005052 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called after restart] *********
2026-02-08 03:48:04.005062 | orchestrator | Sunday 08 February 2026 03:47:50 +0000 (0:00:00.746) 0:03:21.386 *******
2026-02-08 03:48:04.005071 | orchestrator | ok: [testbed-node-3]
2026-02-08 03:48:04.005081 | orchestrator | ok: [testbed-node-4]
2026-02-08 03:48:04.005090 | orchestrator | ok: [testbed-node-5]
2026-02-08 03:48:04.005100 | orchestrator |
2026-02-08 03:48:04.005109 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] **********************************
2026-02-08 03:48:04.005119 | orchestrator | Sunday 08 February 2026 03:47:50 +0000 (0:00:00.360) 0:03:21.746 *******
2026-02-08 03:48:04.005128 | orchestrator | skipping: [testbed-node-0]
2026-02-08 03:48:04.005138 | orchestrator | skipping: [testbed-node-1]
2026-02-08 03:48:04.005195 | orchestrator | skipping: [testbed-node-2]
2026-02-08 03:48:04.005207 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-02-08 03:48:04.005217 | orchestrator |
2026-02-08 03:48:04.005227 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ********
2026-02-08 03:48:04.005236 | orchestrator | Sunday 08 February 2026 03:47:51 +0000 (0:00:01.076) 0:03:22.823 *******
2026-02-08 03:48:04.005246 | orchestrator | ok: [testbed-node-3]
2026-02-08 03:48:04.005255 | orchestrator | ok: [testbed-node-4]
2026-02-08 03:48:04.005265 | orchestrator | ok: [testbed-node-5]
2026-02-08 03:48:04.005274 | orchestrator |
2026-02-08 03:48:04.005292 | orchestrator | RUNNING HANDLER [ceph-handler : Copy rgw restart script] ***********************
2026-02-08 03:48:04.005301 | orchestrator | Sunday 08 February 2026 03:47:52 +0000 (0:00:00.347) 0:03:23.170 *******
2026-02-08 03:48:04.005311 | orchestrator | changed: [testbed-node-3]
2026-02-08 03:48:04.005320 | orchestrator | changed: [testbed-node-4]
2026-02-08 03:48:04.005330 | orchestrator | changed: [testbed-node-5]
2026-02-08 03:48:04.005339 | orchestrator |
2026-02-08 03:48:04.005349 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] ********************
2026-02-08 03:48:04.005358 | orchestrator | Sunday 08 February 2026 03:47:53 +0000 (0:00:01.228) 0:03:24.399 *******
2026-02-08 03:48:04.005368 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-02-08 03:48:04.005378 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-02-08 03:48:04.005387 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-02-08 03:48:04.005415 | orchestrator | skipping: [testbed-node-3]
2026-02-08 03:48:04.005426 | orchestrator |
2026-02-08 03:48:04.005435 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] *********
2026-02-08 03:48:04.005445 | orchestrator | Sunday 08 February 2026 03:47:54 +0000 (0:00:00.935) 0:03:25.334 *******
2026-02-08 03:48:04.005455 | orchestrator | ok: [testbed-node-3]
2026-02-08 03:48:04.005464 | orchestrator | ok: [testbed-node-4]
2026-02-08 03:48:04.005474 | orchestrator | ok: [testbed-node-5]
2026-02-08 03:48:04.005483 | orchestrator |
2026-02-08 03:48:04.005493 | orchestrator | RUNNING HANDLER [ceph-handler : Rbdmirrors handler] ****************************
2026-02-08 03:48:04.005503 | orchestrator | Sunday 08 February 2026 03:47:54 +0000 (0:00:00.625) 0:03:25.959 *******
2026-02-08 03:48:04.005520 | orchestrator | skipping: [testbed-node-3]
2026-02-08 03:48:04.005529 | orchestrator | skipping: [testbed-node-4]
2026-02-08 03:48:04.005539 | orchestrator | skipping: [testbed-node-5]
2026-02-08 03:48:04.005549 | orchestrator | skipping: [testbed-node-0]
2026-02-08 03:48:04.005558 | orchestrator | skipping: [testbed-node-1]
2026-02-08 03:48:04.005568 | orchestrator | skipping: [testbed-node-2]
2026-02-08 03:48:04.005578 | orchestrator |
2026-02-08 03:48:04.005587 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] **********************************
2026-02-08 03:48:04.005597 | orchestrator | Sunday 08 February 2026 03:47:55 +0000 (0:00:00.713) 0:03:26.672 *******
2026-02-08 03:48:04.005607 | orchestrator | skipping: [testbed-node-3]
2026-02-08 03:48:04.005616 | orchestrator | skipping: [testbed-node-4]
2026-02-08 03:48:04.005626 | orchestrator | skipping: [testbed-node-5]
2026-02-08 03:48:04.005635 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-08 03:48:04.005645 | orchestrator |
2026-02-08 03:48:04.005655 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ********
2026-02-08 03:48:04.005667 | orchestrator | Sunday 08 February 2026 03:47:56 +0000 (0:00:01.167) 0:03:27.840 *******
2026-02-08 03:48:04.005683 | orchestrator | ok: [testbed-node-0]
2026-02-08 03:48:04.005698 | orchestrator | ok: [testbed-node-1]
2026-02-08 03:48:04.005712 | orchestrator | ok: [testbed-node-2]
2026-02-08 03:48:04.005728 | orchestrator |
2026-02-08 03:48:04.005744 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mgr restart script] ***********************
2026-02-08 03:48:04.005760 | orchestrator | Sunday 08 February 2026 03:47:57 +0000 (0:00:00.373) 0:03:28.213 *******
2026-02-08 03:48:04.005773 | orchestrator | changed: [testbed-node-0]
2026-02-08 03:48:04.005787 | orchestrator | changed: [testbed-node-1]
2026-02-08 03:48:04.005802 | orchestrator | changed: [testbed-node-2]
2026-02-08 03:48:04.005818 | orchestrator |
2026-02-08 03:48:04.005833 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ********************
2026-02-08 03:48:04.005850 | orchestrator | Sunday 08 February 2026 03:47:58 +0000 (0:00:01.216) 0:03:29.430 *******
2026-02-08 03:48:04.005865 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-02-08 03:48:04.005883 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-02-08 03:48:04.005899 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-02-08 03:48:04.005928 | orchestrator | skipping: [testbed-node-0]
2026-02-08 03:48:04.005946 | orchestrator |
2026-02-08 03:48:04.005967 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] *********
2026-02-08 03:48:04.005991 | orchestrator | Sunday 08 February 2026 03:47:59 +0000 (0:00:01.063) 0:03:30.494 *******
2026-02-08 03:48:04.006007 | orchestrator | ok: [testbed-node-0]
2026-02-08 03:48:04.006105 | orchestrator | ok: [testbed-node-1]
2026-02-08 03:48:04.006124 | orchestrator | ok: [testbed-node-2]
2026-02-08 03:48:04.006141 | orchestrator |
2026-02-08 03:48:04.006194 | orchestrator | PLAY [Apply role ceph-mon] *****************************************************
2026-02-08
03:48:04.006204 | orchestrator | 2026-02-08 03:48:04.006214 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-02-08 03:48:04.006224 | orchestrator | Sunday 08 February 2026 03:48:00 +0000 (0:00:00.944) 0:03:31.438 ******* 2026-02-08 03:48:04.006234 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-08 03:48:04.006246 | orchestrator | 2026-02-08 03:48:04.006256 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-02-08 03:48:04.006266 | orchestrator | Sunday 08 February 2026 03:48:01 +0000 (0:00:00.872) 0:03:32.310 ******* 2026-02-08 03:48:04.006276 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-08 03:48:04.006286 | orchestrator | 2026-02-08 03:48:04.006296 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-02-08 03:48:04.006306 | orchestrator | Sunday 08 February 2026 03:48:01 +0000 (0:00:00.679) 0:03:32.990 ******* 2026-02-08 03:48:04.006316 | orchestrator | ok: [testbed-node-0] 2026-02-08 03:48:04.006325 | orchestrator | ok: [testbed-node-1] 2026-02-08 03:48:04.006335 | orchestrator | ok: [testbed-node-2] 2026-02-08 03:48:04.006345 | orchestrator | 2026-02-08 03:48:04.006355 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-02-08 03:48:04.006364 | orchestrator | Sunday 08 February 2026 03:48:02 +0000 (0:00:00.726) 0:03:33.717 ******* 2026-02-08 03:48:04.006374 | orchestrator | skipping: [testbed-node-0] 2026-02-08 03:48:04.006384 | orchestrator | skipping: [testbed-node-1] 2026-02-08 03:48:04.006394 | orchestrator | skipping: [testbed-node-2] 2026-02-08 03:48:04.006403 | orchestrator | 2026-02-08 03:48:04.006413 | orchestrator | TASK [ceph-handler : Check for a mds 
container] ******************************** 2026-02-08 03:48:04.006423 | orchestrator | Sunday 08 February 2026 03:48:03 +0000 (0:00:00.588) 0:03:34.305 ******* 2026-02-08 03:48:04.006437 | orchestrator | skipping: [testbed-node-0] 2026-02-08 03:48:04.006453 | orchestrator | skipping: [testbed-node-1] 2026-02-08 03:48:04.006469 | orchestrator | skipping: [testbed-node-2] 2026-02-08 03:48:04.006484 | orchestrator | 2026-02-08 03:48:04.006501 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-02-08 03:48:04.006517 | orchestrator | Sunday 08 February 2026 03:48:03 +0000 (0:00:00.349) 0:03:34.654 ******* 2026-02-08 03:48:04.006528 | orchestrator | skipping: [testbed-node-0] 2026-02-08 03:48:04.006538 | orchestrator | skipping: [testbed-node-1] 2026-02-08 03:48:04.006548 | orchestrator | skipping: [testbed-node-2] 2026-02-08 03:48:04.006557 | orchestrator | 2026-02-08 03:48:04.006567 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-02-08 03:48:04.006590 | orchestrator | Sunday 08 February 2026 03:48:03 +0000 (0:00:00.332) 0:03:34.987 ******* 2026-02-08 03:48:26.264856 | orchestrator | ok: [testbed-node-0] 2026-02-08 03:48:26.264962 | orchestrator | ok: [testbed-node-1] 2026-02-08 03:48:26.264976 | orchestrator | ok: [testbed-node-2] 2026-02-08 03:48:26.264986 | orchestrator | 2026-02-08 03:48:26.264996 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-02-08 03:48:26.265013 | orchestrator | Sunday 08 February 2026 03:48:04 +0000 (0:00:00.784) 0:03:35.772 ******* 2026-02-08 03:48:26.265029 | orchestrator | skipping: [testbed-node-0] 2026-02-08 03:48:26.265046 | orchestrator | skipping: [testbed-node-1] 2026-02-08 03:48:26.265063 | orchestrator | skipping: [testbed-node-2] 2026-02-08 03:48:26.265109 | orchestrator | 2026-02-08 03:48:26.265140 | orchestrator | TASK [ceph-handler : Check for a nfs container] 
******************************** 2026-02-08 03:48:26.265149 | orchestrator | Sunday 08 February 2026 03:48:05 +0000 (0:00:00.659) 0:03:36.431 ******* 2026-02-08 03:48:26.265158 | orchestrator | skipping: [testbed-node-0] 2026-02-08 03:48:26.265197 | orchestrator | skipping: [testbed-node-1] 2026-02-08 03:48:26.265213 | orchestrator | skipping: [testbed-node-2] 2026-02-08 03:48:26.265224 | orchestrator | 2026-02-08 03:48:26.265233 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-02-08 03:48:26.265241 | orchestrator | Sunday 08 February 2026 03:48:05 +0000 (0:00:00.342) 0:03:36.773 ******* 2026-02-08 03:48:26.265250 | orchestrator | ok: [testbed-node-0] 2026-02-08 03:48:26.265259 | orchestrator | ok: [testbed-node-1] 2026-02-08 03:48:26.265267 | orchestrator | ok: [testbed-node-2] 2026-02-08 03:48:26.265276 | orchestrator | 2026-02-08 03:48:26.265284 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-02-08 03:48:26.265293 | orchestrator | Sunday 08 February 2026 03:48:06 +0000 (0:00:00.727) 0:03:37.501 ******* 2026-02-08 03:48:26.265301 | orchestrator | ok: [testbed-node-1] 2026-02-08 03:48:26.265310 | orchestrator | ok: [testbed-node-2] 2026-02-08 03:48:26.265319 | orchestrator | ok: [testbed-node-0] 2026-02-08 03:48:26.265327 | orchestrator | 2026-02-08 03:48:26.265335 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-02-08 03:48:26.265344 | orchestrator | Sunday 08 February 2026 03:48:08 +0000 (0:00:01.547) 0:03:39.048 ******* 2026-02-08 03:48:26.265353 | orchestrator | skipping: [testbed-node-0] 2026-02-08 03:48:26.265361 | orchestrator | skipping: [testbed-node-1] 2026-02-08 03:48:26.265370 | orchestrator | skipping: [testbed-node-2] 2026-02-08 03:48:26.265385 | orchestrator | 2026-02-08 03:48:26.265399 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 
2026-02-08 03:48:26.265414 | orchestrator | Sunday 08 February 2026 03:48:08 +0000 (0:00:00.603) 0:03:39.652 ******* 2026-02-08 03:48:26.265428 | orchestrator | ok: [testbed-node-0] 2026-02-08 03:48:26.265442 | orchestrator | ok: [testbed-node-1] 2026-02-08 03:48:26.265456 | orchestrator | ok: [testbed-node-2] 2026-02-08 03:48:26.265472 | orchestrator | 2026-02-08 03:48:26.265484 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-02-08 03:48:26.265494 | orchestrator | Sunday 08 February 2026 03:48:09 +0000 (0:00:00.376) 0:03:40.029 ******* 2026-02-08 03:48:26.265502 | orchestrator | skipping: [testbed-node-0] 2026-02-08 03:48:26.265511 | orchestrator | skipping: [testbed-node-1] 2026-02-08 03:48:26.265520 | orchestrator | skipping: [testbed-node-2] 2026-02-08 03:48:26.265529 | orchestrator | 2026-02-08 03:48:26.265537 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-02-08 03:48:26.265546 | orchestrator | Sunday 08 February 2026 03:48:09 +0000 (0:00:00.327) 0:03:40.356 ******* 2026-02-08 03:48:26.265555 | orchestrator | skipping: [testbed-node-0] 2026-02-08 03:48:26.265563 | orchestrator | skipping: [testbed-node-1] 2026-02-08 03:48:26.265572 | orchestrator | skipping: [testbed-node-2] 2026-02-08 03:48:26.265580 | orchestrator | 2026-02-08 03:48:26.265589 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-02-08 03:48:26.265598 | orchestrator | Sunday 08 February 2026 03:48:09 +0000 (0:00:00.346) 0:03:40.703 ******* 2026-02-08 03:48:26.265606 | orchestrator | skipping: [testbed-node-0] 2026-02-08 03:48:26.265615 | orchestrator | skipping: [testbed-node-1] 2026-02-08 03:48:26.265623 | orchestrator | skipping: [testbed-node-2] 2026-02-08 03:48:26.265632 | orchestrator | 2026-02-08 03:48:26.265641 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-02-08 
03:48:26.265649 | orchestrator | Sunday 08 February 2026 03:48:10 +0000 (0:00:00.619) 0:03:41.323 ******* 2026-02-08 03:48:26.265658 | orchestrator | skipping: [testbed-node-0] 2026-02-08 03:48:26.265666 | orchestrator | skipping: [testbed-node-1] 2026-02-08 03:48:26.265675 | orchestrator | skipping: [testbed-node-2] 2026-02-08 03:48:26.265683 | orchestrator | 2026-02-08 03:48:26.265692 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-02-08 03:48:26.265710 | orchestrator | Sunday 08 February 2026 03:48:10 +0000 (0:00:00.330) 0:03:41.653 ******* 2026-02-08 03:48:26.265719 | orchestrator | skipping: [testbed-node-0] 2026-02-08 03:48:26.265727 | orchestrator | skipping: [testbed-node-1] 2026-02-08 03:48:26.265737 | orchestrator | skipping: [testbed-node-2] 2026-02-08 03:48:26.265753 | orchestrator | 2026-02-08 03:48:26.265767 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-02-08 03:48:26.265781 | orchestrator | Sunday 08 February 2026 03:48:10 +0000 (0:00:00.341) 0:03:41.995 ******* 2026-02-08 03:48:26.265796 | orchestrator | ok: [testbed-node-0] 2026-02-08 03:48:26.265811 | orchestrator | ok: [testbed-node-1] 2026-02-08 03:48:26.265826 | orchestrator | ok: [testbed-node-2] 2026-02-08 03:48:26.265839 | orchestrator | 2026-02-08 03:48:26.265848 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-02-08 03:48:26.265856 | orchestrator | Sunday 08 February 2026 03:48:11 +0000 (0:00:00.358) 0:03:42.353 ******* 2026-02-08 03:48:26.265865 | orchestrator | ok: [testbed-node-0] 2026-02-08 03:48:26.265873 | orchestrator | ok: [testbed-node-1] 2026-02-08 03:48:26.265882 | orchestrator | ok: [testbed-node-2] 2026-02-08 03:48:26.265890 | orchestrator | 2026-02-08 03:48:26.265899 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-02-08 03:48:26.265907 | orchestrator | Sunday 
08 February 2026 03:48:11 +0000 (0:00:00.638) 0:03:42.992 ******* 2026-02-08 03:48:26.265916 | orchestrator | ok: [testbed-node-0] 2026-02-08 03:48:26.265924 | orchestrator | ok: [testbed-node-1] 2026-02-08 03:48:26.265933 | orchestrator | ok: [testbed-node-2] 2026-02-08 03:48:26.265941 | orchestrator | 2026-02-08 03:48:26.265950 | orchestrator | TASK [ceph-mon : Set_fact container_exec_cmd] ********************************** 2026-02-08 03:48:26.265959 | orchestrator | Sunday 08 February 2026 03:48:12 +0000 (0:00:00.600) 0:03:43.592 ******* 2026-02-08 03:48:26.265967 | orchestrator | ok: [testbed-node-0] 2026-02-08 03:48:26.265976 | orchestrator | ok: [testbed-node-1] 2026-02-08 03:48:26.266002 | orchestrator | ok: [testbed-node-2] 2026-02-08 03:48:26.266012 | orchestrator | 2026-02-08 03:48:26.266066 | orchestrator | TASK [ceph-mon : Include deploy_monitors.yml] ********************************** 2026-02-08 03:48:26.266076 | orchestrator | Sunday 08 February 2026 03:48:12 +0000 (0:00:00.361) 0:03:43.954 ******* 2026-02-08 03:48:26.266086 | orchestrator | included: /ansible/roles/ceph-mon/tasks/deploy_monitors.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-08 03:48:26.266098 | orchestrator | 2026-02-08 03:48:26.266121 | orchestrator | TASK [ceph-mon : Check if monitor initial keyring already exists] ************** 2026-02-08 03:48:26.266137 | orchestrator | Sunday 08 February 2026 03:48:13 +0000 (0:00:00.885) 0:03:44.840 ******* 2026-02-08 03:48:26.266151 | orchestrator | skipping: [testbed-node-0] 2026-02-08 03:48:26.266235 | orchestrator | 2026-02-08 03:48:26.266252 | orchestrator | TASK [ceph-mon : Generate monitor initial keyring] ***************************** 2026-02-08 03:48:26.266267 | orchestrator | Sunday 08 February 2026 03:48:14 +0000 (0:00:00.172) 0:03:45.012 ******* 2026-02-08 03:48:26.266283 | orchestrator | changed: [testbed-node-0 -> localhost] 2026-02-08 03:48:26.266298 | orchestrator | 2026-02-08 03:48:26.266307 | orchestrator | 
TASK [ceph-mon : Set_fact _initial_mon_key_success] **************************** 2026-02-08 03:48:26.266316 | orchestrator | Sunday 08 February 2026 03:48:15 +0000 (0:00:01.096) 0:03:46.109 ******* 2026-02-08 03:48:26.266325 | orchestrator | ok: [testbed-node-0] 2026-02-08 03:48:26.266333 | orchestrator | ok: [testbed-node-1] 2026-02-08 03:48:26.266342 | orchestrator | ok: [testbed-node-2] 2026-02-08 03:48:26.266351 | orchestrator | 2026-02-08 03:48:26.266359 | orchestrator | TASK [ceph-mon : Get initial keyring when it already exists] ******************* 2026-02-08 03:48:26.266368 | orchestrator | Sunday 08 February 2026 03:48:15 +0000 (0:00:00.361) 0:03:46.470 ******* 2026-02-08 03:48:26.266377 | orchestrator | ok: [testbed-node-0] 2026-02-08 03:48:26.266385 | orchestrator | ok: [testbed-node-1] 2026-02-08 03:48:26.266394 | orchestrator | ok: [testbed-node-2] 2026-02-08 03:48:26.266402 | orchestrator | 2026-02-08 03:48:26.266411 | orchestrator | TASK [ceph-mon : Create monitor initial keyring] ******************************* 2026-02-08 03:48:26.266428 | orchestrator | Sunday 08 February 2026 03:48:16 +0000 (0:00:00.673) 0:03:47.144 ******* 2026-02-08 03:48:26.266437 | orchestrator | changed: [testbed-node-0] 2026-02-08 03:48:26.266446 | orchestrator | changed: [testbed-node-1] 2026-02-08 03:48:26.266454 | orchestrator | changed: [testbed-node-2] 2026-02-08 03:48:26.266463 | orchestrator | 2026-02-08 03:48:26.266471 | orchestrator | TASK [ceph-mon : Copy the initial key in /etc/ceph (for containers)] *********** 2026-02-08 03:48:26.266480 | orchestrator | Sunday 08 February 2026 03:48:17 +0000 (0:00:01.196) 0:03:48.340 ******* 2026-02-08 03:48:26.266489 | orchestrator | changed: [testbed-node-0] 2026-02-08 03:48:26.266498 | orchestrator | changed: [testbed-node-1] 2026-02-08 03:48:26.266506 | orchestrator | changed: [testbed-node-2] 2026-02-08 03:48:26.266515 | orchestrator | 2026-02-08 03:48:26.266524 | orchestrator | TASK [ceph-mon : Create monitor directory] 
************************************* 2026-02-08 03:48:26.266532 | orchestrator | Sunday 08 February 2026 03:48:18 +0000 (0:00:00.795) 0:03:49.135 ******* 2026-02-08 03:48:26.266541 | orchestrator | changed: [testbed-node-0] 2026-02-08 03:48:26.266553 | orchestrator | changed: [testbed-node-1] 2026-02-08 03:48:26.266568 | orchestrator | changed: [testbed-node-2] 2026-02-08 03:48:26.266583 | orchestrator | 2026-02-08 03:48:26.266597 | orchestrator | TASK [ceph-mon : Recursively fix ownership of monitor directory] *************** 2026-02-08 03:48:26.266612 | orchestrator | Sunday 08 February 2026 03:48:18 +0000 (0:00:00.706) 0:03:49.842 ******* 2026-02-08 03:48:26.266627 | orchestrator | ok: [testbed-node-0] 2026-02-08 03:48:26.266642 | orchestrator | ok: [testbed-node-1] 2026-02-08 03:48:26.266658 | orchestrator | ok: [testbed-node-2] 2026-02-08 03:48:26.266673 | orchestrator | 2026-02-08 03:48:26.266682 | orchestrator | TASK [ceph-mon : Create admin keyring] ***************************************** 2026-02-08 03:48:26.266690 | orchestrator | Sunday 08 February 2026 03:48:19 +0000 (0:00:01.047) 0:03:50.889 ******* 2026-02-08 03:48:26.266699 | orchestrator | changed: [testbed-node-0] 2026-02-08 03:48:26.266707 | orchestrator | 2026-02-08 03:48:26.266716 | orchestrator | TASK [ceph-mon : Slurp admin keyring] ****************************************** 2026-02-08 03:48:26.266724 | orchestrator | Sunday 08 February 2026 03:48:21 +0000 (0:00:01.328) 0:03:52.218 ******* 2026-02-08 03:48:26.266733 | orchestrator | ok: [testbed-node-0] 2026-02-08 03:48:26.266741 | orchestrator | 2026-02-08 03:48:26.266750 | orchestrator | TASK [ceph-mon : Copy admin keyring over to mons] ****************************** 2026-02-08 03:48:26.266758 | orchestrator | Sunday 08 February 2026 03:48:21 +0000 (0:00:00.720) 0:03:52.938 ******* 2026-02-08 03:48:26.266767 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-02-08 03:48:26.266776 | orchestrator | ok: [testbed-node-1 -> 
testbed-node-0(192.168.16.10)] => (item=None) 2026-02-08 03:48:26.266785 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-08 03:48:26.266793 | orchestrator | changed: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-02-08 03:48:26.266802 | orchestrator | ok: [testbed-node-1] => (item=None) 2026-02-08 03:48:26.266810 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-02-08 03:48:26.266819 | orchestrator | changed: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-02-08 03:48:26.266827 | orchestrator | changed: [testbed-node-0 -> {{ item }}] 2026-02-08 03:48:26.266836 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-02-08 03:48:26.266845 | orchestrator | ok: [testbed-node-1 -> {{ item }}] 2026-02-08 03:48:26.266853 | orchestrator | ok: [testbed-node-2] => (item=None) 2026-02-08 03:48:26.266862 | orchestrator | ok: [testbed-node-2 -> {{ item }}] 2026-02-08 03:48:26.266870 | orchestrator | 2026-02-08 03:48:26.266879 | orchestrator | TASK [ceph-mon : Import admin keyring into mon keyring] ************************ 2026-02-08 03:48:26.266888 | orchestrator | Sunday 08 February 2026 03:48:25 +0000 (0:00:03.131) 0:03:56.070 ******* 2026-02-08 03:48:26.266896 | orchestrator | changed: [testbed-node-1] 2026-02-08 03:48:26.266905 | orchestrator | changed: [testbed-node-0] 2026-02-08 03:48:26.266922 | orchestrator | changed: [testbed-node-2] 2026-02-08 03:48:26.266937 | orchestrator | 2026-02-08 03:48:26.266952 | orchestrator | TASK [ceph-mon : Set_fact ceph-mon container command] ************************** 2026-02-08 03:48:26.266978 | orchestrator | Sunday 08 February 2026 03:48:26 +0000 (0:00:01.172) 0:03:57.242 ******* 2026-02-08 03:49:27.871551 | orchestrator | ok: [testbed-node-0] 2026-02-08 03:49:27.871651 | orchestrator | ok: [testbed-node-1] 2026-02-08 03:49:27.871662 | orchestrator | ok: [testbed-node-2] 
2026-02-08 03:49:27.871670 | orchestrator | 2026-02-08 03:49:27.871680 | orchestrator | TASK [ceph-mon : Set_fact monmaptool container command] ************************ 2026-02-08 03:49:27.871689 | orchestrator | Sunday 08 February 2026 03:48:26 +0000 (0:00:00.649) 0:03:57.892 ******* 2026-02-08 03:49:27.871696 | orchestrator | ok: [testbed-node-0] 2026-02-08 03:49:27.871704 | orchestrator | ok: [testbed-node-1] 2026-02-08 03:49:27.871711 | orchestrator | ok: [testbed-node-2] 2026-02-08 03:49:27.871717 | orchestrator | 2026-02-08 03:49:27.871738 | orchestrator | TASK [ceph-mon : Generate initial monmap] ************************************** 2026-02-08 03:49:27.871746 | orchestrator | Sunday 08 February 2026 03:48:27 +0000 (0:00:00.357) 0:03:58.249 ******* 2026-02-08 03:49:27.871754 | orchestrator | changed: [testbed-node-0] 2026-02-08 03:49:27.871762 | orchestrator | changed: [testbed-node-1] 2026-02-08 03:49:27.871769 | orchestrator | changed: [testbed-node-2] 2026-02-08 03:49:27.871777 | orchestrator | 2026-02-08 03:49:27.871784 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs with keyring] ******************************* 2026-02-08 03:49:27.871791 | orchestrator | Sunday 08 February 2026 03:48:28 +0000 (0:00:01.514) 0:03:59.764 ******* 2026-02-08 03:49:27.871798 | orchestrator | changed: [testbed-node-0] 2026-02-08 03:49:27.871805 | orchestrator | changed: [testbed-node-1] 2026-02-08 03:49:27.871813 | orchestrator | changed: [testbed-node-2] 2026-02-08 03:49:27.871821 | orchestrator | 2026-02-08 03:49:27.871828 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs without keyring] **************************** 2026-02-08 03:49:27.871835 | orchestrator | Sunday 08 February 2026 03:48:30 +0000 (0:00:01.287) 0:04:01.051 ******* 2026-02-08 03:49:27.871842 | orchestrator | skipping: [testbed-node-0] 2026-02-08 03:49:27.871849 | orchestrator | skipping: [testbed-node-1] 2026-02-08 03:49:27.871856 | orchestrator | skipping: [testbed-node-2] 2026-02-08 03:49:27.871862 
| orchestrator | 2026-02-08 03:49:27.871869 | orchestrator | TASK [ceph-mon : Include start_monitor.yml] ************************************ 2026-02-08 03:49:27.871876 | orchestrator | Sunday 08 February 2026 03:48:30 +0000 (0:00:00.625) 0:04:01.677 ******* 2026-02-08 03:49:27.871884 | orchestrator | included: /ansible/roles/ceph-mon/tasks/start_monitor.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-08 03:49:27.871891 | orchestrator | 2026-02-08 03:49:27.871898 | orchestrator | TASK [ceph-mon : Ensure systemd service override directory exists] ************* 2026-02-08 03:49:27.871905 | orchestrator | Sunday 08 February 2026 03:48:31 +0000 (0:00:00.586) 0:04:02.263 ******* 2026-02-08 03:49:27.871911 | orchestrator | skipping: [testbed-node-0] 2026-02-08 03:49:27.871918 | orchestrator | skipping: [testbed-node-1] 2026-02-08 03:49:27.871926 | orchestrator | skipping: [testbed-node-2] 2026-02-08 03:49:27.871934 | orchestrator | 2026-02-08 03:49:27.871940 | orchestrator | TASK [ceph-mon : Add ceph-mon systemd service overrides] *********************** 2026-02-08 03:49:27.871947 | orchestrator | Sunday 08 February 2026 03:48:31 +0000 (0:00:00.382) 0:04:02.647 ******* 2026-02-08 03:49:27.871954 | orchestrator | skipping: [testbed-node-0] 2026-02-08 03:49:27.871960 | orchestrator | skipping: [testbed-node-1] 2026-02-08 03:49:27.871967 | orchestrator | skipping: [testbed-node-2] 2026-02-08 03:49:27.871974 | orchestrator | 2026-02-08 03:49:27.871981 | orchestrator | TASK [ceph-mon : Include_tasks systemd.yml] ************************************ 2026-02-08 03:49:27.871988 | orchestrator | Sunday 08 February 2026 03:48:32 +0000 (0:00:00.631) 0:04:03.278 ******* 2026-02-08 03:49:27.871995 | orchestrator | included: /ansible/roles/ceph-mon/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-08 03:49:27.872003 | orchestrator | 2026-02-08 03:49:27.872010 | orchestrator | TASK [ceph-mon : Generate systemd unit file for mon container] 
***************** 2026-02-08 03:49:27.872041 | orchestrator | Sunday 08 February 2026 03:48:32 +0000 (0:00:00.658) 0:04:03.937 ******* 2026-02-08 03:49:27.872048 | orchestrator | changed: [testbed-node-0] 2026-02-08 03:49:27.872054 | orchestrator | changed: [testbed-node-1] 2026-02-08 03:49:27.872062 | orchestrator | changed: [testbed-node-2] 2026-02-08 03:49:27.872069 | orchestrator | 2026-02-08 03:49:27.872075 | orchestrator | TASK [ceph-mon : Generate systemd ceph-mon target file] ************************ 2026-02-08 03:49:27.872082 | orchestrator | Sunday 08 February 2026 03:48:34 +0000 (0:00:01.866) 0:04:05.804 ******* 2026-02-08 03:49:27.872088 | orchestrator | changed: [testbed-node-0] 2026-02-08 03:49:27.872096 | orchestrator | changed: [testbed-node-1] 2026-02-08 03:49:27.872103 | orchestrator | changed: [testbed-node-2] 2026-02-08 03:49:27.872111 | orchestrator | 2026-02-08 03:49:27.872117 | orchestrator | TASK [ceph-mon : Enable ceph-mon.target] *************************************** 2026-02-08 03:49:27.872124 | orchestrator | Sunday 08 February 2026 03:48:36 +0000 (0:00:01.492) 0:04:07.296 ******* 2026-02-08 03:49:27.872131 | orchestrator | changed: [testbed-node-0] 2026-02-08 03:49:27.872138 | orchestrator | changed: [testbed-node-1] 2026-02-08 03:49:27.872146 | orchestrator | changed: [testbed-node-2] 2026-02-08 03:49:27.872152 | orchestrator | 2026-02-08 03:49:27.872159 | orchestrator | TASK [ceph-mon : Start the monitor service] ************************************ 2026-02-08 03:49:27.872167 | orchestrator | Sunday 08 February 2026 03:48:38 +0000 (0:00:01.816) 0:04:09.112 ******* 2026-02-08 03:49:27.872174 | orchestrator | changed: [testbed-node-0] 2026-02-08 03:49:27.872181 | orchestrator | changed: [testbed-node-1] 2026-02-08 03:49:27.872188 | orchestrator | changed: [testbed-node-2] 2026-02-08 03:49:27.872195 | orchestrator | 2026-02-08 03:49:27.872230 | orchestrator | TASK [ceph-mon : Include_tasks ceph_keys.yml] 
********************************** 2026-02-08 03:49:27.872239 | orchestrator | Sunday 08 February 2026 03:48:40 +0000 (0:00:01.994) 0:04:11.106 ******* 2026-02-08 03:49:27.872246 | orchestrator | included: /ansible/roles/ceph-mon/tasks/ceph_keys.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-08 03:49:27.872254 | orchestrator | 2026-02-08 03:49:27.872261 | orchestrator | TASK [ceph-mon : Waiting for the monitor(s) to form the quorum...] ************* 2026-02-08 03:49:27.872268 | orchestrator | Sunday 08 February 2026 03:48:41 +0000 (0:00:00.929) 0:04:12.035 ******* 2026-02-08 03:49:27.872276 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for the monitor(s) to form the quorum... (10 retries left). 2026-02-08 03:49:27.872283 | orchestrator | ok: [testbed-node-0] 2026-02-08 03:49:27.872290 | orchestrator | 2026-02-08 03:49:27.872297 | orchestrator | TASK [ceph-mon : Fetch ceph initial keys] ************************************** 2026-02-08 03:49:27.872331 | orchestrator | Sunday 08 February 2026 03:49:02 +0000 (0:00:21.809) 0:04:33.844 ******* 2026-02-08 03:49:27.872339 | orchestrator | ok: [testbed-node-1] 2026-02-08 03:49:27.872347 | orchestrator | ok: [testbed-node-2] 2026-02-08 03:49:27.872354 | orchestrator | ok: [testbed-node-0] 2026-02-08 03:49:27.872361 | orchestrator | 2026-02-08 03:49:27.872368 | orchestrator | TASK [ceph-mon : Include secure_cluster.yml] *********************************** 2026-02-08 03:49:27.872375 | orchestrator | Sunday 08 February 2026 03:49:11 +0000 (0:00:08.688) 0:04:42.533 ******* 2026-02-08 03:49:27.872382 | orchestrator | skipping: [testbed-node-0] 2026-02-08 03:49:27.872395 | orchestrator | skipping: [testbed-node-1] 2026-02-08 03:49:27.872402 | orchestrator | skipping: [testbed-node-2] 2026-02-08 03:49:27.872409 | orchestrator | 2026-02-08 03:49:27.872416 | orchestrator | TASK [ceph-mon : Set cluster configs] ****************************************** 2026-02-08 03:49:27.872423 | orchestrator | 
Sunday 08 February 2026 03:49:11 +0000 (0:00:00.342) 0:04:42.876 ******* 2026-02-08 03:49:27.872433 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__51235e5960e1e5cb4b9c4a4d3e6ce9ba4c4025ff'}}, {'key': 'public_network', 'value': '192.168.16.0/20'}]) 2026-02-08 03:49:27.872451 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__51235e5960e1e5cb4b9c4a4d3e6ce9ba4c4025ff'}}, {'key': 'cluster_network', 'value': '192.168.16.0/20'}]) 2026-02-08 03:49:27.872460 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__51235e5960e1e5cb4b9c4a4d3e6ce9ba4c4025ff'}}, {'key': 'osd_pool_default_crush_rule', 'value': -1}]) 2026-02-08 03:49:27.872469 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__51235e5960e1e5cb4b9c4a4d3e6ce9ba4c4025ff'}}, {'key': 'ms_bind_ipv6', 'value': 'False'}]) 2026-02-08 03:49:27.872478 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 
'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__51235e5960e1e5cb4b9c4a4d3e6ce9ba4c4025ff'}}, {'key': 'ms_bind_ipv4', 'value': 'True'}]) 2026-02-08 03:49:27.872486 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__51235e5960e1e5cb4b9c4a4d3e6ce9ba4c4025ff'}}, {'key': 'osd_crush_chooseleaf_type', 'value': '__omit_place_holder__51235e5960e1e5cb4b9c4a4d3e6ce9ba4c4025ff'}])  2026-02-08 03:49:27.872495 | orchestrator | 2026-02-08 03:49:27.872502 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-02-08 03:49:27.872509 | orchestrator | Sunday 08 February 2026 03:49:25 +0000 (0:00:14.075) 0:04:56.951 ******* 2026-02-08 03:49:27.872516 | orchestrator | skipping: [testbed-node-0] 2026-02-08 03:49:27.872523 | orchestrator | skipping: [testbed-node-1] 2026-02-08 03:49:27.872530 | orchestrator | skipping: [testbed-node-2] 2026-02-08 03:49:27.872536 | orchestrator | 2026-02-08 03:49:27.872543 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] ********************************** 2026-02-08 03:49:27.872550 | orchestrator | Sunday 08 February 2026 03:49:26 +0000 (0:00:00.372) 0:04:57.323 ******* 2026-02-08 03:49:27.872556 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-08 03:49:27.872563 | orchestrator | 2026-02-08 03:49:27.872570 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ******** 2026-02-08 03:49:27.872576 | orchestrator | Sunday 08 February 2026 03:49:27 +0000 (0:00:00.838) 0:04:58.162 ******* 2026-02-08 03:49:27.872582 | orchestrator | ok: [testbed-node-0] 2026-02-08 03:49:27.872589 | orchestrator | ok: [testbed-node-1] 2026-02-08 
03:49:27.872596 | orchestrator | ok: [testbed-node-2] 2026-02-08 03:49:27.872603 | orchestrator | 2026-02-08 03:49:27.872609 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mon restart script] *********************** 2026-02-08 03:49:27.872616 | orchestrator | Sunday 08 February 2026 03:49:27 +0000 (0:00:00.344) 0:04:58.507 ******* 2026-02-08 03:49:27.872623 | orchestrator | skipping: [testbed-node-0] 2026-02-08 03:49:27.872630 | orchestrator | skipping: [testbed-node-1] 2026-02-08 03:49:27.872637 | orchestrator | skipping: [testbed-node-2] 2026-02-08 03:49:27.872644 | orchestrator | 2026-02-08 03:49:27.872655 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ******************** 2026-02-08 03:49:54.495741 | orchestrator | Sunday 08 February 2026 03:49:27 +0000 (0:00:00.342) 0:04:58.849 ******* 2026-02-08 03:49:54.495872 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-02-08 03:49:54.495887 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-02-08 03:49:54.495896 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-02-08 03:49:54.495905 | orchestrator | skipping: [testbed-node-0] 2026-02-08 03:49:54.495914 | orchestrator | 2026-02-08 03:49:54.495938 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] ********* 2026-02-08 03:49:54.495947 | orchestrator | Sunday 08 February 2026 03:49:28 +0000 (0:00:00.965) 0:04:59.815 ******* 2026-02-08 03:49:54.495956 | orchestrator | ok: [testbed-node-0] 2026-02-08 03:49:54.495965 | orchestrator | ok: [testbed-node-1] 2026-02-08 03:49:54.495974 | orchestrator | ok: [testbed-node-2] 2026-02-08 03:49:54.495983 | orchestrator | 2026-02-08 03:49:54.495991 | orchestrator | PLAY [Apply role ceph-mgr] ***************************************************** 2026-02-08 03:49:54.496000 | orchestrator | 2026-02-08 03:49:54.496008 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] 
************************ 2026-02-08 03:49:54.496017 | orchestrator | Sunday 08 February 2026 03:49:29 +0000 (0:00:00.892) 0:05:00.707 ******* 2026-02-08 03:49:54.496026 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-08 03:49:54.496037 | orchestrator | 2026-02-08 03:49:54.496045 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-02-08 03:49:54.496054 | orchestrator | Sunday 08 February 2026 03:49:30 +0000 (0:00:00.587) 0:05:01.295 ******* 2026-02-08 03:49:54.496063 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-08 03:49:54.496071 | orchestrator | 2026-02-08 03:49:54.496080 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-02-08 03:49:54.496089 | orchestrator | Sunday 08 February 2026 03:49:31 +0000 (0:00:01.022) 0:05:02.318 ******* 2026-02-08 03:49:54.496097 | orchestrator | ok: [testbed-node-0] 2026-02-08 03:49:54.496106 | orchestrator | ok: [testbed-node-1] 2026-02-08 03:49:54.496117 | orchestrator | ok: [testbed-node-2] 2026-02-08 03:49:54.496132 | orchestrator | 2026-02-08 03:49:54.496145 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-02-08 03:49:54.496158 | orchestrator | Sunday 08 February 2026 03:49:32 +0000 (0:00:00.874) 0:05:03.193 ******* 2026-02-08 03:49:54.496171 | orchestrator | skipping: [testbed-node-0] 2026-02-08 03:49:54.496183 | orchestrator | skipping: [testbed-node-1] 2026-02-08 03:49:54.496196 | orchestrator | skipping: [testbed-node-2] 2026-02-08 03:49:54.496208 | orchestrator | 2026-02-08 03:49:54.496289 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-02-08 03:49:54.496304 | orchestrator | Sunday 08 February 2026 03:49:32 +0000 
(0:00:00.321) 0:05:03.514 ******* 2026-02-08 03:49:54.496319 | orchestrator | skipping: [testbed-node-0] 2026-02-08 03:49:54.496336 | orchestrator | skipping: [testbed-node-1] 2026-02-08 03:49:54.496351 | orchestrator | skipping: [testbed-node-2] 2026-02-08 03:49:54.496379 | orchestrator | 2026-02-08 03:49:54.496394 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-02-08 03:49:54.496411 | orchestrator | Sunday 08 February 2026 03:49:33 +0000 (0:00:00.604) 0:05:04.118 ******* 2026-02-08 03:49:54.496427 | orchestrator | skipping: [testbed-node-0] 2026-02-08 03:49:54.496442 | orchestrator | skipping: [testbed-node-1] 2026-02-08 03:49:54.496457 | orchestrator | skipping: [testbed-node-2] 2026-02-08 03:49:54.496467 | orchestrator | 2026-02-08 03:49:54.496477 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-02-08 03:49:54.496488 | orchestrator | Sunday 08 February 2026 03:49:33 +0000 (0:00:00.336) 0:05:04.455 ******* 2026-02-08 03:49:54.496498 | orchestrator | ok: [testbed-node-0] 2026-02-08 03:49:54.496508 | orchestrator | ok: [testbed-node-1] 2026-02-08 03:49:54.496518 | orchestrator | ok: [testbed-node-2] 2026-02-08 03:49:54.496528 | orchestrator | 2026-02-08 03:49:54.496539 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-02-08 03:49:54.496560 | orchestrator | Sunday 08 February 2026 03:49:34 +0000 (0:00:00.739) 0:05:05.195 ******* 2026-02-08 03:49:54.496570 | orchestrator | skipping: [testbed-node-0] 2026-02-08 03:49:54.496580 | orchestrator | skipping: [testbed-node-1] 2026-02-08 03:49:54.496591 | orchestrator | skipping: [testbed-node-2] 2026-02-08 03:49:54.496599 | orchestrator | 2026-02-08 03:49:54.496608 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-02-08 03:49:54.496617 | orchestrator | Sunday 08 February 2026 03:49:34 +0000 (0:00:00.336) 
0:05:05.531 ******* 2026-02-08 03:49:54.496625 | orchestrator | skipping: [testbed-node-0] 2026-02-08 03:49:54.496634 | orchestrator | skipping: [testbed-node-1] 2026-02-08 03:49:54.496648 | orchestrator | skipping: [testbed-node-2] 2026-02-08 03:49:54.496661 | orchestrator | 2026-02-08 03:49:54.496673 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-02-08 03:49:54.496686 | orchestrator | Sunday 08 February 2026 03:49:35 +0000 (0:00:00.603) 0:05:06.134 ******* 2026-02-08 03:49:54.496700 | orchestrator | ok: [testbed-node-0] 2026-02-08 03:49:54.496714 | orchestrator | ok: [testbed-node-1] 2026-02-08 03:49:54.496727 | orchestrator | ok: [testbed-node-2] 2026-02-08 03:49:54.496740 | orchestrator | 2026-02-08 03:49:54.496754 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-02-08 03:49:54.496768 | orchestrator | Sunday 08 February 2026 03:49:35 +0000 (0:00:00.730) 0:05:06.865 ******* 2026-02-08 03:49:54.496784 | orchestrator | ok: [testbed-node-0] 2026-02-08 03:49:54.496798 | orchestrator | ok: [testbed-node-1] 2026-02-08 03:49:54.496812 | orchestrator | ok: [testbed-node-2] 2026-02-08 03:49:54.496822 | orchestrator | 2026-02-08 03:49:54.496831 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-02-08 03:49:54.496839 | orchestrator | Sunday 08 February 2026 03:49:36 +0000 (0:00:00.723) 0:05:07.589 ******* 2026-02-08 03:49:54.496848 | orchestrator | skipping: [testbed-node-0] 2026-02-08 03:49:54.496858 | orchestrator | skipping: [testbed-node-1] 2026-02-08 03:49:54.496866 | orchestrator | skipping: [testbed-node-2] 2026-02-08 03:49:54.496875 | orchestrator | 2026-02-08 03:49:54.496883 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-02-08 03:49:54.496910 | orchestrator | Sunday 08 February 2026 03:49:36 +0000 (0:00:00.344) 0:05:07.934 ******* 2026-02-08 
03:49:54.496920 | orchestrator | ok: [testbed-node-0] 2026-02-08 03:49:54.496928 | orchestrator | ok: [testbed-node-1] 2026-02-08 03:49:54.496937 | orchestrator | ok: [testbed-node-2] 2026-02-08 03:49:54.496945 | orchestrator | 2026-02-08 03:49:54.496954 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-02-08 03:49:54.496962 | orchestrator | Sunday 08 February 2026 03:49:37 +0000 (0:00:00.632) 0:05:08.567 ******* 2026-02-08 03:49:54.496971 | orchestrator | skipping: [testbed-node-0] 2026-02-08 03:49:54.496979 | orchestrator | skipping: [testbed-node-1] 2026-02-08 03:49:54.496997 | orchestrator | skipping: [testbed-node-2] 2026-02-08 03:49:54.497006 | orchestrator | 2026-02-08 03:49:54.497014 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-02-08 03:49:54.497023 | orchestrator | Sunday 08 February 2026 03:49:37 +0000 (0:00:00.353) 0:05:08.920 ******* 2026-02-08 03:49:54.497031 | orchestrator | skipping: [testbed-node-0] 2026-02-08 03:49:54.497040 | orchestrator | skipping: [testbed-node-1] 2026-02-08 03:49:54.497048 | orchestrator | skipping: [testbed-node-2] 2026-02-08 03:49:54.497057 | orchestrator | 2026-02-08 03:49:54.497065 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-02-08 03:49:54.497074 | orchestrator | Sunday 08 February 2026 03:49:38 +0000 (0:00:00.367) 0:05:09.287 ******* 2026-02-08 03:49:54.497082 | orchestrator | skipping: [testbed-node-0] 2026-02-08 03:49:54.497091 | orchestrator | skipping: [testbed-node-1] 2026-02-08 03:49:54.497099 | orchestrator | skipping: [testbed-node-2] 2026-02-08 03:49:54.497108 | orchestrator | 2026-02-08 03:49:54.497116 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-02-08 03:49:54.497125 | orchestrator | Sunday 08 February 2026 03:49:38 +0000 (0:00:00.347) 0:05:09.634 ******* 2026-02-08 03:49:54.497141 | 
orchestrator | skipping: [testbed-node-0] 2026-02-08 03:49:54.497149 | orchestrator | skipping: [testbed-node-1] 2026-02-08 03:49:54.497158 | orchestrator | skipping: [testbed-node-2] 2026-02-08 03:49:54.497166 | orchestrator | 2026-02-08 03:49:54.497175 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-02-08 03:49:54.497189 | orchestrator | Sunday 08 February 2026 03:49:39 +0000 (0:00:00.609) 0:05:10.244 ******* 2026-02-08 03:49:54.497203 | orchestrator | skipping: [testbed-node-0] 2026-02-08 03:49:54.497234 | orchestrator | skipping: [testbed-node-1] 2026-02-08 03:49:54.497250 | orchestrator | skipping: [testbed-node-2] 2026-02-08 03:49:54.497265 | orchestrator | 2026-02-08 03:49:54.497280 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-02-08 03:49:54.497294 | orchestrator | Sunday 08 February 2026 03:49:39 +0000 (0:00:00.348) 0:05:10.593 ******* 2026-02-08 03:49:54.497309 | orchestrator | ok: [testbed-node-0] 2026-02-08 03:49:54.497319 | orchestrator | ok: [testbed-node-1] 2026-02-08 03:49:54.497328 | orchestrator | ok: [testbed-node-2] 2026-02-08 03:49:54.497336 | orchestrator | 2026-02-08 03:49:54.497349 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-02-08 03:49:54.497362 | orchestrator | Sunday 08 February 2026 03:49:39 +0000 (0:00:00.352) 0:05:10.945 ******* 2026-02-08 03:49:54.497375 | orchestrator | ok: [testbed-node-0] 2026-02-08 03:49:54.497388 | orchestrator | ok: [testbed-node-1] 2026-02-08 03:49:54.497402 | orchestrator | ok: [testbed-node-2] 2026-02-08 03:49:54.497417 | orchestrator | 2026-02-08 03:49:54.497433 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-02-08 03:49:54.497448 | orchestrator | Sunday 08 February 2026 03:49:40 +0000 (0:00:00.383) 0:05:11.328 ******* 2026-02-08 03:49:54.497464 | orchestrator | ok: [testbed-node-0] 
2026-02-08 03:49:54.497479 | orchestrator | ok: [testbed-node-1] 2026-02-08 03:49:54.497494 | orchestrator | ok: [testbed-node-2] 2026-02-08 03:49:54.497503 | orchestrator | 2026-02-08 03:49:54.497512 | orchestrator | TASK [ceph-mgr : Set_fact container_exec_cmd] ********************************** 2026-02-08 03:49:54.497520 | orchestrator | Sunday 08 February 2026 03:49:41 +0000 (0:00:00.886) 0:05:12.215 ******* 2026-02-08 03:49:54.497529 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-02-08 03:49:54.497538 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-08 03:49:54.497548 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-08 03:49:54.497556 | orchestrator | 2026-02-08 03:49:54.497565 | orchestrator | TASK [ceph-mgr : Include common.yml] ******************************************* 2026-02-08 03:49:54.497573 | orchestrator | Sunday 08 February 2026 03:49:41 +0000 (0:00:00.711) 0:05:12.927 ******* 2026-02-08 03:49:54.497582 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/common.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-08 03:49:54.497591 | orchestrator | 2026-02-08 03:49:54.497599 | orchestrator | TASK [ceph-mgr : Create mgr directory] ***************************************** 2026-02-08 03:49:54.497608 | orchestrator | Sunday 08 February 2026 03:49:42 +0000 (0:00:00.561) 0:05:13.488 ******* 2026-02-08 03:49:54.497616 | orchestrator | changed: [testbed-node-0] 2026-02-08 03:49:54.497625 | orchestrator | changed: [testbed-node-1] 2026-02-08 03:49:54.497633 | orchestrator | changed: [testbed-node-2] 2026-02-08 03:49:54.497642 | orchestrator | 2026-02-08 03:49:54.497650 | orchestrator | TASK [ceph-mgr : Fetch ceph mgr keyring] *************************************** 2026-02-08 03:49:54.497659 | orchestrator | Sunday 08 February 2026 03:49:43 +0000 (0:00:01.027) 0:05:14.515 ******* 2026-02-08 03:49:54.497667 | 
orchestrator | skipping: [testbed-node-0] 2026-02-08 03:49:54.497676 | orchestrator | skipping: [testbed-node-1] 2026-02-08 03:49:54.497684 | orchestrator | skipping: [testbed-node-2] 2026-02-08 03:49:54.497693 | orchestrator | 2026-02-08 03:49:54.497702 | orchestrator | TASK [ceph-mgr : Create ceph mgr keyring(s) on a mon node] ********************* 2026-02-08 03:49:54.497710 | orchestrator | Sunday 08 February 2026 03:49:43 +0000 (0:00:00.381) 0:05:14.896 ******* 2026-02-08 03:49:54.497726 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-02-08 03:49:54.497735 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-02-08 03:49:54.497744 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-02-08 03:49:54.497753 | orchestrator | changed: [testbed-node-0 -> {{ groups[mon_group_name][0] }}] 2026-02-08 03:49:54.497761 | orchestrator | 2026-02-08 03:49:54.497770 | orchestrator | TASK [ceph-mgr : Set_fact _mgr_keys] ******************************************* 2026-02-08 03:49:54.497782 | orchestrator | Sunday 08 February 2026 03:49:54 +0000 (0:00:10.198) 0:05:25.095 ******* 2026-02-08 03:49:54.497796 | orchestrator | ok: [testbed-node-0] 2026-02-08 03:49:54.497819 | orchestrator | ok: [testbed-node-1] 2026-02-08 03:50:50.164472 | orchestrator | ok: [testbed-node-2] 2026-02-08 03:50:50.164602 | orchestrator | 2026-02-08 03:50:50.164630 | orchestrator | TASK [ceph-mgr : Get keys from monitors] *************************************** 2026-02-08 03:50:50.164669 | orchestrator | Sunday 08 February 2026 03:49:54 +0000 (0:00:00.387) 0:05:25.482 ******* 2026-02-08 03:50:50.164697 | orchestrator | skipping: [testbed-node-0] => (item=None)  2026-02-08 03:50:50.164709 | orchestrator | skipping: [testbed-node-1] => (item=None)  2026-02-08 03:50:50.164737 | orchestrator | skipping: [testbed-node-2] => (item=None)  2026-02-08 03:50:50.164749 | orchestrator | ok: [testbed-node-0] => (item=None) 2026-02-08 03:50:50.164760 | orchestrator | ok: 
[testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-08 03:50:50.164771 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-08 03:50:50.164783 | orchestrator | 2026-02-08 03:50:50.164793 | orchestrator | TASK [ceph-mgr : Copy ceph key(s) if needed] *********************************** 2026-02-08 03:50:50.164804 | orchestrator | Sunday 08 February 2026 03:49:56 +0000 (0:00:02.470) 0:05:27.953 ******* 2026-02-08 03:50:50.164815 | orchestrator | skipping: [testbed-node-0] => (item=None)  2026-02-08 03:50:50.164826 | orchestrator | skipping: [testbed-node-1] => (item=None)  2026-02-08 03:50:50.164837 | orchestrator | skipping: [testbed-node-2] => (item=None)  2026-02-08 03:50:50.164847 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-02-08 03:50:50.164858 | orchestrator | changed: [testbed-node-1] => (item=None) 2026-02-08 03:50:50.164869 | orchestrator | changed: [testbed-node-2] => (item=None) 2026-02-08 03:50:50.164879 | orchestrator | 2026-02-08 03:50:50.164890 | orchestrator | TASK [ceph-mgr : Set mgr key permissions] ************************************** 2026-02-08 03:50:50.164901 | orchestrator | Sunday 08 February 2026 03:49:58 +0000 (0:00:01.274) 0:05:29.228 ******* 2026-02-08 03:50:50.164914 | orchestrator | ok: [testbed-node-0] 2026-02-08 03:50:50.164927 | orchestrator | ok: [testbed-node-1] 2026-02-08 03:50:50.164939 | orchestrator | ok: [testbed-node-2] 2026-02-08 03:50:50.164952 | orchestrator | 2026-02-08 03:50:50.164964 | orchestrator | TASK [ceph-mgr : Append dashboard modules to ceph_mgr_modules] ***************** 2026-02-08 03:50:50.164977 | orchestrator | Sunday 08 February 2026 03:49:58 +0000 (0:00:00.711) 0:05:29.939 ******* 2026-02-08 03:50:50.164990 | orchestrator | skipping: [testbed-node-0] 2026-02-08 03:50:50.165002 | orchestrator | skipping: [testbed-node-1] 2026-02-08 03:50:50.165014 | orchestrator | skipping: [testbed-node-2] 2026-02-08 03:50:50.165027 | 
orchestrator | 2026-02-08 03:50:50.165039 | orchestrator | TASK [ceph-mgr : Include pre_requisite.yml] ************************************ 2026-02-08 03:50:50.165052 | orchestrator | Sunday 08 February 2026 03:49:59 +0000 (0:00:00.374) 0:05:30.314 ******* 2026-02-08 03:50:50.165063 | orchestrator | skipping: [testbed-node-0] 2026-02-08 03:50:50.165074 | orchestrator | skipping: [testbed-node-1] 2026-02-08 03:50:50.165084 | orchestrator | skipping: [testbed-node-2] 2026-02-08 03:50:50.165094 | orchestrator | 2026-02-08 03:50:50.165104 | orchestrator | TASK [ceph-mgr : Include start_mgr.yml] **************************************** 2026-02-08 03:50:50.165114 | orchestrator | Sunday 08 February 2026 03:49:59 +0000 (0:00:00.599) 0:05:30.914 ******* 2026-02-08 03:50:50.165124 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/start_mgr.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-08 03:50:50.165134 | orchestrator | 2026-02-08 03:50:50.165144 | orchestrator | TASK [ceph-mgr : Ensure systemd service override directory exists] ************* 2026-02-08 03:50:50.165187 | orchestrator | Sunday 08 February 2026 03:50:00 +0000 (0:00:00.561) 0:05:31.476 ******* 2026-02-08 03:50:50.165205 | orchestrator | skipping: [testbed-node-0] 2026-02-08 03:50:50.165220 | orchestrator | skipping: [testbed-node-1] 2026-02-08 03:50:50.165238 | orchestrator | skipping: [testbed-node-2] 2026-02-08 03:50:50.165320 | orchestrator | 2026-02-08 03:50:50.165339 | orchestrator | TASK [ceph-mgr : Add ceph-mgr systemd service overrides] *********************** 2026-02-08 03:50:50.165354 | orchestrator | Sunday 08 February 2026 03:50:00 +0000 (0:00:00.352) 0:05:31.829 ******* 2026-02-08 03:50:50.165373 | orchestrator | skipping: [testbed-node-0] 2026-02-08 03:50:50.165394 | orchestrator | skipping: [testbed-node-1] 2026-02-08 03:50:50.165413 | orchestrator | skipping: [testbed-node-2] 2026-02-08 03:50:50.165429 | orchestrator | 2026-02-08 03:50:50.165444 | orchestrator | TASK 
[ceph-mgr : Include_tasks systemd.yml] ************************************ 2026-02-08 03:50:50.165461 | orchestrator | Sunday 08 February 2026 03:50:01 +0000 (0:00:00.631) 0:05:32.460 ******* 2026-02-08 03:50:50.165477 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-08 03:50:50.165494 | orchestrator | 2026-02-08 03:50:50.165510 | orchestrator | TASK [ceph-mgr : Generate systemd unit file] *********************************** 2026-02-08 03:50:50.165526 | orchestrator | Sunday 08 February 2026 03:50:02 +0000 (0:00:00.633) 0:05:33.093 ******* 2026-02-08 03:50:50.165541 | orchestrator | changed: [testbed-node-0] 2026-02-08 03:50:50.165557 | orchestrator | changed: [testbed-node-1] 2026-02-08 03:50:50.165572 | orchestrator | changed: [testbed-node-2] 2026-02-08 03:50:50.165589 | orchestrator | 2026-02-08 03:50:50.165606 | orchestrator | TASK [ceph-mgr : Generate systemd ceph-mgr target file] ************************ 2026-02-08 03:50:50.165621 | orchestrator | Sunday 08 February 2026 03:50:03 +0000 (0:00:01.236) 0:05:34.329 ******* 2026-02-08 03:50:50.165638 | orchestrator | changed: [testbed-node-0] 2026-02-08 03:50:50.165653 | orchestrator | changed: [testbed-node-1] 2026-02-08 03:50:50.165669 | orchestrator | changed: [testbed-node-2] 2026-02-08 03:50:50.165684 | orchestrator | 2026-02-08 03:50:50.165700 | orchestrator | TASK [ceph-mgr : Enable ceph-mgr.target] *************************************** 2026-02-08 03:50:50.165716 | orchestrator | Sunday 08 February 2026 03:50:04 +0000 (0:00:01.491) 0:05:35.821 ******* 2026-02-08 03:50:50.165731 | orchestrator | changed: [testbed-node-0] 2026-02-08 03:50:50.165747 | orchestrator | changed: [testbed-node-1] 2026-02-08 03:50:50.165762 | orchestrator | changed: [testbed-node-2] 2026-02-08 03:50:50.165778 | orchestrator | 2026-02-08 03:50:50.165794 | orchestrator | TASK [ceph-mgr : Systemd start mgr] ******************************************** 
2026-02-08 03:50:50.165929 | orchestrator | Sunday 08 February 2026 03:50:06 +0000 (0:00:01.810) 0:05:37.631 ******* 2026-02-08 03:50:50.165951 | orchestrator | changed: [testbed-node-0] 2026-02-08 03:50:50.165967 | orchestrator | changed: [testbed-node-1] 2026-02-08 03:50:50.165983 | orchestrator | changed: [testbed-node-2] 2026-02-08 03:50:50.165999 | orchestrator | 2026-02-08 03:50:50.166114 | orchestrator | TASK [ceph-mgr : Include mgr_modules.yml] ************************************** 2026-02-08 03:50:50.166139 | orchestrator | Sunday 08 February 2026 03:50:08 +0000 (0:00:01.969) 0:05:39.601 ******* 2026-02-08 03:50:50.166156 | orchestrator | skipping: [testbed-node-0] 2026-02-08 03:50:50.166171 | orchestrator | skipping: [testbed-node-1] 2026-02-08 03:50:50.166188 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/mgr_modules.yml for testbed-node-2 2026-02-08 03:50:50.166204 | orchestrator | 2026-02-08 03:50:50.166232 | orchestrator | TASK [ceph-mgr : Wait for all mgr to be up] ************************************ 2026-02-08 03:50:50.166280 | orchestrator | Sunday 08 February 2026 03:50:09 +0000 (0:00:00.708) 0:05:40.309 ******* 2026-02-08 03:50:50.166298 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (30 retries left). 2026-02-08 03:50:50.166315 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (29 retries left). 2026-02-08 03:50:50.166333 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (28 retries left). 2026-02-08 03:50:50.166367 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (27 retries left). 
2026-02-08 03:50:50.166385 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] 2026-02-08 03:50:50.166402 | orchestrator | 2026-02-08 03:50:50.166419 | orchestrator | TASK [ceph-mgr : Get enabled modules from ceph-mgr] **************************** 2026-02-08 03:50:50.166437 | orchestrator | Sunday 08 February 2026 03:50:33 +0000 (0:00:24.098) 0:06:04.408 ******* 2026-02-08 03:50:50.166455 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] 2026-02-08 03:50:50.166473 | orchestrator | 2026-02-08 03:50:50.166489 | orchestrator | TASK [ceph-mgr : Set _ceph_mgr_modules fact (convert _ceph_mgr_modules.stdout to a dict)] *** 2026-02-08 03:50:50.166506 | orchestrator | Sunday 08 February 2026 03:50:34 +0000 (0:00:01.204) 0:06:05.613 ******* 2026-02-08 03:50:50.166523 | orchestrator | ok: [testbed-node-2] 2026-02-08 03:50:50.166540 | orchestrator | 2026-02-08 03:50:50.166557 | orchestrator | TASK [ceph-mgr : Set _disabled_ceph_mgr_modules fact] ************************** 2026-02-08 03:50:50.166573 | orchestrator | Sunday 08 February 2026 03:50:34 +0000 (0:00:00.341) 0:06:05.954 ******* 2026-02-08 03:50:50.166590 | orchestrator | ok: [testbed-node-2] 2026-02-08 03:50:50.166607 | orchestrator | 2026-02-08 03:50:50.166624 | orchestrator | TASK [ceph-mgr : Disable ceph mgr enabled modules] ***************************** 2026-02-08 03:50:50.166642 | orchestrator | Sunday 08 February 2026 03:50:35 +0000 (0:00:00.173) 0:06:06.127 ******* 2026-02-08 03:50:50.166709 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=iostat) 2026-02-08 03:50:50.166732 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=nfs) 2026-02-08 03:50:50.166748 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=restful) 2026-02-08 03:50:50.166765 | orchestrator | 2026-02-08 03:50:50.166781 | orchestrator | TASK [ceph-mgr : Add modules to ceph-mgr] 
************************************** 2026-02-08 03:50:50.166797 | orchestrator | Sunday 08 February 2026 03:50:41 +0000 (0:00:06.363) 0:06:12.491 ******* 2026-02-08 03:50:50.166813 | orchestrator | skipping: [testbed-node-2] => (item=balancer)  2026-02-08 03:50:50.166830 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=dashboard) 2026-02-08 03:50:50.166846 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=prometheus) 2026-02-08 03:50:50.166863 | orchestrator | skipping: [testbed-node-2] => (item=status)  2026-02-08 03:50:50.166878 | orchestrator | 2026-02-08 03:50:50.166895 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-02-08 03:50:50.166911 | orchestrator | Sunday 08 February 2026 03:50:46 +0000 (0:00:04.903) 0:06:17.394 ******* 2026-02-08 03:50:50.166928 | orchestrator | changed: [testbed-node-0] 2026-02-08 03:50:50.166943 | orchestrator | changed: [testbed-node-1] 2026-02-08 03:50:50.166959 | orchestrator | changed: [testbed-node-2] 2026-02-08 03:50:50.166975 | orchestrator | 2026-02-08 03:50:50.166992 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] ********************************** 2026-02-08 03:50:50.167008 | orchestrator | Sunday 08 February 2026 03:50:47 +0000 (0:00:00.729) 0:06:18.124 ******* 2026-02-08 03:50:50.167024 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-08 03:50:50.167041 | orchestrator | 2026-02-08 03:50:50.167058 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ******** 2026-02-08 03:50:50.167074 | orchestrator | Sunday 08 February 2026 03:50:47 +0000 (0:00:00.574) 0:06:18.698 ******* 2026-02-08 03:50:50.167090 | orchestrator | ok: [testbed-node-0] 2026-02-08 03:50:50.167106 | orchestrator | ok: [testbed-node-1] 2026-02-08 03:50:50.167123 | orchestrator | ok: 
[testbed-node-2] 2026-02-08 03:50:50.167140 | orchestrator | 2026-02-08 03:50:50.167156 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mgr restart script] *********************** 2026-02-08 03:50:50.167172 | orchestrator | Sunday 08 February 2026 03:50:48 +0000 (0:00:00.616) 0:06:19.314 ******* 2026-02-08 03:50:50.167201 | orchestrator | changed: [testbed-node-0] 2026-02-08 03:50:50.167218 | orchestrator | changed: [testbed-node-1] 2026-02-08 03:50:50.167234 | orchestrator | changed: [testbed-node-2] 2026-02-08 03:50:50.167319 | orchestrator | 2026-02-08 03:50:50.167338 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ******************** 2026-02-08 03:50:50.167354 | orchestrator | Sunday 08 February 2026 03:50:49 +0000 (0:00:01.158) 0:06:20.473 ******* 2026-02-08 03:50:50.167370 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-02-08 03:50:50.167386 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-02-08 03:50:50.167402 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-02-08 03:50:50.167418 | orchestrator | skipping: [testbed-node-0] 2026-02-08 03:50:50.167434 | orchestrator | 2026-02-08 03:50:50.167449 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] ********* 2026-02-08 03:50:50.167479 | orchestrator | Sunday 08 February 2026 03:50:50 +0000 (0:00:00.675) 0:06:21.149 ******* 2026-02-08 03:51:08.585893 | orchestrator | ok: [testbed-node-0] 2026-02-08 03:51:08.586138 | orchestrator | ok: [testbed-node-1] 2026-02-08 03:51:08.586177 | orchestrator | ok: [testbed-node-2] 2026-02-08 03:51:08.586199 | orchestrator | 2026-02-08 03:51:08.586223 | orchestrator | PLAY [Apply role ceph-osd] ***************************************************** 2026-02-08 03:51:08.586248 | orchestrator | 2026-02-08 03:51:08.586299 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-02-08 
03:51:08.586344 | orchestrator | Sunday 08 February 2026 03:50:50 +0000 (0:00:00.610) 0:06:21.760 ******* 2026-02-08 03:51:08.586367 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-08 03:51:08.586390 | orchestrator | 2026-02-08 03:51:08.586411 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-02-08 03:51:08.586432 | orchestrator | Sunday 08 February 2026 03:50:51 +0000 (0:00:00.836) 0:06:22.597 ******* 2026-02-08 03:51:08.586450 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-08 03:51:08.586469 | orchestrator | 2026-02-08 03:51:08.586488 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-02-08 03:51:08.586507 | orchestrator | Sunday 08 February 2026 03:50:52 +0000 (0:00:00.785) 0:06:23.382 ******* 2026-02-08 03:51:08.586529 | orchestrator | skipping: [testbed-node-3] 2026-02-08 03:51:08.586551 | orchestrator | skipping: [testbed-node-4] 2026-02-08 03:51:08.586572 | orchestrator | skipping: [testbed-node-5] 2026-02-08 03:51:08.586592 | orchestrator | 2026-02-08 03:51:08.586611 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-02-08 03:51:08.586630 | orchestrator | Sunday 08 February 2026 03:50:52 +0000 (0:00:00.392) 0:06:23.775 ******* 2026-02-08 03:51:08.586648 | orchestrator | ok: [testbed-node-3] 2026-02-08 03:51:08.586667 | orchestrator | ok: [testbed-node-4] 2026-02-08 03:51:08.586687 | orchestrator | ok: [testbed-node-5] 2026-02-08 03:51:08.586706 | orchestrator | 2026-02-08 03:51:08.586723 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-02-08 03:51:08.586740 | orchestrator | Sunday 08 February 2026 03:50:53 +0000 (0:00:00.706) 0:06:24.482 ******* 
2026-02-08 03:51:08.586759 | orchestrator | ok: [testbed-node-3]
2026-02-08 03:51:08.586778 | orchestrator | ok: [testbed-node-4]
2026-02-08 03:51:08.586798 | orchestrator | ok: [testbed-node-5]
2026-02-08 03:51:08.586817 | orchestrator |
2026-02-08 03:51:08.586835 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-02-08 03:51:08.586855 | orchestrator | Sunday 08 February 2026 03:50:54 +0000 (0:00:00.693) 0:06:25.176 *******
2026-02-08 03:51:08.586868 | orchestrator | ok: [testbed-node-3]
2026-02-08 03:51:08.586879 | orchestrator | ok: [testbed-node-4]
2026-02-08 03:51:08.586889 | orchestrator | ok: [testbed-node-5]
2026-02-08 03:51:08.586900 | orchestrator |
2026-02-08 03:51:08.586911 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-02-08 03:51:08.586951 | orchestrator | Sunday 08 February 2026 03:50:55 +0000 (0:00:00.998) 0:06:26.174 *******
2026-02-08 03:51:08.586962 | orchestrator | skipping: [testbed-node-3]
2026-02-08 03:51:08.586973 | orchestrator | skipping: [testbed-node-4]
2026-02-08 03:51:08.586984 | orchestrator | skipping: [testbed-node-5]
2026-02-08 03:51:08.586994 | orchestrator |
2026-02-08 03:51:08.587005 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-02-08 03:51:08.587016 | orchestrator | Sunday 08 February 2026 03:50:55 +0000 (0:00:00.349) 0:06:26.524 *******
2026-02-08 03:51:08.587026 | orchestrator | skipping: [testbed-node-3]
2026-02-08 03:51:08.587037 | orchestrator | skipping: [testbed-node-4]
2026-02-08 03:51:08.587048 | orchestrator | skipping: [testbed-node-5]
2026-02-08 03:51:08.587059 | orchestrator |
2026-02-08 03:51:08.587069 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-02-08 03:51:08.587080 | orchestrator | Sunday 08 February 2026 03:50:55 +0000 (0:00:00.339) 0:06:26.863 *******
2026-02-08 03:51:08.587091 | orchestrator | skipping: [testbed-node-3]
2026-02-08 03:51:08.587101 | orchestrator | skipping: [testbed-node-4]
2026-02-08 03:51:08.587112 | orchestrator | skipping: [testbed-node-5]
2026-02-08 03:51:08.587123 | orchestrator |
2026-02-08 03:51:08.587133 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-02-08 03:51:08.587144 | orchestrator | Sunday 08 February 2026 03:50:56 +0000 (0:00:00.325) 0:06:27.188 *******
2026-02-08 03:51:08.587155 | orchestrator | ok: [testbed-node-3]
2026-02-08 03:51:08.587165 | orchestrator | ok: [testbed-node-4]
2026-02-08 03:51:08.587176 | orchestrator | ok: [testbed-node-5]
2026-02-08 03:51:08.587186 | orchestrator |
2026-02-08 03:51:08.587198 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-02-08 03:51:08.587209 | orchestrator | Sunday 08 February 2026 03:50:57 +0000 (0:00:00.976) 0:06:28.165 *******
2026-02-08 03:51:08.587219 | orchestrator | ok: [testbed-node-3]
2026-02-08 03:51:08.587470 | orchestrator | ok: [testbed-node-4]
2026-02-08 03:51:08.587505 | orchestrator | ok: [testbed-node-5]
2026-02-08 03:51:08.587516 | orchestrator |
2026-02-08 03:51:08.587526 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-02-08 03:51:08.587538 | orchestrator | Sunday 08 February 2026 03:50:57 +0000 (0:00:00.681) 0:06:28.846 *******
2026-02-08 03:51:08.587549 | orchestrator | skipping: [testbed-node-3]
2026-02-08 03:51:08.587560 | orchestrator | skipping: [testbed-node-4]
2026-02-08 03:51:08.587571 | orchestrator | skipping: [testbed-node-5]
2026-02-08 03:51:08.587581 | orchestrator |
2026-02-08 03:51:08.587592 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-02-08 03:51:08.587603 | orchestrator | Sunday 08 February 2026 03:50:58 +0000 (0:00:00.374) 0:06:29.220 *******
2026-02-08 03:51:08.587614 | orchestrator | skipping: [testbed-node-3]
2026-02-08 03:51:08.587624 | orchestrator | skipping: [testbed-node-4]
2026-02-08 03:51:08.587635 | orchestrator | skipping: [testbed-node-5]
2026-02-08 03:51:08.587646 | orchestrator |
2026-02-08 03:51:08.587656 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-02-08 03:51:08.587667 | orchestrator | Sunday 08 February 2026 03:50:58 +0000 (0:00:00.345) 0:06:29.565 *******
2026-02-08 03:51:08.587678 | orchestrator | ok: [testbed-node-3]
2026-02-08 03:51:08.587688 | orchestrator | ok: [testbed-node-4]
2026-02-08 03:51:08.587699 | orchestrator | ok: [testbed-node-5]
2026-02-08 03:51:08.587709 | orchestrator |
2026-02-08 03:51:08.587720 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-02-08 03:51:08.587756 | orchestrator | Sunday 08 February 2026 03:50:59 +0000 (0:00:00.691) 0:06:30.257 *******
2026-02-08 03:51:08.587767 | orchestrator | ok: [testbed-node-3]
2026-02-08 03:51:08.587778 | orchestrator | ok: [testbed-node-4]
2026-02-08 03:51:08.587789 | orchestrator | ok: [testbed-node-5]
2026-02-08 03:51:08.587799 | orchestrator |
2026-02-08 03:51:08.587810 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-02-08 03:51:08.587833 | orchestrator | Sunday 08 February 2026 03:50:59 +0000 (0:00:00.402) 0:06:30.660 *******
2026-02-08 03:51:08.587858 | orchestrator | ok: [testbed-node-3]
2026-02-08 03:51:08.587869 | orchestrator | ok: [testbed-node-4]
2026-02-08 03:51:08.587880 | orchestrator | ok: [testbed-node-5]
2026-02-08 03:51:08.587890 | orchestrator |
2026-02-08 03:51:08.587901 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-02-08 03:51:08.587912 | orchestrator | Sunday 08 February 2026 03:51:00 +0000 (0:00:00.356) 0:06:31.016 *******
2026-02-08 03:51:08.587922 | orchestrator | skipping: [testbed-node-3]
2026-02-08 03:51:08.587933 | orchestrator | skipping: [testbed-node-4]
2026-02-08 03:51:08.587944 | orchestrator | skipping: [testbed-node-5]
2026-02-08 03:51:08.587955 | orchestrator |
2026-02-08 03:51:08.587965 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-02-08 03:51:08.587976 | orchestrator | Sunday 08 February 2026 03:51:00 +0000 (0:00:00.340) 0:06:31.357 *******
2026-02-08 03:51:08.587987 | orchestrator | skipping: [testbed-node-3]
2026-02-08 03:51:08.587998 | orchestrator | skipping: [testbed-node-4]
2026-02-08 03:51:08.588008 | orchestrator | skipping: [testbed-node-5]
2026-02-08 03:51:08.588019 | orchestrator |
2026-02-08 03:51:08.588030 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-02-08 03:51:08.588041 | orchestrator | Sunday 08 February 2026 03:51:01 +0000 (0:00:00.651) 0:06:32.009 *******
2026-02-08 03:51:08.588051 | orchestrator | skipping: [testbed-node-3]
2026-02-08 03:51:08.588062 | orchestrator | skipping: [testbed-node-4]
2026-02-08 03:51:08.588072 | orchestrator | skipping: [testbed-node-5]
2026-02-08 03:51:08.588083 | orchestrator |
2026-02-08 03:51:08.588094 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-02-08 03:51:08.588104 | orchestrator | Sunday 08 February 2026 03:51:01 +0000 (0:00:00.334) 0:06:32.343 *******
2026-02-08 03:51:08.588115 | orchestrator | ok: [testbed-node-3]
2026-02-08 03:51:08.588126 | orchestrator | ok: [testbed-node-4]
2026-02-08 03:51:08.588137 | orchestrator | ok: [testbed-node-5]
2026-02-08 03:51:08.588148 | orchestrator |
2026-02-08 03:51:08.588159 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-02-08 03:51:08.588169 | orchestrator | Sunday 08 February 2026 03:51:01 +0000 (0:00:00.375) 0:06:32.719 *******
2026-02-08 03:51:08.588180 | orchestrator | ok: [testbed-node-3]
2026-02-08 03:51:08.588191 | orchestrator | ok: [testbed-node-4]
2026-02-08 03:51:08.588201 | orchestrator | ok: [testbed-node-5]
2026-02-08 03:51:08.588232 | orchestrator |
2026-02-08 03:51:08.588244 | orchestrator | TASK [ceph-osd : Set_fact add_osd] *********************************************
2026-02-08 03:51:08.588287 | orchestrator | Sunday 08 February 2026 03:51:02 +0000 (0:00:00.893) 0:06:33.613 *******
2026-02-08 03:51:08.588306 | orchestrator | ok: [testbed-node-3]
2026-02-08 03:51:08.588317 | orchestrator | ok: [testbed-node-4]
2026-02-08 03:51:08.588328 | orchestrator | ok: [testbed-node-5]
2026-02-08 03:51:08.588338 | orchestrator |
2026-02-08 03:51:08.588349 | orchestrator | TASK [ceph-osd : Set_fact container_exec_cmd] **********************************
2026-02-08 03:51:08.588360 | orchestrator | Sunday 08 February 2026 03:51:02 +0000 (0:00:00.380) 0:06:33.993 *******
2026-02-08 03:51:08.588371 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-02-08 03:51:08.588382 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-08 03:51:08.588404 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-08 03:51:08.588415 | orchestrator |
2026-02-08 03:51:08.588426 | orchestrator | TASK [ceph-osd : Include_tasks system_tuning.yml] ******************************
2026-02-08 03:51:08.588437 | orchestrator | Sunday 08 February 2026 03:51:03 +0000 (0:00:00.707) 0:06:34.701 *******
2026-02-08 03:51:08.588448 | orchestrator | included: /ansible/roles/ceph-osd/tasks/system_tuning.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-02-08 03:51:08.588459 | orchestrator |
2026-02-08 03:51:08.588470 | orchestrator | TASK [ceph-osd : Create tmpfiles.d directory] **********************************
2026-02-08 03:51:08.588481 | orchestrator | Sunday 08 February 2026 03:51:04 +0000 (0:00:00.634) 0:06:35.336 *******
2026-02-08 03:51:08.588498 | orchestrator | skipping: [testbed-node-3]
2026-02-08 03:51:08.588542 | orchestrator | skipping: [testbed-node-4]
2026-02-08 03:51:08.588564 | orchestrator | skipping: [testbed-node-5]
2026-02-08 03:51:08.588575 | orchestrator |
2026-02-08 03:51:08.588586 | orchestrator | TASK [ceph-osd : Disable transparent hugepage] *********************************
2026-02-08 03:51:08.588596 | orchestrator | Sunday 08 February 2026 03:51:04 +0000 (0:00:00.616) 0:06:35.952 *******
2026-02-08 03:51:08.588607 | orchestrator | skipping: [testbed-node-3]
2026-02-08 03:51:08.588618 | orchestrator | skipping: [testbed-node-4]
2026-02-08 03:51:08.588629 | orchestrator | skipping: [testbed-node-5]
2026-02-08 03:51:08.588640 | orchestrator |
2026-02-08 03:51:08.588650 | orchestrator | TASK [ceph-osd : Get default vm.min_free_kbytes] *******************************
2026-02-08 03:51:08.588661 | orchestrator | Sunday 08 February 2026 03:51:05 +0000 (0:00:00.441) 0:06:36.394 *******
2026-02-08 03:51:08.588672 | orchestrator | ok: [testbed-node-3]
2026-02-08 03:51:08.588683 | orchestrator | ok: [testbed-node-4]
2026-02-08 03:51:08.588694 | orchestrator | ok: [testbed-node-5]
2026-02-08 03:51:08.588704 | orchestrator |
2026-02-08 03:51:08.588715 | orchestrator | TASK [ceph-osd : Set_fact vm_min_free_kbytes] **********************************
2026-02-08 03:51:08.588726 | orchestrator | Sunday 08 February 2026 03:51:06 +0000 (0:00:00.622) 0:06:37.016 *******
2026-02-08 03:51:08.588737 | orchestrator | ok: [testbed-node-3]
2026-02-08 03:51:08.588748 | orchestrator | ok: [testbed-node-4]
2026-02-08 03:51:08.588759 | orchestrator | ok: [testbed-node-5]
2026-02-08 03:51:08.588770 | orchestrator |
2026-02-08 03:51:08.588780 | orchestrator | TASK [ceph-osd : Apply operating system tuning] ********************************
2026-02-08 03:51:08.588791 | orchestrator | Sunday 08 February 2026 03:51:06 +0000 (0:00:00.654) 0:06:37.671 *******
2026-02-08 03:51:08.588802 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True})
2026-02-08 03:51:08.588823 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True})
2026-02-08 03:52:08.892279 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True})
2026-02-08 03:52:08.892443 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.file-max', 'value': 26234859})
2026-02-08 03:52:08.892471 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.file-max', 'value': 26234859})
2026-02-08 03:52:08.892479 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.file-max', 'value': 26234859})
2026-02-08 03:52:08.892486 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0})
2026-02-08 03:52:08.892493 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0})
2026-02-08 03:52:08.892501 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0})
2026-02-08 03:52:08.892508 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 10})
2026-02-08 03:52:08.892515 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 10})
2026-02-08 03:52:08.892522 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 10})
2026-02-08 03:52:08.892529 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'})
2026-02-08 03:52:08.892535 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'})
2026-02-08 03:52:08.892542 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'})
2026-02-08 03:52:08.892549 | orchestrator |
2026-02-08 03:52:08.892556 | orchestrator | TASK [ceph-osd : Install dependencies] *****************************************
2026-02-08 03:52:08.892563 | orchestrator | Sunday 08 February 2026 03:51:08 +0000 (0:00:01.899) 0:06:39.570 *******
2026-02-08 03:52:08.892570 | orchestrator | skipping: [testbed-node-3]
2026-02-08 03:52:08.892578 | orchestrator | skipping: [testbed-node-4]
2026-02-08 03:52:08.892584 | orchestrator | skipping: [testbed-node-5]
2026-02-08 03:52:08.892591 | orchestrator |
2026-02-08 03:52:08.892617 | orchestrator | TASK [ceph-osd : Include_tasks common.yml] *************************************
2026-02-08 03:52:08.892625 | orchestrator | Sunday 08 February 2026 03:51:08 +0000 (0:00:00.354) 0:06:39.925 *******
2026-02-08 03:52:08.892631 | orchestrator | included: /ansible/roles/ceph-osd/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-02-08 03:52:08.892638 | orchestrator |
2026-02-08 03:52:08.892645 | orchestrator | TASK [ceph-osd : Create bootstrap-osd and osd directories] *********************
2026-02-08 03:52:08.892651 | orchestrator | Sunday 08 February 2026 03:51:09 +0000 (0:00:00.833) 0:06:40.758 *******
2026-02-08 03:52:08.892658 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd/)
2026-02-08 03:52:08.892665 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd/)
2026-02-08 03:52:08.892671 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd/)
2026-02-08 03:52:08.892679 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/osd/)
2026-02-08 03:52:08.892686 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/osd/)
2026-02-08 03:52:08.892692 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/osd/)
2026-02-08 03:52:08.892699 | orchestrator |
2026-02-08 03:52:08.892706 | orchestrator | TASK [ceph-osd : Get keys from monitors] ***************************************
2026-02-08 03:52:08.892712 | orchestrator | Sunday 08 February 2026 03:51:10 +0000 (0:00:00.979) 0:06:41.737 *******
2026-02-08 03:52:08.892719 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-02-08 03:52:08.892726 | orchestrator | skipping: [testbed-node-3] => (item=None)
2026-02-08 03:52:08.892732 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}]
2026-02-08 03:52:08.892739 | orchestrator |
2026-02-08 03:52:08.892746 | orchestrator | TASK [ceph-osd : Copy ceph key(s) if needed] ***********************************
2026-02-08 03:52:08.892752 | orchestrator | Sunday 08 February 2026 03:51:12 +0000 (0:00:01.951) 0:06:43.689 *******
2026-02-08 03:52:08.892759 | orchestrator | changed: [testbed-node-3] => (item=None)
2026-02-08 03:52:08.892766 | orchestrator | skipping: [testbed-node-3] => (item=None)
2026-02-08 03:52:08.892773 | orchestrator | changed: [testbed-node-3]
2026-02-08 03:52:08.892780 | orchestrator | changed: [testbed-node-4] => (item=None)
2026-02-08 03:52:08.892786 | orchestrator | skipping: [testbed-node-4] => (item=None)
2026-02-08 03:52:08.892793 | orchestrator | changed: [testbed-node-4]
2026-02-08 03:52:08.892800 | orchestrator | changed: [testbed-node-5] => (item=None)
2026-02-08 03:52:08.892807 | orchestrator | skipping: [testbed-node-5] => (item=None)
2026-02-08 03:52:08.892813 | orchestrator | changed: [testbed-node-5]
2026-02-08 03:52:08.892821 | orchestrator |
2026-02-08 03:52:08.892829 | orchestrator | TASK [ceph-osd : Set noup flag] ************************************************
2026-02-08 03:52:08.892837 | orchestrator | Sunday 08 February 2026 03:51:13 +0000 (0:00:01.173) 0:06:44.862 *******
2026-02-08 03:52:08.892849 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-02-08 03:52:08.892860 | orchestrator |
2026-02-08 03:52:08.892871 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm.yml] ******************************
2026-02-08 03:52:08.892882 | orchestrator | Sunday 08 February 2026 03:51:15 +0000 (0:00:01.948) 0:06:46.811 *******
2026-02-08 03:52:08.892893 | orchestrator | included: /ansible/roles/ceph-osd/tasks/scenarios/lvm.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-02-08 03:52:08.892905 | orchestrator |
2026-02-08 03:52:08.892916 | orchestrator | TASK [ceph-osd : Use ceph-volume to create osds] *******************************
2026-02-08 03:52:08.892926 | orchestrator | Sunday 08 February 2026 03:51:16 +0000 (0:00:00.850) 0:06:47.661 *******
2026-02-08 03:52:08.892956 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-658e9559-2696-538a-a0a4-811fe95d0be4', 'data_vg': 'ceph-658e9559-2696-538a-a0a4-811fe95d0be4'})
2026-02-08 03:52:08.892970 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-7ad89cb8-326d-5a7d-8045-6e04c12be05a', 'data_vg': 'ceph-7ad89cb8-326d-5a7d-8045-6e04c12be05a'})
2026-02-08 03:52:08.892988 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-1f36c880-548c-5a66-856f-2c4e799d94fc', 'data_vg': 'ceph-1f36c880-548c-5a66-856f-2c4e799d94fc'})
2026-02-08 03:52:08.893026 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-b3e05e81-e469-5668-9a53-5e8f92025307', 'data_vg': 'ceph-b3e05e81-e469-5668-9a53-5e8f92025307'})
2026-02-08 03:52:08.893039 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-edf9913e-48af-595a-836b-515c584cb757', 'data_vg': 'ceph-edf9913e-48af-595a-836b-515c584cb757'})
2026-02-08 03:52:08.893061 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-98a4cb59-dd7a-5ec9-b94d-174a40339046', 'data_vg': 'ceph-98a4cb59-dd7a-5ec9-b94d-174a40339046'})
2026-02-08 03:52:08.893073 | orchestrator |
2026-02-08 03:52:08.893085 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm-batch.yml] ************************
2026-02-08 03:52:08.893098 | orchestrator | Sunday 08 February 2026 03:51:57 +0000 (0:00:40.440) 0:07:28.101 *******
2026-02-08 03:52:08.893109 | orchestrator | skipping: [testbed-node-3]
2026-02-08 03:52:08.893121 | orchestrator | skipping: [testbed-node-4]
2026-02-08 03:52:08.893132 | orchestrator | skipping: [testbed-node-5]
2026-02-08 03:52:08.893144 | orchestrator |
2026-02-08 03:52:08.893153 | orchestrator | TASK [ceph-osd : Include_tasks start_osds.yml] *********************************
2026-02-08 03:52:08.893160 | orchestrator | Sunday 08 February 2026 03:51:57 +0000 (0:00:00.337) 0:07:28.439 *******
2026-02-08 03:52:08.893167 | orchestrator | included: /ansible/roles/ceph-osd/tasks/start_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-02-08 03:52:08.893173 | orchestrator |
2026-02-08 03:52:08.893180 | orchestrator | TASK [ceph-osd : Get osd ids] **************************************************
2026-02-08 03:52:08.893187 | orchestrator | Sunday 08 February 2026 03:51:58 +0000 (0:00:00.874) 0:07:29.313 *******
2026-02-08 03:52:08.893194 | orchestrator | ok: [testbed-node-3]
2026-02-08 03:52:08.893201 | orchestrator | ok: [testbed-node-4]
2026-02-08 03:52:08.893207 | orchestrator | ok: [testbed-node-5]
2026-02-08 03:52:08.893214 | orchestrator |
2026-02-08 03:52:08.893220 | orchestrator | TASK [ceph-osd : Collect osd ids] **********************************************
2026-02-08 03:52:08.893227 | orchestrator | Sunday 08 February 2026 03:51:58 +0000 (0:00:00.662) 0:07:29.976 *******
2026-02-08 03:52:08.893234 | orchestrator | ok: [testbed-node-3]
2026-02-08 03:52:08.893240 | orchestrator | ok: [testbed-node-4]
2026-02-08 03:52:08.893247 | orchestrator | ok: [testbed-node-5]
2026-02-08 03:52:08.893254 | orchestrator |
2026-02-08 03:52:08.893261 | orchestrator | TASK [ceph-osd : Include_tasks systemd.yml] ************************************
2026-02-08 03:52:08.893267 | orchestrator | Sunday 08 February 2026 03:52:01 +0000 (0:00:02.561) 0:07:32.537 *******
2026-02-08 03:52:08.893274 | orchestrator | included: /ansible/roles/ceph-osd/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-02-08 03:52:08.893281 | orchestrator |
2026-02-08 03:52:08.893306 | orchestrator | TASK [ceph-osd : Generate systemd unit file] ***********************************
2026-02-08 03:52:08.893317 | orchestrator | Sunday 08 February 2026 03:52:02 +0000 (0:00:00.896) 0:07:33.434 *******
2026-02-08 03:52:08.893324 | orchestrator | changed: [testbed-node-3]
2026-02-08 03:52:08.893330 | orchestrator | changed: [testbed-node-4]
2026-02-08 03:52:08.893337 | orchestrator | changed: [testbed-node-5]
2026-02-08 03:52:08.893343 | orchestrator |
2026-02-08 03:52:08.893350 | orchestrator | TASK [ceph-osd : Generate systemd ceph-osd target file] ************************
2026-02-08 03:52:08.893357 | orchestrator | Sunday 08 February 2026 03:52:03 +0000 (0:00:01.171) 0:07:34.605 *******
2026-02-08 03:52:08.893363 | orchestrator | changed: [testbed-node-3]
2026-02-08 03:52:08.893370 | orchestrator | changed: [testbed-node-4]
2026-02-08 03:52:08.893376 | orchestrator | changed: [testbed-node-5]
2026-02-08 03:52:08.893383 | orchestrator |
2026-02-08 03:52:08.893390 | orchestrator | TASK [ceph-osd : Enable ceph-osd.target] ***************************************
2026-02-08 03:52:08.893396 | orchestrator | Sunday 08 February 2026 03:52:04 +0000 (0:00:01.183) 0:07:35.788 *******
2026-02-08 03:52:08.893403 | orchestrator | changed: [testbed-node-3]
2026-02-08 03:52:08.893409 | orchestrator | changed: [testbed-node-4]
2026-02-08 03:52:08.893429 | orchestrator | changed: [testbed-node-5]
2026-02-08 03:52:08.893440 | orchestrator |
2026-02-08 03:52:08.893451 | orchestrator | TASK [ceph-osd : Ensure systemd service override directory exists] *************
2026-02-08 03:52:08.893462 | orchestrator | Sunday 08 February 2026 03:52:07 +0000 (0:00:02.374) 0:07:38.163 *******
2026-02-08 03:52:08.893473 | orchestrator | skipping: [testbed-node-3]
2026-02-08 03:52:08.893483 | orchestrator | skipping: [testbed-node-4]
2026-02-08 03:52:08.893493 | orchestrator | skipping: [testbed-node-5]
2026-02-08 03:52:08.893504 | orchestrator |
2026-02-08 03:52:08.893513 | orchestrator | TASK [ceph-osd : Add ceph-osd systemd service overrides] ***********************
2026-02-08 03:52:08.893523 | orchestrator | Sunday 08 February 2026 03:52:07 +0000 (0:00:00.345) 0:07:38.508 *******
2026-02-08 03:52:08.893533 | orchestrator | skipping: [testbed-node-3]
2026-02-08 03:52:08.893544 | orchestrator | skipping: [testbed-node-4]
2026-02-08 03:52:08.893554 | orchestrator | skipping: [testbed-node-5]
2026-02-08 03:52:08.893564 | orchestrator |
2026-02-08 03:52:08.893574 | orchestrator | TASK [ceph-osd : Ensure /var/lib/ceph/osd/- is present] *********
2026-02-08 03:52:08.893586 | orchestrator | Sunday 08 February 2026 03:52:07 +0000 (0:00:00.357) 0:07:38.866 *******
2026-02-08 03:52:08.893596 | orchestrator | ok: [testbed-node-3] => (item=0)
2026-02-08 03:52:08.893606 | orchestrator | ok: [testbed-node-4] => (item=5)
2026-02-08 03:52:08.893617 | orchestrator | ok: [testbed-node-5] => (item=4)
2026-02-08 03:52:08.893627 | orchestrator | ok: [testbed-node-3] => (item=3)
2026-02-08 03:52:08.893637 | orchestrator | ok: [testbed-node-4] => (item=1)
2026-02-08 03:52:08.893647 | orchestrator | ok: [testbed-node-5] => (item=2)
2026-02-08 03:52:08.893659 | orchestrator |
2026-02-08 03:52:08.893672 | orchestrator | TASK [ceph-osd : Write run file in /var/lib/ceph/osd/xxxx/run] *****************
2026-02-08 03:52:08.893693 | orchestrator | Sunday 08 February 2026 03:52:08 +0000 (0:00:01.002) 0:07:39.869 *******
2026-02-08 03:52:46.280569 | orchestrator | changed: [testbed-node-3] => (item=0)
2026-02-08 03:52:46.280764 | orchestrator | changed: [testbed-node-4] => (item=5)
2026-02-08 03:52:46.280780 | orchestrator | changed: [testbed-node-5] => (item=4)
2026-02-08 03:52:46.280804 | orchestrator | changed: [testbed-node-3] => (item=3)
2026-02-08 03:52:46.280812 | orchestrator | changed: [testbed-node-4] => (item=1)
2026-02-08 03:52:46.280820 | orchestrator | changed: [testbed-node-5] => (item=2)
2026-02-08 03:52:46.280829 | orchestrator |
2026-02-08 03:52:46.280838 | orchestrator | TASK [ceph-osd : Systemd start osd] ********************************************
2026-02-08 03:52:46.280847 | orchestrator | Sunday 08 February 2026 03:52:11 +0000 (0:00:02.446) 0:07:42.315 *******
2026-02-08 03:52:46.280856 | orchestrator | changed: [testbed-node-3] => (item=0)
2026-02-08 03:52:46.280864 | orchestrator | changed: [testbed-node-4] => (item=5)
2026-02-08 03:52:46.280872 | orchestrator | changed: [testbed-node-5] => (item=4)
2026-02-08 03:52:46.280880 | orchestrator | changed: [testbed-node-3] => (item=3)
2026-02-08 03:52:46.280888 | orchestrator | changed: [testbed-node-4] => (item=1)
2026-02-08 03:52:46.280895 | orchestrator | changed: [testbed-node-5] => (item=2)
2026-02-08 03:52:46.280908 | orchestrator |
2026-02-08 03:52:46.280923 | orchestrator | TASK [ceph-osd : Unset noup flag] **********************************************
2026-02-08 03:52:46.280938 | orchestrator | Sunday 08 February 2026 03:52:14 +0000 (0:00:03.627) 0:07:45.943 *******
2026-02-08 03:52:46.280952 | orchestrator | skipping: [testbed-node-3]
2026-02-08 03:52:46.280966 | orchestrator | skipping: [testbed-node-4]
2026-02-08 03:52:46.280978 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)]
2026-02-08 03:52:46.280992 | orchestrator |
2026-02-08 03:52:46.281006 | orchestrator | TASK [ceph-osd : Wait for all osd to be up] ************************************
2026-02-08 03:52:46.281021 | orchestrator | Sunday 08 February 2026 03:52:17 +0000 (0:00:03.018) 0:07:48.962 *******
2026-02-08 03:52:46.281034 | orchestrator | skipping: [testbed-node-3]
2026-02-08 03:52:46.281047 | orchestrator | skipping: [testbed-node-4]
2026-02-08 03:52:46.281060 | orchestrator | FAILED - RETRYING: [testbed-node-5 -> testbed-node-0]: Wait for all osd to be up (60 retries left).
2026-02-08 03:52:46.281102 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)]
2026-02-08 03:52:46.281117 | orchestrator |
2026-02-08 03:52:46.281130 | orchestrator | TASK [ceph-osd : Include crush_rules.yml] **************************************
2026-02-08 03:52:46.281143 | orchestrator | Sunday 08 February 2026 03:52:30 +0000 (0:00:12.470) 0:08:01.433 *******
2026-02-08 03:52:46.281155 | orchestrator | skipping: [testbed-node-3]
2026-02-08 03:52:46.281168 | orchestrator | skipping: [testbed-node-4]
2026-02-08 03:52:46.281181 | orchestrator | skipping: [testbed-node-5]
2026-02-08 03:52:46.281194 | orchestrator |
2026-02-08 03:52:46.281208 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2026-02-08 03:52:46.281221 | orchestrator | Sunday 08 February 2026 03:52:31 +0000 (0:00:01.295) 0:08:02.728 *******
2026-02-08 03:52:46.281234 | orchestrator | skipping: [testbed-node-3]
2026-02-08 03:52:46.281247 | orchestrator | skipping: [testbed-node-4]
2026-02-08 03:52:46.281261 | orchestrator | skipping: [testbed-node-5]
2026-02-08 03:52:46.281275 | orchestrator |
2026-02-08 03:52:46.281288 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] **********************************
2026-02-08 03:52:46.281301 | orchestrator | Sunday 08 February 2026 03:52:32 +0000 (0:00:00.359) 0:08:03.088 *******
2026-02-08 03:52:46.281537 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-02-08 03:52:46.281551 | orchestrator |
2026-02-08 03:52:46.281560 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] **********************
2026-02-08 03:52:46.281568 | orchestrator | Sunday 08 February 2026 03:52:32 +0000 (0:00:00.904) 0:08:03.993 *******
2026-02-08 03:52:46.281576 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-02-08 03:52:46.281584 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-02-08 03:52:46.281592 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-02-08 03:52:46.281600 | orchestrator | skipping: [testbed-node-3]
2026-02-08 03:52:46.281608 | orchestrator |
2026-02-08 03:52:46.281616 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ********
2026-02-08 03:52:46.281624 | orchestrator | Sunday 08 February 2026 03:52:33 +0000 (0:00:00.518) 0:08:04.511 *******
2026-02-08 03:52:46.281646 | orchestrator | skipping: [testbed-node-3]
2026-02-08 03:52:46.281654 | orchestrator | skipping: [testbed-node-4]
2026-02-08 03:52:46.281671 | orchestrator | skipping: [testbed-node-5]
2026-02-08 03:52:46.281679 | orchestrator |
2026-02-08 03:52:46.281687 | orchestrator | RUNNING HANDLER [ceph-handler : Unset noup flag] *******************************
2026-02-08 03:52:46.281695 | orchestrator | Sunday 08 February 2026 03:52:33 +0000 (0:00:00.332) 0:08:04.844 *******
2026-02-08 03:52:46.281703 | orchestrator | skipping: [testbed-node-3]
2026-02-08 03:52:46.281711 | orchestrator |
2026-02-08 03:52:46.281719 | orchestrator | RUNNING HANDLER [ceph-handler : Copy osd restart script] ***********************
2026-02-08 03:52:46.281727 | orchestrator | Sunday 08 February 2026 03:52:34 +0000 (0:00:00.253) 0:08:05.098 *******
2026-02-08 03:52:46.281735 | orchestrator | skipping: [testbed-node-3]
2026-02-08 03:52:46.281743 | orchestrator | skipping: [testbed-node-4]
2026-02-08 03:52:46.281751 | orchestrator | skipping: [testbed-node-5]
2026-02-08 03:52:46.281759 | orchestrator |
2026-02-08 03:52:46.281767 | orchestrator | RUNNING HANDLER [ceph-handler : Get pool list] *********************************
2026-02-08 03:52:46.281775 | orchestrator | Sunday 08 February 2026 03:52:34 +0000 (0:00:00.616) 0:08:05.714 *******
2026-02-08 03:52:46.281782 | orchestrator | skipping: [testbed-node-3]
2026-02-08 03:52:46.281790 | orchestrator |
2026-02-08 03:52:46.281798 | orchestrator | RUNNING HANDLER [ceph-handler : Get balancer module status] ********************
2026-02-08 03:52:46.281806 | orchestrator | Sunday 08 February 2026 03:52:34 +0000 (0:00:00.274) 0:08:05.989 *******
2026-02-08 03:52:46.281814 | orchestrator | skipping: [testbed-node-3]
2026-02-08 03:52:46.281822 | orchestrator |
2026-02-08 03:52:46.281830 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] **************
2026-02-08 03:52:46.281838 | orchestrator | Sunday 08 February 2026 03:52:35 +0000 (0:00:00.268) 0:08:06.258 *******
2026-02-08 03:52:46.281867 | orchestrator | skipping: [testbed-node-3]
2026-02-08 03:52:46.281883 | orchestrator |
2026-02-08 03:52:46.281921 | orchestrator | RUNNING HANDLER [ceph-handler : Disable balancer] ******************************
2026-02-08 03:52:46.281936 | orchestrator | Sunday 08 February 2026 03:52:35 +0000 (0:00:00.131) 0:08:06.390 *******
2026-02-08 03:52:46.281960 | orchestrator | skipping: [testbed-node-3]
2026-02-08 03:52:46.281973 | orchestrator |
2026-02-08 03:52:46.281986 | orchestrator | RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] *****************
2026-02-08 03:52:46.282000 | orchestrator | Sunday 08 February 2026 03:52:35 +0000 (0:00:00.238) 0:08:06.628 *******
2026-02-08 03:52:46.282075 | orchestrator | skipping: [testbed-node-3]
2026-02-08 03:52:46.282093 | orchestrator |
2026-02-08 03:52:46.282107 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] *******************
2026-02-08 03:52:46.282122 | orchestrator | Sunday 08 February 2026 03:52:35 +0000 (0:00:00.251) 0:08:06.879 *******
2026-02-08 03:52:46.282137 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-02-08 03:52:46.282152 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-02-08 03:52:46.282167 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-02-08 03:52:46.282181 | orchestrator | skipping: [testbed-node-3]
2026-02-08 03:52:46.282198 | orchestrator |
2026-02-08 03:52:46.282212 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] *********
2026-02-08 03:52:46.282226 | orchestrator | Sunday 08 February 2026 03:52:36 +0000 (0:00:00.448) 0:08:07.328 *******
2026-02-08 03:52:46.282234 | orchestrator | skipping: [testbed-node-3]
2026-02-08 03:52:46.282242 | orchestrator | skipping: [testbed-node-4]
2026-02-08 03:52:46.282250 | orchestrator | skipping: [testbed-node-5]
2026-02-08 03:52:46.282258 | orchestrator |
2026-02-08 03:52:46.282270 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] ***************
2026-02-08 03:52:46.282281 | orchestrator | Sunday 08 February 2026 03:52:36 +0000 (0:00:00.353) 0:08:07.681 *******
2026-02-08 03:52:46.282299 | orchestrator | skipping: [testbed-node-3]
2026-02-08 03:52:46.282355 | orchestrator |
2026-02-08 03:52:46.282367 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable balancer] ****************************
2026-02-08 03:52:46.282380 | orchestrator | Sunday 08 February 2026 03:52:36 +0000 (0:00:00.242) 0:08:07.924 *******
2026-02-08 03:52:46.282392 | orchestrator | skipping: [testbed-node-3]
2026-02-08 03:52:46.282403 | orchestrator |
2026-02-08 03:52:46.282415 | orchestrator | PLAY [Apply role ceph-crash] ***************************************************
2026-02-08 03:52:46.282428 | orchestrator |
2026-02-08 03:52:46.282440 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-02-08 03:52:46.282454 | orchestrator | Sunday 08 February 2026 03:52:38 +0000 (0:00:01.398) 0:08:09.322 *******
2026-02-08 03:52:46.282469 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-08 03:52:46.282484 | orchestrator |
2026-02-08 03:52:46.282496 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-02-08 03:52:46.282504 | orchestrator | Sunday 08 February 2026 03:52:39 +0000 (0:00:01.373) 0:08:10.696 *******
2026-02-08 03:52:46.282512 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-08 03:52:46.282520 | orchestrator |
2026-02-08 03:52:46.282533 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-02-08 03:52:46.282546 | orchestrator | Sunday 08 February 2026 03:52:41 +0000 (0:00:01.402) 0:08:12.099 *******
2026-02-08 03:52:46.282559 | orchestrator | skipping: [testbed-node-3]
2026-02-08 03:52:46.282572 | orchestrator | skipping: [testbed-node-4]
2026-02-08 03:52:46.282585 | orchestrator | skipping: [testbed-node-5]
2026-02-08 03:52:46.282598 | orchestrator | ok: [testbed-node-0]
2026-02-08 03:52:46.282611 | orchestrator | ok: [testbed-node-1]
2026-02-08 03:52:46.282623 | orchestrator | ok: [testbed-node-2]
2026-02-08 03:52:46.282636 | orchestrator |
2026-02-08 03:52:46.282663 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-02-08 03:52:46.282678 | orchestrator | Sunday 08 February 2026 03:52:42 +0000 (0:00:01.347) 0:08:13.447 *******
2026-02-08 03:52:46.282692 | orchestrator | skipping: [testbed-node-0]
2026-02-08 03:52:46.282705 | orchestrator | ok: [testbed-node-3]
2026-02-08 03:52:46.282715 | orchestrator | skipping: [testbed-node-1]
2026-02-08 03:52:46.282723 | orchestrator | ok: [testbed-node-4]
2026-02-08 03:52:46.282731 | orchestrator | skipping: [testbed-node-2]
2026-02-08 03:52:46.282739 | orchestrator | ok: [testbed-node-5]
2026-02-08 03:52:46.282746 | orchestrator |
2026-02-08 03:52:46.282754 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-02-08 03:52:46.282762 | orchestrator | Sunday 08 February 2026 03:52:43 +0000 (0:00:00.735) 0:08:14.182 *******
2026-02-08 03:52:46.282770 | orchestrator | skipping: [testbed-node-0]
2026-02-08 03:52:46.282778 | orchestrator | ok: [testbed-node-3]
2026-02-08 03:52:46.282785 | orchestrator | skipping: [testbed-node-1]
2026-02-08 03:52:46.282793 | orchestrator | ok: [testbed-node-4]
2026-02-08 03:52:46.282801 | orchestrator | ok: [testbed-node-5]
2026-02-08 03:52:46.282809 | orchestrator | skipping: [testbed-node-2]
2026-02-08 03:52:46.282816 | orchestrator |
2026-02-08 03:52:46.282824 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-02-08 03:52:46.282832 | orchestrator | Sunday 08 February 2026 03:52:44 +0000 (0:00:00.946) 0:08:15.128 *******
2026-02-08 03:52:46.282840 | orchestrator | skipping: [testbed-node-0]
2026-02-08 03:52:46.282848 | orchestrator | ok: [testbed-node-3]
2026-02-08 03:52:46.282855 | orchestrator | skipping: [testbed-node-1]
2026-02-08 03:52:46.282863 | orchestrator | ok: [testbed-node-4]
2026-02-08 03:52:46.282871 | orchestrator | skipping: [testbed-node-2]
2026-02-08 03:52:46.282878 | orchestrator | ok: [testbed-node-5]
2026-02-08 03:52:46.282886 | orchestrator |
2026-02-08 03:52:46.282894 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-02-08 03:52:46.282902 | orchestrator | Sunday 08 February 2026 03:52:44 +0000 (0:00:00.736) 0:08:15.865 *******
2026-02-08 03:52:46.282909 | orchestrator | skipping: [testbed-node-3]
2026-02-08 03:52:46.282917 | orchestrator | skipping: [testbed-node-4]
2026-02-08 03:52:46.282925 | orchestrator | skipping: [testbed-node-5]
2026-02-08 03:52:46.282933 | orchestrator | ok: [testbed-node-0]
2026-02-08 03:52:46.282941 | orchestrator | ok: [testbed-node-1]
2026-02-08 03:52:46.282948 | orchestrator | ok: [testbed-node-2]
2026-02-08 03:52:46.282956 | orchestrator |
2026-02-08 03:52:46.282977 | orchestrator | TASK [ceph-handler : Check for a rbd mirror
container] ************************* 2026-02-08 03:53:16.997941 | orchestrator | Sunday 08 February 2026 03:52:46 +0000 (0:00:01.395) 0:08:17.260 ******* 2026-02-08 03:53:16.998112 | orchestrator | skipping: [testbed-node-3] 2026-02-08 03:53:16.998126 | orchestrator | skipping: [testbed-node-4] 2026-02-08 03:53:16.998133 | orchestrator | skipping: [testbed-node-5] 2026-02-08 03:53:16.998139 | orchestrator | skipping: [testbed-node-0] 2026-02-08 03:53:16.998146 | orchestrator | skipping: [testbed-node-1] 2026-02-08 03:53:16.998152 | orchestrator | skipping: [testbed-node-2] 2026-02-08 03:53:16.998158 | orchestrator | 2026-02-08 03:53:16.998166 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-02-08 03:53:16.998172 | orchestrator | Sunday 08 February 2026 03:52:46 +0000 (0:00:00.647) 0:08:17.908 ******* 2026-02-08 03:53:16.998179 | orchestrator | skipping: [testbed-node-3] 2026-02-08 03:53:16.998185 | orchestrator | skipping: [testbed-node-4] 2026-02-08 03:53:16.998191 | orchestrator | skipping: [testbed-node-5] 2026-02-08 03:53:16.998197 | orchestrator | skipping: [testbed-node-0] 2026-02-08 03:53:16.998203 | orchestrator | skipping: [testbed-node-1] 2026-02-08 03:53:16.998209 | orchestrator | skipping: [testbed-node-2] 2026-02-08 03:53:16.998216 | orchestrator | 2026-02-08 03:53:16.998222 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-02-08 03:53:16.998228 | orchestrator | Sunday 08 February 2026 03:52:47 +0000 (0:00:00.925) 0:08:18.833 ******* 2026-02-08 03:53:16.998235 | orchestrator | ok: [testbed-node-3] 2026-02-08 03:53:16.998357 | orchestrator | ok: [testbed-node-4] 2026-02-08 03:53:16.998367 | orchestrator | ok: [testbed-node-5] 2026-02-08 03:53:16.998373 | orchestrator | ok: [testbed-node-0] 2026-02-08 03:53:16.998379 | orchestrator | ok: [testbed-node-1] 2026-02-08 03:53:16.998385 | orchestrator | ok: [testbed-node-2] 2026-02-08 03:53:16.998392 | 
orchestrator | 2026-02-08 03:53:16.998402 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-02-08 03:53:16.998413 | orchestrator | Sunday 08 February 2026 03:52:48 +0000 (0:00:01.101) 0:08:19.935 ******* 2026-02-08 03:53:16.998422 | orchestrator | ok: [testbed-node-3] 2026-02-08 03:53:16.998432 | orchestrator | ok: [testbed-node-4] 2026-02-08 03:53:16.998442 | orchestrator | ok: [testbed-node-5] 2026-02-08 03:53:16.998453 | orchestrator | ok: [testbed-node-0] 2026-02-08 03:53:16.998463 | orchestrator | ok: [testbed-node-1] 2026-02-08 03:53:16.998474 | orchestrator | ok: [testbed-node-2] 2026-02-08 03:53:16.998486 | orchestrator | 2026-02-08 03:53:16.998496 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-02-08 03:53:16.998506 | orchestrator | Sunday 08 February 2026 03:52:50 +0000 (0:00:01.441) 0:08:21.376 ******* 2026-02-08 03:53:16.998515 | orchestrator | skipping: [testbed-node-3] 2026-02-08 03:53:16.998526 | orchestrator | skipping: [testbed-node-4] 2026-02-08 03:53:16.998541 | orchestrator | skipping: [testbed-node-5] 2026-02-08 03:53:16.998553 | orchestrator | skipping: [testbed-node-0] 2026-02-08 03:53:16.998563 | orchestrator | skipping: [testbed-node-1] 2026-02-08 03:53:16.998573 | orchestrator | skipping: [testbed-node-2] 2026-02-08 03:53:16.998584 | orchestrator | 2026-02-08 03:53:16.998593 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-02-08 03:53:16.998605 | orchestrator | Sunday 08 February 2026 03:52:51 +0000 (0:00:00.681) 0:08:22.057 ******* 2026-02-08 03:53:16.998614 | orchestrator | skipping: [testbed-node-3] 2026-02-08 03:53:16.998624 | orchestrator | skipping: [testbed-node-4] 2026-02-08 03:53:16.998634 | orchestrator | skipping: [testbed-node-5] 2026-02-08 03:53:16.998643 | orchestrator | ok: [testbed-node-0] 2026-02-08 03:53:16.998653 | orchestrator | ok: [testbed-node-1] 2026-02-08 
03:53:16.998663 | orchestrator | ok: [testbed-node-2] 2026-02-08 03:53:16.998674 | orchestrator | 2026-02-08 03:53:16.998685 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-02-08 03:53:16.998696 | orchestrator | Sunday 08 February 2026 03:52:52 +0000 (0:00:00.999) 0:08:23.057 ******* 2026-02-08 03:53:16.998708 | orchestrator | ok: [testbed-node-3] 2026-02-08 03:53:16.998718 | orchestrator | ok: [testbed-node-4] 2026-02-08 03:53:16.998729 | orchestrator | ok: [testbed-node-5] 2026-02-08 03:53:16.998740 | orchestrator | skipping: [testbed-node-0] 2026-02-08 03:53:16.998749 | orchestrator | skipping: [testbed-node-1] 2026-02-08 03:53:16.998760 | orchestrator | skipping: [testbed-node-2] 2026-02-08 03:53:16.998770 | orchestrator | 2026-02-08 03:53:16.998780 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-02-08 03:53:16.998789 | orchestrator | Sunday 08 February 2026 03:52:52 +0000 (0:00:00.646) 0:08:23.703 ******* 2026-02-08 03:53:16.998800 | orchestrator | ok: [testbed-node-3] 2026-02-08 03:53:16.998810 | orchestrator | ok: [testbed-node-4] 2026-02-08 03:53:16.998820 | orchestrator | ok: [testbed-node-5] 2026-02-08 03:53:16.998829 | orchestrator | skipping: [testbed-node-0] 2026-02-08 03:53:16.998841 | orchestrator | skipping: [testbed-node-1] 2026-02-08 03:53:16.998851 | orchestrator | skipping: [testbed-node-2] 2026-02-08 03:53:16.998862 | orchestrator | 2026-02-08 03:53:16.998872 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-02-08 03:53:16.998881 | orchestrator | Sunday 08 February 2026 03:52:53 +0000 (0:00:00.924) 0:08:24.627 ******* 2026-02-08 03:53:16.998891 | orchestrator | ok: [testbed-node-3] 2026-02-08 03:53:16.998902 | orchestrator | ok: [testbed-node-4] 2026-02-08 03:53:16.998912 | orchestrator | ok: [testbed-node-5] 2026-02-08 03:53:16.998923 | orchestrator | skipping: [testbed-node-0] 
2026-02-08 03:53:16.998933 | orchestrator | skipping: [testbed-node-1] 2026-02-08 03:53:16.998943 | orchestrator | skipping: [testbed-node-2] 2026-02-08 03:53:16.998962 | orchestrator | 2026-02-08 03:53:16.998969 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-02-08 03:53:16.998975 | orchestrator | Sunday 08 February 2026 03:52:54 +0000 (0:00:00.774) 0:08:25.402 ******* 2026-02-08 03:53:16.998981 | orchestrator | skipping: [testbed-node-3] 2026-02-08 03:53:16.998987 | orchestrator | skipping: [testbed-node-4] 2026-02-08 03:53:16.998993 | orchestrator | skipping: [testbed-node-5] 2026-02-08 03:53:16.998999 | orchestrator | skipping: [testbed-node-0] 2026-02-08 03:53:16.999005 | orchestrator | skipping: [testbed-node-1] 2026-02-08 03:53:16.999011 | orchestrator | skipping: [testbed-node-2] 2026-02-08 03:53:16.999017 | orchestrator | 2026-02-08 03:53:16.999023 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-02-08 03:53:16.999029 | orchestrator | Sunday 08 February 2026 03:52:55 +0000 (0:00:00.960) 0:08:26.363 ******* 2026-02-08 03:53:16.999035 | orchestrator | skipping: [testbed-node-3] 2026-02-08 03:53:16.999041 | orchestrator | skipping: [testbed-node-4] 2026-02-08 03:53:16.999047 | orchestrator | skipping: [testbed-node-5] 2026-02-08 03:53:16.999054 | orchestrator | skipping: [testbed-node-0] 2026-02-08 03:53:16.999060 | orchestrator | skipping: [testbed-node-1] 2026-02-08 03:53:16.999066 | orchestrator | skipping: [testbed-node-2] 2026-02-08 03:53:16.999072 | orchestrator | 2026-02-08 03:53:16.999100 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-02-08 03:53:16.999108 | orchestrator | Sunday 08 February 2026 03:52:55 +0000 (0:00:00.633) 0:08:26.996 ******* 2026-02-08 03:53:16.999114 | orchestrator | skipping: [testbed-node-3] 2026-02-08 03:53:16.999120 | orchestrator | skipping: [testbed-node-4] 
2026-02-08 03:53:16.999126 | orchestrator | skipping: [testbed-node-5] 2026-02-08 03:53:16.999133 | orchestrator | ok: [testbed-node-0] 2026-02-08 03:53:16.999139 | orchestrator | ok: [testbed-node-1] 2026-02-08 03:53:16.999145 | orchestrator | ok: [testbed-node-2] 2026-02-08 03:53:16.999151 | orchestrator | 2026-02-08 03:53:16.999158 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-02-08 03:53:16.999164 | orchestrator | Sunday 08 February 2026 03:52:56 +0000 (0:00:00.955) 0:08:27.952 ******* 2026-02-08 03:53:16.999170 | orchestrator | ok: [testbed-node-3] 2026-02-08 03:53:16.999176 | orchestrator | ok: [testbed-node-4] 2026-02-08 03:53:16.999182 | orchestrator | ok: [testbed-node-5] 2026-02-08 03:53:16.999188 | orchestrator | ok: [testbed-node-0] 2026-02-08 03:53:16.999194 | orchestrator | ok: [testbed-node-1] 2026-02-08 03:53:16.999200 | orchestrator | ok: [testbed-node-2] 2026-02-08 03:53:16.999206 | orchestrator | 2026-02-08 03:53:16.999213 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-02-08 03:53:16.999219 | orchestrator | Sunday 08 February 2026 03:52:57 +0000 (0:00:00.684) 0:08:28.637 ******* 2026-02-08 03:53:16.999225 | orchestrator | ok: [testbed-node-3] 2026-02-08 03:53:16.999231 | orchestrator | ok: [testbed-node-4] 2026-02-08 03:53:16.999237 | orchestrator | ok: [testbed-node-5] 2026-02-08 03:53:16.999244 | orchestrator | ok: [testbed-node-0] 2026-02-08 03:53:16.999250 | orchestrator | ok: [testbed-node-1] 2026-02-08 03:53:16.999256 | orchestrator | ok: [testbed-node-2] 2026-02-08 03:53:16.999262 | orchestrator | 2026-02-08 03:53:16.999268 | orchestrator | TASK [ceph-crash : Create client.crash keyring] ******************************** 2026-02-08 03:53:16.999274 | orchestrator | Sunday 08 February 2026 03:52:59 +0000 (0:00:01.454) 0:08:30.092 ******* 2026-02-08 03:53:16.999281 | orchestrator | changed: [testbed-node-3 -> 
testbed-node-0(192.168.16.10)] 2026-02-08 03:53:16.999287 | orchestrator | 2026-02-08 03:53:16.999293 | orchestrator | TASK [ceph-crash : Get keys from monitors] ************************************* 2026-02-08 03:53:16.999300 | orchestrator | Sunday 08 February 2026 03:53:02 +0000 (0:00:03.850) 0:08:33.943 ******* 2026-02-08 03:53:16.999306 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-02-08 03:53:16.999312 | orchestrator | 2026-02-08 03:53:16.999339 | orchestrator | TASK [ceph-crash : Copy ceph key(s) if needed] ********************************* 2026-02-08 03:53:16.999347 | orchestrator | Sunday 08 February 2026 03:53:05 +0000 (0:00:02.373) 0:08:36.317 ******* 2026-02-08 03:53:16.999358 | orchestrator | changed: [testbed-node-3] 2026-02-08 03:53:16.999364 | orchestrator | changed: [testbed-node-4] 2026-02-08 03:53:16.999370 | orchestrator | changed: [testbed-node-5] 2026-02-08 03:53:16.999376 | orchestrator | ok: [testbed-node-0] 2026-02-08 03:53:16.999382 | orchestrator | changed: [testbed-node-1] 2026-02-08 03:53:16.999389 | orchestrator | changed: [testbed-node-2] 2026-02-08 03:53:16.999395 | orchestrator | 2026-02-08 03:53:16.999401 | orchestrator | TASK [ceph-crash : Create /var/lib/ceph/crash/posted] ************************** 2026-02-08 03:53:16.999407 | orchestrator | Sunday 08 February 2026 03:53:07 +0000 (0:00:01.794) 0:08:38.111 ******* 2026-02-08 03:53:16.999413 | orchestrator | changed: [testbed-node-3] 2026-02-08 03:53:16.999419 | orchestrator | changed: [testbed-node-4] 2026-02-08 03:53:16.999425 | orchestrator | changed: [testbed-node-5] 2026-02-08 03:53:16.999432 | orchestrator | changed: [testbed-node-0] 2026-02-08 03:53:16.999437 | orchestrator | changed: [testbed-node-1] 2026-02-08 03:53:16.999444 | orchestrator | changed: [testbed-node-2] 2026-02-08 03:53:16.999450 | orchestrator | 2026-02-08 03:53:16.999456 | orchestrator | TASK [ceph-crash : Include_tasks systemd.yml] ********************************** 
2026-02-08 03:53:16.999462 | orchestrator | Sunday 08 February 2026 03:53:08 +0000 (0:00:01.269) 0:08:39.380 ******* 2026-02-08 03:53:16.999469 | orchestrator | included: /ansible/roles/ceph-crash/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-08 03:53:16.999477 | orchestrator | 2026-02-08 03:53:16.999483 | orchestrator | TASK [ceph-crash : Generate systemd unit file for ceph-crash container] ******** 2026-02-08 03:53:16.999489 | orchestrator | Sunday 08 February 2026 03:53:09 +0000 (0:00:01.348) 0:08:40.729 ******* 2026-02-08 03:53:16.999496 | orchestrator | changed: [testbed-node-3] 2026-02-08 03:53:16.999502 | orchestrator | changed: [testbed-node-4] 2026-02-08 03:53:16.999508 | orchestrator | changed: [testbed-node-5] 2026-02-08 03:53:16.999514 | orchestrator | changed: [testbed-node-0] 2026-02-08 03:53:16.999520 | orchestrator | changed: [testbed-node-1] 2026-02-08 03:53:16.999526 | orchestrator | changed: [testbed-node-2] 2026-02-08 03:53:16.999532 | orchestrator | 2026-02-08 03:53:16.999538 | orchestrator | TASK [ceph-crash : Start the ceph-crash service] ******************************* 2026-02-08 03:53:16.999544 | orchestrator | Sunday 08 February 2026 03:53:11 +0000 (0:00:01.496) 0:08:42.225 ******* 2026-02-08 03:53:16.999550 | orchestrator | changed: [testbed-node-3] 2026-02-08 03:53:16.999556 | orchestrator | changed: [testbed-node-4] 2026-02-08 03:53:16.999562 | orchestrator | changed: [testbed-node-5] 2026-02-08 03:53:16.999568 | orchestrator | changed: [testbed-node-0] 2026-02-08 03:53:16.999574 | orchestrator | changed: [testbed-node-1] 2026-02-08 03:53:16.999580 | orchestrator | changed: [testbed-node-2] 2026-02-08 03:53:16.999586 | orchestrator | 2026-02-08 03:53:16.999592 | orchestrator | RUNNING HANDLER [ceph-handler : Ceph crash handler] **************************** 2026-02-08 03:53:16.999599 | orchestrator | Sunday 08 February 2026 03:53:14 +0000 (0:00:03.727) 
0:08:45.953 ******* 2026-02-08 03:53:16.999605 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_crash.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-08 03:53:16.999611 | orchestrator | 2026-02-08 03:53:16.999617 | orchestrator | RUNNING HANDLER [ceph-handler : Set _crash_handler_called before restart] ****** 2026-02-08 03:53:16.999623 | orchestrator | Sunday 08 February 2026 03:53:16 +0000 (0:00:01.346) 0:08:47.300 ******* 2026-02-08 03:53:16.999663 | orchestrator | ok: [testbed-node-3] 2026-02-08 03:53:16.999671 | orchestrator | ok: [testbed-node-4] 2026-02-08 03:53:16.999677 | orchestrator | ok: [testbed-node-5] 2026-02-08 03:53:16.999683 | orchestrator | ok: [testbed-node-0] 2026-02-08 03:53:16.999689 | orchestrator | ok: [testbed-node-1] 2026-02-08 03:53:16.999695 | orchestrator | ok: [testbed-node-2] 2026-02-08 03:53:16.999701 | orchestrator | 2026-02-08 03:53:16.999712 | orchestrator | RUNNING HANDLER [ceph-handler : Restart the ceph-crash service] **************** 2026-02-08 03:53:44.363838 | orchestrator | Sunday 08 February 2026 03:53:16 +0000 (0:00:00.673) 0:08:47.973 ******* 2026-02-08 03:53:44.363943 | orchestrator | changed: [testbed-node-3] 2026-02-08 03:53:44.363954 | orchestrator | changed: [testbed-node-4] 2026-02-08 03:53:44.363960 | orchestrator | changed: [testbed-node-5] 2026-02-08 03:53:44.363967 | orchestrator | changed: [testbed-node-1] 2026-02-08 03:53:44.363973 | orchestrator | changed: [testbed-node-2] 2026-02-08 03:53:44.363979 | orchestrator | changed: [testbed-node-0] 2026-02-08 03:53:44.363985 | orchestrator | 2026-02-08 03:53:44.363992 | orchestrator | RUNNING HANDLER [ceph-handler : Set _crash_handler_called after restart] ******* 2026-02-08 03:53:44.363998 | orchestrator | Sunday 08 February 2026 03:53:20 +0000 (0:00:03.318) 0:08:51.292 ******* 2026-02-08 03:53:44.364004 | orchestrator | ok: [testbed-node-3] 2026-02-08 03:53:44.364011 | orchestrator 
| ok: [testbed-node-4] 2026-02-08 03:53:44.364017 | orchestrator | ok: [testbed-node-5] 2026-02-08 03:53:44.364023 | orchestrator | ok: [testbed-node-0] 2026-02-08 03:53:44.364029 | orchestrator | ok: [testbed-node-1] 2026-02-08 03:53:44.364035 | orchestrator | ok: [testbed-node-2] 2026-02-08 03:53:44.364041 | orchestrator | 2026-02-08 03:53:44.364048 | orchestrator | PLAY [Apply role ceph-mds] ***************************************************** 2026-02-08 03:53:44.364054 | orchestrator | 2026-02-08 03:53:44.364060 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-02-08 03:53:44.364066 | orchestrator | Sunday 08 February 2026 03:53:21 +0000 (0:00:00.946) 0:08:52.238 ******* 2026-02-08 03:53:44.364073 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-08 03:53:44.364081 | orchestrator | 2026-02-08 03:53:44.364087 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-02-08 03:53:44.364093 | orchestrator | Sunday 08 February 2026 03:53:22 +0000 (0:00:00.876) 0:08:53.115 ******* 2026-02-08 03:53:44.364099 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-08 03:53:44.364106 | orchestrator | 2026-02-08 03:53:44.364112 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-02-08 03:53:44.364118 | orchestrator | Sunday 08 February 2026 03:53:22 +0000 (0:00:00.568) 0:08:53.684 ******* 2026-02-08 03:53:44.364124 | orchestrator | skipping: [testbed-node-3] 2026-02-08 03:53:44.364130 | orchestrator | skipping: [testbed-node-4] 2026-02-08 03:53:44.364136 | orchestrator | skipping: [testbed-node-5] 2026-02-08 03:53:44.364142 | orchestrator | 2026-02-08 03:53:44.364149 | orchestrator | TASK [ceph-handler : Check for an osd container] 
******************************* 2026-02-08 03:53:44.364155 | orchestrator | Sunday 08 February 2026 03:53:23 +0000 (0:00:00.646) 0:08:54.330 ******* 2026-02-08 03:53:44.364161 | orchestrator | ok: [testbed-node-3] 2026-02-08 03:53:44.364167 | orchestrator | ok: [testbed-node-4] 2026-02-08 03:53:44.364173 | orchestrator | ok: [testbed-node-5] 2026-02-08 03:53:44.364179 | orchestrator | 2026-02-08 03:53:44.364185 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-02-08 03:53:44.364193 | orchestrator | Sunday 08 February 2026 03:53:24 +0000 (0:00:00.728) 0:08:55.058 ******* 2026-02-08 03:53:44.364203 | orchestrator | ok: [testbed-node-3] 2026-02-08 03:53:44.364212 | orchestrator | ok: [testbed-node-4] 2026-02-08 03:53:44.364221 | orchestrator | ok: [testbed-node-5] 2026-02-08 03:53:44.364231 | orchestrator | 2026-02-08 03:53:44.364241 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-02-08 03:53:44.364251 | orchestrator | Sunday 08 February 2026 03:53:24 +0000 (0:00:00.764) 0:08:55.823 ******* 2026-02-08 03:53:44.364261 | orchestrator | ok: [testbed-node-3] 2026-02-08 03:53:44.364271 | orchestrator | ok: [testbed-node-4] 2026-02-08 03:53:44.364280 | orchestrator | ok: [testbed-node-5] 2026-02-08 03:53:44.364291 | orchestrator | 2026-02-08 03:53:44.364302 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-02-08 03:53:44.364313 | orchestrator | Sunday 08 February 2026 03:53:25 +0000 (0:00:01.075) 0:08:56.898 ******* 2026-02-08 03:53:44.364322 | orchestrator | skipping: [testbed-node-3] 2026-02-08 03:53:44.364383 | orchestrator | skipping: [testbed-node-4] 2026-02-08 03:53:44.364392 | orchestrator | skipping: [testbed-node-5] 2026-02-08 03:53:44.364400 | orchestrator | 2026-02-08 03:53:44.364407 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-02-08 
03:53:44.364414 | orchestrator | Sunday 08 February 2026 03:53:26 +0000 (0:00:00.362) 0:08:57.261 ******* 2026-02-08 03:53:44.364421 | orchestrator | skipping: [testbed-node-3] 2026-02-08 03:53:44.364429 | orchestrator | skipping: [testbed-node-4] 2026-02-08 03:53:44.364436 | orchestrator | skipping: [testbed-node-5] 2026-02-08 03:53:44.364443 | orchestrator | 2026-02-08 03:53:44.364451 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-02-08 03:53:44.364458 | orchestrator | Sunday 08 February 2026 03:53:26 +0000 (0:00:00.338) 0:08:57.600 ******* 2026-02-08 03:53:44.364465 | orchestrator | skipping: [testbed-node-3] 2026-02-08 03:53:44.364472 | orchestrator | skipping: [testbed-node-4] 2026-02-08 03:53:44.364478 | orchestrator | skipping: [testbed-node-5] 2026-02-08 03:53:44.364486 | orchestrator | 2026-02-08 03:53:44.364493 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-02-08 03:53:44.364500 | orchestrator | Sunday 08 February 2026 03:53:26 +0000 (0:00:00.338) 0:08:57.938 ******* 2026-02-08 03:53:44.364507 | orchestrator | ok: [testbed-node-3] 2026-02-08 03:53:44.364514 | orchestrator | ok: [testbed-node-4] 2026-02-08 03:53:44.364521 | orchestrator | ok: [testbed-node-5] 2026-02-08 03:53:44.364529 | orchestrator | 2026-02-08 03:53:44.364536 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-02-08 03:53:44.364544 | orchestrator | Sunday 08 February 2026 03:53:27 +0000 (0:00:01.041) 0:08:58.980 ******* 2026-02-08 03:53:44.364551 | orchestrator | ok: [testbed-node-3] 2026-02-08 03:53:44.364558 | orchestrator | ok: [testbed-node-4] 2026-02-08 03:53:44.364565 | orchestrator | ok: [testbed-node-5] 2026-02-08 03:53:44.364572 | orchestrator | 2026-02-08 03:53:44.364580 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-02-08 03:53:44.364587 | orchestrator | Sunday 
08 February 2026 03:53:28 +0000 (0:00:00.729) 0:08:59.709 ******* 2026-02-08 03:53:44.364594 | orchestrator | skipping: [testbed-node-3] 2026-02-08 03:53:44.364601 | orchestrator | skipping: [testbed-node-4] 2026-02-08 03:53:44.364609 | orchestrator | skipping: [testbed-node-5] 2026-02-08 03:53:44.364616 | orchestrator | 2026-02-08 03:53:44.364623 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-02-08 03:53:44.364653 | orchestrator | Sunday 08 February 2026 03:53:29 +0000 (0:00:00.337) 0:09:00.047 ******* 2026-02-08 03:53:44.364660 | orchestrator | skipping: [testbed-node-3] 2026-02-08 03:53:44.364667 | orchestrator | skipping: [testbed-node-4] 2026-02-08 03:53:44.364674 | orchestrator | skipping: [testbed-node-5] 2026-02-08 03:53:44.364681 | orchestrator | 2026-02-08 03:53:44.364689 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-02-08 03:53:44.364696 | orchestrator | Sunday 08 February 2026 03:53:29 +0000 (0:00:00.344) 0:09:00.391 ******* 2026-02-08 03:53:44.364703 | orchestrator | ok: [testbed-node-3] 2026-02-08 03:53:44.364710 | orchestrator | ok: [testbed-node-4] 2026-02-08 03:53:44.364717 | orchestrator | ok: [testbed-node-5] 2026-02-08 03:53:44.364725 | orchestrator | 2026-02-08 03:53:44.364732 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-02-08 03:53:44.364739 | orchestrator | Sunday 08 February 2026 03:53:30 +0000 (0:00:00.686) 0:09:01.077 ******* 2026-02-08 03:53:44.364746 | orchestrator | ok: [testbed-node-3] 2026-02-08 03:53:44.364754 | orchestrator | ok: [testbed-node-4] 2026-02-08 03:53:44.364761 | orchestrator | ok: [testbed-node-5] 2026-02-08 03:53:44.364768 | orchestrator | 2026-02-08 03:53:44.364774 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-02-08 03:53:44.364780 | orchestrator | Sunday 08 February 2026 03:53:30 +0000 
(0:00:00.397) 0:09:01.475 ******* 2026-02-08 03:53:44.364787 | orchestrator | ok: [testbed-node-3] 2026-02-08 03:53:44.364793 | orchestrator | ok: [testbed-node-4] 2026-02-08 03:53:44.364799 | orchestrator | ok: [testbed-node-5] 2026-02-08 03:53:44.364809 | orchestrator | 2026-02-08 03:53:44.364816 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-02-08 03:53:44.364822 | orchestrator | Sunday 08 February 2026 03:53:30 +0000 (0:00:00.331) 0:09:01.806 ******* 2026-02-08 03:53:44.364828 | orchestrator | skipping: [testbed-node-3] 2026-02-08 03:53:44.364834 | orchestrator | skipping: [testbed-node-4] 2026-02-08 03:53:44.364841 | orchestrator | skipping: [testbed-node-5] 2026-02-08 03:53:44.364847 | orchestrator | 2026-02-08 03:53:44.364853 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-02-08 03:53:44.364859 | orchestrator | Sunday 08 February 2026 03:53:31 +0000 (0:00:00.316) 0:09:02.122 ******* 2026-02-08 03:53:44.364865 | orchestrator | skipping: [testbed-node-3] 2026-02-08 03:53:44.364871 | orchestrator | skipping: [testbed-node-4] 2026-02-08 03:53:44.364878 | orchestrator | skipping: [testbed-node-5] 2026-02-08 03:53:44.364884 | orchestrator | 2026-02-08 03:53:44.364890 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-02-08 03:53:44.364896 | orchestrator | Sunday 08 February 2026 03:53:31 +0000 (0:00:00.490) 0:09:02.613 ******* 2026-02-08 03:53:44.364902 | orchestrator | skipping: [testbed-node-3] 2026-02-08 03:53:44.364908 | orchestrator | skipping: [testbed-node-4] 2026-02-08 03:53:44.364914 | orchestrator | skipping: [testbed-node-5] 2026-02-08 03:53:44.364920 | orchestrator | 2026-02-08 03:53:44.364926 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-02-08 03:53:44.364932 | orchestrator | Sunday 08 February 2026 03:53:31 +0000 (0:00:00.301) 
0:09:02.915 ******* 2026-02-08 03:53:44.364938 | orchestrator | ok: [testbed-node-3] 2026-02-08 03:53:44.364945 | orchestrator | ok: [testbed-node-4] 2026-02-08 03:53:44.364951 | orchestrator | ok: [testbed-node-5] 2026-02-08 03:53:44.364957 | orchestrator | 2026-02-08 03:53:44.364963 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-02-08 03:53:44.364969 | orchestrator | Sunday 08 February 2026 03:53:32 +0000 (0:00:00.362) 0:09:03.277 ******* 2026-02-08 03:53:44.364975 | orchestrator | ok: [testbed-node-3] 2026-02-08 03:53:44.364981 | orchestrator | ok: [testbed-node-4] 2026-02-08 03:53:44.364987 | orchestrator | ok: [testbed-node-5] 2026-02-08 03:53:44.364993 | orchestrator | 2026-02-08 03:53:44.365000 | orchestrator | TASK [ceph-mds : Include create_mds_filesystems.yml] *************************** 2026-02-08 03:53:44.365006 | orchestrator | Sunday 08 February 2026 03:53:33 +0000 (0:00:00.801) 0:09:04.078 ******* 2026-02-08 03:53:44.365012 | orchestrator | skipping: [testbed-node-4] 2026-02-08 03:53:44.365018 | orchestrator | skipping: [testbed-node-5] 2026-02-08 03:53:44.365024 | orchestrator | included: /ansible/roles/ceph-mds/tasks/create_mds_filesystems.yml for testbed-node-3 2026-02-08 03:53:44.365031 | orchestrator | 2026-02-08 03:53:44.365037 | orchestrator | TASK [ceph-facts : Get current default crush rule details] ********************* 2026-02-08 03:53:44.365043 | orchestrator | Sunday 08 February 2026 03:53:33 +0000 (0:00:00.395) 0:09:04.473 ******* 2026-02-08 03:53:44.365050 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-02-08 03:53:44.365056 | orchestrator | 2026-02-08 03:53:44.365062 | orchestrator | TASK [ceph-facts : Get current default crush rule name] ************************ 2026-02-08 03:53:44.365068 | orchestrator | Sunday 08 February 2026 03:53:35 +0000 (0:00:02.013) 0:09:06.487 ******* 2026-02-08 03:53:44.365076 | orchestrator | skipping: [testbed-node-3] => 
(item={'rule_id': 0, 'rule_name': 'replicated_rule', 'type': 1, 'steps': [{'op': 'take', 'item': -1, 'item_name': 'default'}, {'op': 'chooseleaf_firstn', 'num': 0, 'type': 'host'}, {'op': 'emit'}]})  2026-02-08 03:53:44.365085 | orchestrator | skipping: [testbed-node-3] 2026-02-08 03:53:44.365091 | orchestrator | 2026-02-08 03:53:44.365097 | orchestrator | TASK [ceph-mds : Create filesystem pools] ************************************** 2026-02-08 03:53:44.365103 | orchestrator | Sunday 08 February 2026 03:53:35 +0000 (0:00:00.252) 0:09:06.739 ******* 2026-02-08 03:53:44.365111 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_data', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-02-08 03:53:44.365127 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_metadata', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-02-08 03:53:44.365134 | orchestrator | 2026-02-08 03:53:44.365145 | orchestrator | TASK [ceph-mds : Create ceph filesystem] *************************************** 2026-02-08 03:54:16.008433 | orchestrator | Sunday 08 February 2026 03:53:44 +0000 (0:00:08.604) 0:09:15.343 ******* 2026-02-08 03:54:16.008586 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-02-08 03:54:16.008615 | orchestrator | 2026-02-08 03:54:16.008638 | orchestrator | TASK [ceph-mds : Include common.yml] ******************************************* 2026-02-08 03:54:16.008688 | orchestrator | Sunday 08 February 2026 03:53:47 +0000 (0:00:03.558) 0:09:18.901 ******* 2026-02-08 03:54:16.008710 | orchestrator | included: /ansible/roles/ceph-mds/tasks/common.yml for testbed-node-3, testbed-node-4, 
testbed-node-5 2026-02-08 03:54:16.008752 | orchestrator | 2026-02-08 03:54:16.008783 | orchestrator | TASK [ceph-mds : Create bootstrap-mds and mds directories] ********************* 2026-02-08 03:54:16.008795 | orchestrator | Sunday 08 February 2026 03:53:48 +0000 (0:00:00.838) 0:09:19.740 ******* 2026-02-08 03:54:16.008806 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds/) 2026-02-08 03:54:16.008817 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds/) 2026-02-08 03:54:16.008829 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds/) 2026-02-08 03:54:16.008842 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds/ceph-testbed-node-3) 2026-02-08 03:54:16.008855 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds/ceph-testbed-node-4) 2026-02-08 03:54:16.008872 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds/ceph-testbed-node-5) 2026-02-08 03:54:16.008891 | orchestrator | 2026-02-08 03:54:16.008910 | orchestrator | TASK [ceph-mds : Get keys from monitors] *************************************** 2026-02-08 03:54:16.008931 | orchestrator | Sunday 08 February 2026 03:53:49 +0000 (0:00:01.041) 0:09:20.782 ******* 2026-02-08 03:54:16.008951 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-08 03:54:16.008970 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-02-08 03:54:16.009011 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2026-02-08 03:54:16.009031 | orchestrator | 2026-02-08 03:54:16.009051 | orchestrator | TASK [ceph-mds : Copy ceph key(s) if needed] *********************************** 2026-02-08 03:54:16.009070 | orchestrator | Sunday 08 February 2026 03:53:51 +0000 (0:00:02.010) 0:09:22.793 ******* 2026-02-08 03:54:16.009089 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-02-08 03:54:16.009128 | orchestrator | changed: [testbed-node-4] 
=> (item=None) 2026-02-08 03:54:16.009147 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-02-08 03:54:16.009166 | orchestrator | changed: [testbed-node-3] 2026-02-08 03:54:16.009185 | orchestrator | skipping: [testbed-node-4] => (item=None)  2026-02-08 03:54:16.009206 | orchestrator | changed: [testbed-node-4] 2026-02-08 03:54:16.009225 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-02-08 03:54:16.009264 | orchestrator | skipping: [testbed-node-5] => (item=None)  2026-02-08 03:54:16.009284 | orchestrator | changed: [testbed-node-5] 2026-02-08 03:54:16.009303 | orchestrator | 2026-02-08 03:54:16.009339 | orchestrator | TASK [ceph-mds : Create mds keyring] ******************************************* 2026-02-08 03:54:16.009386 | orchestrator | Sunday 08 February 2026 03:53:53 +0000 (0:00:01.207) 0:09:24.000 ******* 2026-02-08 03:54:16.009406 | orchestrator | changed: [testbed-node-3] 2026-02-08 03:54:16.009425 | orchestrator | changed: [testbed-node-4] 2026-02-08 03:54:16.009444 | orchestrator | changed: [testbed-node-5] 2026-02-08 03:54:16.009462 | orchestrator | 2026-02-08 03:54:16.009480 | orchestrator | TASK [ceph-mds : Non_containerized.yml] **************************************** 2026-02-08 03:54:16.009536 | orchestrator | Sunday 08 February 2026 03:53:55 +0000 (0:00:02.978) 0:09:26.979 ******* 2026-02-08 03:54:16.009556 | orchestrator | skipping: [testbed-node-3] 2026-02-08 03:54:16.009575 | orchestrator | skipping: [testbed-node-4] 2026-02-08 03:54:16.009593 | orchestrator | skipping: [testbed-node-5] 2026-02-08 03:54:16.009629 | orchestrator | 2026-02-08 03:54:16.009641 | orchestrator | TASK [ceph-mds : Containerized.yml] ******************************************** 2026-02-08 03:54:16.009652 | orchestrator | Sunday 08 February 2026 03:53:56 +0000 (0:00:00.339) 0:09:27.319 ******* 2026-02-08 03:54:16.009663 | orchestrator | included: /ansible/roles/ceph-mds/tasks/containerized.yml for testbed-node-3, testbed-node-4, 
testbed-node-5 2026-02-08 03:54:16.009677 | orchestrator | 2026-02-08 03:54:16.009696 | orchestrator | TASK [ceph-mds : Include_tasks systemd.yml] ************************************ 2026-02-08 03:54:16.009727 | orchestrator | Sunday 08 February 2026 03:53:57 +0000 (0:00:00.835) 0:09:28.155 ******* 2026-02-08 03:54:16.009745 | orchestrator | included: /ansible/roles/ceph-mds/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-08 03:54:16.009763 | orchestrator | 2026-02-08 03:54:16.009780 | orchestrator | TASK [ceph-mds : Generate systemd unit file] *********************************** 2026-02-08 03:54:16.009813 | orchestrator | Sunday 08 February 2026 03:53:57 +0000 (0:00:00.570) 0:09:28.725 ******* 2026-02-08 03:54:16.009832 | orchestrator | changed: [testbed-node-3] 2026-02-08 03:54:16.009849 | orchestrator | changed: [testbed-node-4] 2026-02-08 03:54:16.009867 | orchestrator | changed: [testbed-node-5] 2026-02-08 03:54:16.009886 | orchestrator | 2026-02-08 03:54:16.009905 | orchestrator | TASK [ceph-mds : Generate systemd ceph-mds target file] ************************ 2026-02-08 03:54:16.009923 | orchestrator | Sunday 08 February 2026 03:53:58 +0000 (0:00:01.268) 0:09:29.994 ******* 2026-02-08 03:54:16.009960 | orchestrator | changed: [testbed-node-3] 2026-02-08 03:54:16.009972 | orchestrator | changed: [testbed-node-4] 2026-02-08 03:54:16.009982 | orchestrator | changed: [testbed-node-5] 2026-02-08 03:54:16.010005 | orchestrator | 2026-02-08 03:54:16.010085 | orchestrator | TASK [ceph-mds : Enable ceph-mds.target] *************************************** 2026-02-08 03:54:16.010101 | orchestrator | Sunday 08 February 2026 03:54:00 +0000 (0:00:01.517) 0:09:31.512 ******* 2026-02-08 03:54:16.010117 | orchestrator | changed: [testbed-node-3] 2026-02-08 03:54:16.010135 | orchestrator | changed: [testbed-node-4] 2026-02-08 03:54:16.010153 | orchestrator | changed: [testbed-node-5] 2026-02-08 03:54:16.010185 | orchestrator | 2026-02-08 
03:54:16.010203 | orchestrator | TASK [ceph-mds : Systemd start mds container] ********************************** 2026-02-08 03:54:16.010265 | orchestrator | Sunday 08 February 2026 03:54:02 +0000 (0:00:01.869) 0:09:33.381 ******* 2026-02-08 03:54:16.010287 | orchestrator | changed: [testbed-node-3] 2026-02-08 03:54:16.010306 | orchestrator | changed: [testbed-node-4] 2026-02-08 03:54:16.010325 | orchestrator | changed: [testbed-node-5] 2026-02-08 03:54:16.010337 | orchestrator | 2026-02-08 03:54:16.010375 | orchestrator | TASK [ceph-mds : Wait for mds socket to exist] ********************************* 2026-02-08 03:54:16.010392 | orchestrator | Sunday 08 February 2026 03:54:04 +0000 (0:00:02.063) 0:09:35.445 ******* 2026-02-08 03:54:16.010403 | orchestrator | ok: [testbed-node-3] 2026-02-08 03:54:16.010415 | orchestrator | ok: [testbed-node-4] 2026-02-08 03:54:16.010425 | orchestrator | ok: [testbed-node-5] 2026-02-08 03:54:16.010451 | orchestrator | 2026-02-08 03:54:16.010462 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-02-08 03:54:16.010473 | orchestrator | Sunday 08 February 2026 03:54:06 +0000 (0:00:01.679) 0:09:37.124 ******* 2026-02-08 03:54:16.010484 | orchestrator | changed: [testbed-node-3] 2026-02-08 03:54:16.010494 | orchestrator | changed: [testbed-node-4] 2026-02-08 03:54:16.010505 | orchestrator | changed: [testbed-node-5] 2026-02-08 03:54:16.010516 | orchestrator | 2026-02-08 03:54:16.010527 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] ********************************** 2026-02-08 03:54:16.010537 | orchestrator | Sunday 08 February 2026 03:54:06 +0000 (0:00:00.768) 0:09:37.892 ******* 2026-02-08 03:54:16.010548 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-08 03:54:16.010573 | orchestrator | 2026-02-08 03:54:16.010584 | orchestrator | RUNNING HANDLER [ceph-handler : Set 
_mds_handler_called before restart] ******** 2026-02-08 03:54:16.010595 | orchestrator | Sunday 08 February 2026 03:54:07 +0000 (0:00:00.849) 0:09:38.742 ******* 2026-02-08 03:54:16.010606 | orchestrator | ok: [testbed-node-3] 2026-02-08 03:54:16.010616 | orchestrator | ok: [testbed-node-4] 2026-02-08 03:54:16.010627 | orchestrator | ok: [testbed-node-5] 2026-02-08 03:54:16.010638 | orchestrator | 2026-02-08 03:54:16.010648 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mds restart script] *********************** 2026-02-08 03:54:16.010659 | orchestrator | Sunday 08 February 2026 03:54:08 +0000 (0:00:00.425) 0:09:39.167 ******* 2026-02-08 03:54:16.010669 | orchestrator | changed: [testbed-node-3] 2026-02-08 03:54:16.010680 | orchestrator | changed: [testbed-node-4] 2026-02-08 03:54:16.010704 | orchestrator | changed: [testbed-node-5] 2026-02-08 03:54:16.010726 | orchestrator | 2026-02-08 03:54:16.010736 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ******************** 2026-02-08 03:54:16.010747 | orchestrator | Sunday 08 February 2026 03:54:09 +0000 (0:00:01.212) 0:09:40.379 ******* 2026-02-08 03:54:16.010758 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-02-08 03:54:16.010769 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-02-08 03:54:16.010780 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-02-08 03:54:16.010790 | orchestrator | skipping: [testbed-node-3] 2026-02-08 03:54:16.010801 | orchestrator | 2026-02-08 03:54:16.010812 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called after restart] ********* 2026-02-08 03:54:16.010823 | orchestrator | Sunday 08 February 2026 03:54:10 +0000 (0:00:00.948) 0:09:41.328 ******* 2026-02-08 03:54:16.010833 | orchestrator | ok: [testbed-node-3] 2026-02-08 03:54:16.010844 | orchestrator | ok: [testbed-node-4] 2026-02-08 03:54:16.010855 | orchestrator | ok: [testbed-node-5] 2026-02-08 
03:54:16.010885 | orchestrator | 2026-02-08 03:54:16.010905 | orchestrator | PLAY [Apply role ceph-rgw] ***************************************************** 2026-02-08 03:54:16.010923 | orchestrator | 2026-02-08 03:54:16.010941 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-02-08 03:54:16.010959 | orchestrator | Sunday 08 February 2026 03:54:11 +0000 (0:00:00.915) 0:09:42.244 ******* 2026-02-08 03:54:16.010976 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-08 03:54:16.010995 | orchestrator | 2026-02-08 03:54:16.011012 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-02-08 03:54:16.011029 | orchestrator | Sunday 08 February 2026 03:54:11 +0000 (0:00:00.573) 0:09:42.817 ******* 2026-02-08 03:54:16.011046 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-08 03:54:16.011063 | orchestrator | 2026-02-08 03:54:16.011080 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-02-08 03:54:16.011097 | orchestrator | Sunday 08 February 2026 03:54:12 +0000 (0:00:00.894) 0:09:43.712 ******* 2026-02-08 03:54:16.011113 | orchestrator | skipping: [testbed-node-3] 2026-02-08 03:54:16.011131 | orchestrator | skipping: [testbed-node-4] 2026-02-08 03:54:16.011149 | orchestrator | skipping: [testbed-node-5] 2026-02-08 03:54:16.011188 | orchestrator | 2026-02-08 03:54:16.011206 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-02-08 03:54:16.011223 | orchestrator | Sunday 08 February 2026 03:54:13 +0000 (0:00:00.371) 0:09:44.083 ******* 2026-02-08 03:54:16.011240 | orchestrator | ok: [testbed-node-3] 2026-02-08 03:54:16.011256 | orchestrator | ok: [testbed-node-4] 2026-02-08 
03:54:16.011292 | orchestrator | ok: [testbed-node-5] 2026-02-08 03:54:16.011311 | orchestrator | 2026-02-08 03:54:16.011329 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-02-08 03:54:16.011392 | orchestrator | Sunday 08 February 2026 03:54:13 +0000 (0:00:00.690) 0:09:44.774 ******* 2026-02-08 03:54:16.011430 | orchestrator | ok: [testbed-node-3] 2026-02-08 03:54:16.011448 | orchestrator | ok: [testbed-node-4] 2026-02-08 03:54:16.011464 | orchestrator | ok: [testbed-node-5] 2026-02-08 03:54:16.011482 | orchestrator | 2026-02-08 03:54:16.011500 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-02-08 03:54:16.011517 | orchestrator | Sunday 08 February 2026 03:54:14 +0000 (0:00:01.068) 0:09:45.842 ******* 2026-02-08 03:54:16.011535 | orchestrator | ok: [testbed-node-3] 2026-02-08 03:54:16.011553 | orchestrator | ok: [testbed-node-4] 2026-02-08 03:54:16.011570 | orchestrator | ok: [testbed-node-5] 2026-02-08 03:54:16.011588 | orchestrator | 2026-02-08 03:54:16.011627 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-02-08 03:54:16.011646 | orchestrator | Sunday 08 February 2026 03:54:15 +0000 (0:00:00.769) 0:09:46.612 ******* 2026-02-08 03:54:16.011664 | orchestrator | skipping: [testbed-node-3] 2026-02-08 03:54:16.011700 | orchestrator | skipping: [testbed-node-4] 2026-02-08 03:54:37.685845 | orchestrator | skipping: [testbed-node-5] 2026-02-08 03:54:37.685944 | orchestrator | 2026-02-08 03:54:37.685955 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-02-08 03:54:37.685964 | orchestrator | Sunday 08 February 2026 03:54:15 +0000 (0:00:00.381) 0:09:46.993 ******* 2026-02-08 03:54:37.685970 | orchestrator | skipping: [testbed-node-3] 2026-02-08 03:54:37.685977 | orchestrator | skipping: [testbed-node-4] 2026-02-08 03:54:37.685984 | orchestrator | skipping: 
[testbed-node-5] 2026-02-08 03:54:37.685990 | orchestrator | 2026-02-08 03:54:37.685997 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-02-08 03:54:37.686004 | orchestrator | Sunday 08 February 2026 03:54:16 +0000 (0:00:00.337) 0:09:47.331 ******* 2026-02-08 03:54:37.686010 | orchestrator | skipping: [testbed-node-3] 2026-02-08 03:54:37.686072 | orchestrator | skipping: [testbed-node-4] 2026-02-08 03:54:37.686080 | orchestrator | skipping: [testbed-node-5] 2026-02-08 03:54:37.686086 | orchestrator | 2026-02-08 03:54:37.686093 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-02-08 03:54:37.686104 | orchestrator | Sunday 08 February 2026 03:54:16 +0000 (0:00:00.631) 0:09:47.963 ******* 2026-02-08 03:54:37.686118 | orchestrator | ok: [testbed-node-3] 2026-02-08 03:54:37.686131 | orchestrator | ok: [testbed-node-4] 2026-02-08 03:54:37.686144 | orchestrator | ok: [testbed-node-5] 2026-02-08 03:54:37.686156 | orchestrator | 2026-02-08 03:54:37.686167 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-02-08 03:54:37.686179 | orchestrator | Sunday 08 February 2026 03:54:17 +0000 (0:00:00.739) 0:09:48.702 ******* 2026-02-08 03:54:37.686192 | orchestrator | ok: [testbed-node-3] 2026-02-08 03:54:37.686203 | orchestrator | ok: [testbed-node-4] 2026-02-08 03:54:37.686212 | orchestrator | ok: [testbed-node-5] 2026-02-08 03:54:37.686219 | orchestrator | 2026-02-08 03:54:37.686225 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-02-08 03:54:37.686232 | orchestrator | Sunday 08 February 2026 03:54:18 +0000 (0:00:00.753) 0:09:49.456 ******* 2026-02-08 03:54:37.686238 | orchestrator | skipping: [testbed-node-3] 2026-02-08 03:54:37.686244 | orchestrator | skipping: [testbed-node-4] 2026-02-08 03:54:37.686252 | orchestrator | skipping: [testbed-node-5] 2026-02-08 
03:54:37.686263 | orchestrator | 2026-02-08 03:54:37.686274 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-02-08 03:54:37.686285 | orchestrator | Sunday 08 February 2026 03:54:18 +0000 (0:00:00.372) 0:09:49.828 ******* 2026-02-08 03:54:37.686297 | orchestrator | skipping: [testbed-node-3] 2026-02-08 03:54:37.686307 | orchestrator | skipping: [testbed-node-4] 2026-02-08 03:54:37.686313 | orchestrator | skipping: [testbed-node-5] 2026-02-08 03:54:37.686320 | orchestrator | 2026-02-08 03:54:37.686326 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-02-08 03:54:37.686337 | orchestrator | Sunday 08 February 2026 03:54:19 +0000 (0:00:00.605) 0:09:50.434 ******* 2026-02-08 03:54:37.686348 | orchestrator | ok: [testbed-node-3] 2026-02-08 03:54:37.686435 | orchestrator | ok: [testbed-node-4] 2026-02-08 03:54:37.686445 | orchestrator | ok: [testbed-node-5] 2026-02-08 03:54:37.686452 | orchestrator | 2026-02-08 03:54:37.686461 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-02-08 03:54:37.686473 | orchestrator | Sunday 08 February 2026 03:54:19 +0000 (0:00:00.389) 0:09:50.823 ******* 2026-02-08 03:54:37.686484 | orchestrator | ok: [testbed-node-3] 2026-02-08 03:54:37.686495 | orchestrator | ok: [testbed-node-4] 2026-02-08 03:54:37.686507 | orchestrator | ok: [testbed-node-5] 2026-02-08 03:54:37.686518 | orchestrator | 2026-02-08 03:54:37.686529 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-02-08 03:54:37.686541 | orchestrator | Sunday 08 February 2026 03:54:20 +0000 (0:00:00.365) 0:09:51.189 ******* 2026-02-08 03:54:37.686551 | orchestrator | ok: [testbed-node-3] 2026-02-08 03:54:37.686559 | orchestrator | ok: [testbed-node-4] 2026-02-08 03:54:37.686567 | orchestrator | ok: [testbed-node-5] 2026-02-08 03:54:37.686574 | orchestrator | 2026-02-08 
03:54:37.686581 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-02-08 03:54:37.686589 | orchestrator | Sunday 08 February 2026 03:54:20 +0000 (0:00:00.397) 0:09:51.587 ******* 2026-02-08 03:54:37.686596 | orchestrator | skipping: [testbed-node-3] 2026-02-08 03:54:37.686603 | orchestrator | skipping: [testbed-node-4] 2026-02-08 03:54:37.686610 | orchestrator | skipping: [testbed-node-5] 2026-02-08 03:54:37.686618 | orchestrator | 2026-02-08 03:54:37.686625 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-02-08 03:54:37.686632 | orchestrator | Sunday 08 February 2026 03:54:21 +0000 (0:00:00.614) 0:09:52.201 ******* 2026-02-08 03:54:37.686639 | orchestrator | skipping: [testbed-node-3] 2026-02-08 03:54:37.686646 | orchestrator | skipping: [testbed-node-4] 2026-02-08 03:54:37.686654 | orchestrator | skipping: [testbed-node-5] 2026-02-08 03:54:37.686661 | orchestrator | 2026-02-08 03:54:37.686668 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-02-08 03:54:37.686675 | orchestrator | Sunday 08 February 2026 03:54:21 +0000 (0:00:00.342) 0:09:52.544 ******* 2026-02-08 03:54:37.686683 | orchestrator | skipping: [testbed-node-3] 2026-02-08 03:54:37.686690 | orchestrator | skipping: [testbed-node-4] 2026-02-08 03:54:37.686697 | orchestrator | skipping: [testbed-node-5] 2026-02-08 03:54:37.686704 | orchestrator | 2026-02-08 03:54:37.686712 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-02-08 03:54:37.686719 | orchestrator | Sunday 08 February 2026 03:54:21 +0000 (0:00:00.344) 0:09:52.889 ******* 2026-02-08 03:54:37.686725 | orchestrator | ok: [testbed-node-3] 2026-02-08 03:54:37.686731 | orchestrator | ok: [testbed-node-4] 2026-02-08 03:54:37.686737 | orchestrator | ok: [testbed-node-5] 2026-02-08 03:54:37.686743 | orchestrator | 2026-02-08 03:54:37.686749 | 
orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-02-08 03:54:37.686755 | orchestrator | Sunday 08 February 2026 03:54:22 +0000 (0:00:00.387) 0:09:53.276 ******* 2026-02-08 03:54:37.686761 | orchestrator | ok: [testbed-node-3] 2026-02-08 03:54:37.686767 | orchestrator | ok: [testbed-node-4] 2026-02-08 03:54:37.686773 | orchestrator | ok: [testbed-node-5] 2026-02-08 03:54:37.686779 | orchestrator | 2026-02-08 03:54:37.686785 | orchestrator | TASK [ceph-rgw : Include common.yml] ******************************************* 2026-02-08 03:54:37.686791 | orchestrator | Sunday 08 February 2026 03:54:23 +0000 (0:00:00.869) 0:09:54.145 ******* 2026-02-08 03:54:37.686798 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-08 03:54:37.686806 | orchestrator | 2026-02-08 03:54:37.686834 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2026-02-08 03:54:37.686841 | orchestrator | Sunday 08 February 2026 03:54:23 +0000 (0:00:00.611) 0:09:54.757 ******* 2026-02-08 03:54:37.686847 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-08 03:54:37.686854 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-02-08 03:54:37.686861 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2026-02-08 03:54:37.686875 | orchestrator | 2026-02-08 03:54:37.686881 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2026-02-08 03:54:37.686888 | orchestrator | Sunday 08 February 2026 03:54:26 +0000 (0:00:02.348) 0:09:57.105 ******* 2026-02-08 03:54:37.686894 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-02-08 03:54:37.686901 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-02-08 03:54:37.686907 | orchestrator | changed: [testbed-node-3] 2026-02-08 03:54:37.686913 | orchestrator 
| changed: [testbed-node-4] => (item=None) 2026-02-08 03:54:37.686919 | orchestrator | skipping: [testbed-node-4] => (item=None)  2026-02-08 03:54:37.686925 | orchestrator | changed: [testbed-node-4] 2026-02-08 03:54:37.686931 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-02-08 03:54:37.686937 | orchestrator | skipping: [testbed-node-5] => (item=None)  2026-02-08 03:54:37.686943 | orchestrator | changed: [testbed-node-5] 2026-02-08 03:54:37.686949 | orchestrator | 2026-02-08 03:54:37.686956 | orchestrator | TASK [ceph-rgw : Copy SSL certificate & key data to certificate path] ********** 2026-02-08 03:54:37.686962 | orchestrator | Sunday 08 February 2026 03:54:27 +0000 (0:00:01.536) 0:09:58.641 ******* 2026-02-08 03:54:37.686968 | orchestrator | skipping: [testbed-node-3] 2026-02-08 03:54:37.686974 | orchestrator | skipping: [testbed-node-4] 2026-02-08 03:54:37.686980 | orchestrator | skipping: [testbed-node-5] 2026-02-08 03:54:37.686986 | orchestrator | 2026-02-08 03:54:37.686992 | orchestrator | TASK [ceph-rgw : Include_tasks pre_requisite.yml] ****************************** 2026-02-08 03:54:37.686999 | orchestrator | Sunday 08 February 2026 03:54:27 +0000 (0:00:00.354) 0:09:58.995 ******* 2026-02-08 03:54:37.687005 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/pre_requisite.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-08 03:54:37.687011 | orchestrator | 2026-02-08 03:54:37.687017 | orchestrator | TASK [ceph-rgw : Create rados gateway directories] ***************************** 2026-02-08 03:54:37.687023 | orchestrator | Sunday 08 February 2026 03:54:28 +0000 (0:00:00.600) 0:09:59.596 ******* 2026-02-08 03:54:37.687031 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-02-08 03:54:37.687039 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => 
(item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-02-08 03:54:37.687045 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-02-08 03:54:37.687052 | orchestrator | 2026-02-08 03:54:37.687058 | orchestrator | TASK [ceph-rgw : Create rgw keyrings] ****************************************** 2026-02-08 03:54:37.687064 | orchestrator | Sunday 08 February 2026 03:54:29 +0000 (0:00:01.142) 0:10:00.739 ******* 2026-02-08 03:54:37.687070 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-08 03:54:37.687077 | orchestrator | changed: [testbed-node-4 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2026-02-08 03:54:37.687083 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-08 03:54:37.687089 | orchestrator | changed: [testbed-node-3 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2026-02-08 03:54:37.687095 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-08 03:54:37.687101 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2026-02-08 03:54:37.687108 | orchestrator | 2026-02-08 03:54:37.687114 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2026-02-08 03:54:37.687120 | orchestrator | Sunday 08 February 2026 03:54:33 +0000 (0:00:04.196) 0:10:04.935 ******* 2026-02-08 03:54:37.687126 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-08 03:54:37.687136 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-08 03:54:37.687142 | 
orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2026-02-08 03:54:37.687149 | orchestrator | ok: [testbed-node-4 -> {{ groups.get(mon_group_name)[0] }}] 2026-02-08 03:54:37.687155 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-08 03:54:37.687161 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2026-02-08 03:54:37.687167 | orchestrator | 2026-02-08 03:54:37.687173 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2026-02-08 03:54:37.687179 | orchestrator | Sunday 08 February 2026 03:54:36 +0000 (0:00:02.224) 0:10:07.160 ******* 2026-02-08 03:54:37.687185 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-02-08 03:54:37.687191 | orchestrator | changed: [testbed-node-3] 2026-02-08 03:54:37.687198 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-02-08 03:54:37.687204 | orchestrator | changed: [testbed-node-4] 2026-02-08 03:54:37.687210 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-02-08 03:54:37.687216 | orchestrator | changed: [testbed-node-5] 2026-02-08 03:54:37.687222 | orchestrator | 2026-02-08 03:54:37.687236 | orchestrator | TASK [ceph-rgw : Rgw pool creation tasks] ************************************** 2026-02-08 03:55:22.015679 | orchestrator | Sunday 08 February 2026 03:54:37 +0000 (0:00:01.504) 0:10:08.664 ******* 2026-02-08 03:55:22.015835 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/rgw_create_pools.yml for testbed-node-3 2026-02-08 03:55:22.015853 | orchestrator | 2026-02-08 03:55:22.015865 | orchestrator | TASK [ceph-rgw : Create ec profile] ******************************************** 2026-02-08 03:55:22.015877 | orchestrator | Sunday 08 February 2026 03:54:37 +0000 (0:00:00.249) 0:10:08.914 ******* 2026-02-08 03:55:22.015889 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 
'type': 'replicated'}})  2026-02-08 03:55:22.015904 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-08 03:55:22.015916 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-08 03:55:22.015927 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-08 03:55:22.015938 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-08 03:55:22.015950 | orchestrator | skipping: [testbed-node-3] 2026-02-08 03:55:22.015962 | orchestrator | 2026-02-08 03:55:22.015973 | orchestrator | TASK [ceph-rgw : Set crush rule] *********************************************** 2026-02-08 03:55:22.015984 | orchestrator | Sunday 08 February 2026 03:54:38 +0000 (0:00:00.650) 0:10:09.565 ******* 2026-02-08 03:55:22.015995 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-08 03:55:22.016006 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-08 03:55:22.016017 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-08 03:55:22.016028 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-08 03:55:22.016039 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-08 03:55:22.016050 | orchestrator | skipping: [testbed-node-3] 2026-02-08 
03:55:22.016061 | orchestrator |
2026-02-08 03:55:22.016072 | orchestrator | TASK [ceph-rgw : Create rgw pools] *********************************************
2026-02-08 03:55:22.016083 | orchestrator | Sunday 08 February 2026 03:54:39 +0000 (0:00:00.663) 0:10:10.229 *******
2026-02-08 03:55:22.016129 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-08 03:55:22.016143 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-08 03:55:22.016154 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-08 03:55:22.016165 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-08 03:55:22.016176 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-08 03:55:22.016187 | orchestrator |
2026-02-08 03:55:22.016198 | orchestrator | TASK [ceph-rgw : Include_tasks openstack-keystone.yml] *************************
2026-02-08 03:55:22.016208 | orchestrator | Sunday 08 February 2026 03:55:08 +0000 (0:00:29.278) 0:10:39.507 *******
2026-02-08 03:55:22.016219 | orchestrator | skipping: [testbed-node-3]
2026-02-08 03:55:22.016230 | orchestrator | skipping: [testbed-node-4]
2026-02-08 03:55:22.016241 | orchestrator | skipping: [testbed-node-5]
2026-02-08 03:55:22.016252 | orchestrator |
2026-02-08 03:55:22.016262 | orchestrator | TASK [ceph-rgw : Include_tasks start_radosgw.yml] ******************************
2026-02-08 03:55:22.016273 | orchestrator | Sunday 08 February 2026 03:55:08 +0000 (0:00:00.359) 0:10:39.867 *******
2026-02-08 03:55:22.016284 | orchestrator | skipping: [testbed-node-3]
2026-02-08 03:55:22.016295 | orchestrator | skipping: [testbed-node-4]
2026-02-08 03:55:22.016305 | orchestrator | skipping: [testbed-node-5]
2026-02-08 03:55:22.016316 | orchestrator |
2026-02-08 03:55:22.016327 | orchestrator | TASK [ceph-rgw : Include start_docker_rgw.yml] *********************************
2026-02-08 03:55:22.016337 | orchestrator | Sunday 08 February 2026 03:55:09 +0000 (0:00:00.343) 0:10:40.210 *******
2026-02-08 03:55:22.016349 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/start_docker_rgw.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-02-08 03:55:22.016360 | orchestrator |
2026-02-08 03:55:22.016370 | orchestrator | TASK [ceph-rgw : Include_task systemd.yml] *************************************
2026-02-08 03:55:22.016408 | orchestrator | Sunday 08 February 2026 03:55:10 +0000 (0:00:00.899) 0:10:41.109 *******
2026-02-08 03:55:22.016440 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-02-08 03:55:22.016452 | orchestrator |
2026-02-08 03:55:22.016483 | orchestrator | TASK [ceph-rgw : Generate systemd unit file] ***********************************
2026-02-08 03:55:22.016495 | orchestrator | Sunday 08 February 2026 03:55:10 +0000 (0:00:00.568) 0:10:41.678 *******
2026-02-08 03:55:22.016506 | orchestrator | changed: [testbed-node-3]
2026-02-08 03:55:22.016516 | orchestrator | changed: [testbed-node-4]
2026-02-08 03:55:22.016527 | orchestrator | changed: [testbed-node-5]
2026-02-08 03:55:22.016538 | orchestrator |
2026-02-08 03:55:22.016549 | orchestrator | TASK [ceph-rgw : Generate systemd ceph-radosgw target file] ********************
2026-02-08 03:55:22.016560 | orchestrator | Sunday 08 February 2026 03:55:12 +0000 (0:00:01.604) 0:10:43.283 *******
2026-02-08 03:55:22.016571 | orchestrator | changed: [testbed-node-3]
2026-02-08 03:55:22.016581 | orchestrator | changed: [testbed-node-4]
2026-02-08 03:55:22.016592 | orchestrator | changed: [testbed-node-5]
2026-02-08 03:55:22.016603 | orchestrator |
2026-02-08 03:55:22.016613 | orchestrator | TASK [ceph-rgw : Enable ceph-radosgw.target] ***********************************
2026-02-08 03:55:22.016624 | orchestrator | Sunday 08 February 2026 03:55:13 +0000 (0:00:01.160) 0:10:44.444 *******
2026-02-08 03:55:22.016635 | orchestrator | changed: [testbed-node-3]
2026-02-08 03:55:22.016645 | orchestrator | changed: [testbed-node-4]
2026-02-08 03:55:22.016656 | orchestrator | changed: [testbed-node-5]
2026-02-08 03:55:22.016677 | orchestrator |
2026-02-08 03:55:22.016688 | orchestrator | TASK [ceph-rgw : Systemd start rgw container] **********************************
2026-02-08 03:55:22.016699 | orchestrator | Sunday 08 February 2026 03:55:15 +0000 (0:00:01.875) 0:10:46.320 *******
2026-02-08 03:55:22.016709 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2026-02-08 03:55:22.016720 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2026-02-08 03:55:22.016731 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-02-08 03:55:22.016742 | orchestrator |
2026-02-08 03:55:22.016753 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2026-02-08 03:55:22.016764 | orchestrator | Sunday 08 February 2026 03:55:18 +0000 (0:00:02.881) 0:10:49.201 *******
2026-02-08 03:55:22.016775 | orchestrator | skipping: [testbed-node-3]
2026-02-08 03:55:22.016785 | orchestrator | skipping: [testbed-node-4]
2026-02-08 03:55:22.016796 | orchestrator | skipping: [testbed-node-5]
2026-02-08 03:55:22.016806 | orchestrator |
2026-02-08 03:55:22.016817 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] **********************************
2026-02-08 03:55:22.016828 | orchestrator | Sunday 08 February 2026 03:55:18 +0000 (0:00:00.410) 0:10:49.612 *******
2026-02-08 03:55:22.016839 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-02-08 03:55:22.016850 | orchestrator |
2026-02-08 03:55:22.016860 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ********
2026-02-08 03:55:22.016871 | orchestrator | Sunday 08 February 2026 03:55:19 +0000 (0:00:00.935) 0:10:50.547 *******
2026-02-08 03:55:22.016882 | orchestrator | ok: [testbed-node-3]
2026-02-08 03:55:22.016893 | orchestrator | ok: [testbed-node-4]
2026-02-08 03:55:22.016903 | orchestrator | ok: [testbed-node-5]
2026-02-08 03:55:22.016914 | orchestrator |
2026-02-08 03:55:22.016925 | orchestrator | RUNNING HANDLER [ceph-handler : Copy rgw restart script] ***********************
2026-02-08 03:55:22.016936 | orchestrator | Sunday 08 February 2026 03:55:19 +0000 (0:00:00.406) 0:10:50.953 *******
2026-02-08 03:55:22.016946 | orchestrator | skipping: [testbed-node-3]
2026-02-08 03:55:22.016957 | orchestrator | skipping: [testbed-node-4]
2026-02-08 03:55:22.016968 | orchestrator | skipping: [testbed-node-5]
2026-02-08 03:55:22.016978 | orchestrator |
2026-02-08 03:55:22.016989 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] ********************
2026-02-08 03:55:22.017000 | orchestrator | Sunday 08 February 2026 03:55:20 +0000 (0:00:00.444) 0:10:51.398 *******
2026-02-08 03:55:22.017010 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-02-08 03:55:22.017021 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-02-08 03:55:22.017032 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-02-08 03:55:22.017043 | orchestrator | skipping: [testbed-node-3]
2026-02-08 03:55:22.017053 | orchestrator |
2026-02-08 03:55:22.017064 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] *********
2026-02-08 03:55:22.017075 | orchestrator | Sunday 08 February 2026 03:55:21 +0000 (0:00:01.039) 0:10:52.437 *******
2026-02-08 03:55:22.017086 | orchestrator | ok: [testbed-node-3]
2026-02-08 03:55:22.017096 | orchestrator | ok: [testbed-node-4]
2026-02-08 03:55:22.017107 | orchestrator | ok: [testbed-node-5]
2026-02-08 03:55:22.017118 | orchestrator |
2026-02-08 03:55:22.017128 | orchestrator | PLAY RECAP *********************************************************************
2026-02-08 03:55:22.017139 | orchestrator | testbed-node-0 : ok=134  changed=35  unreachable=0 failed=0 skipped=125  rescued=0 ignored=0
2026-02-08 03:55:22.017151 | orchestrator | testbed-node-1 : ok=127  changed=31  unreachable=0 failed=0 skipped=120  rescued=0 ignored=0
2026-02-08 03:55:22.017162 | orchestrator | testbed-node-2 : ok=134  changed=33  unreachable=0 failed=0 skipped=119  rescued=0 ignored=0
2026-02-08 03:55:22.017183 | orchestrator | testbed-node-3 : ok=193  changed=45  unreachable=0 failed=0 skipped=162  rescued=0 ignored=0
2026-02-08 03:55:22.017193 | orchestrator | testbed-node-4 : ok=175  changed=40  unreachable=0 failed=0 skipped=123  rescued=0 ignored=0
2026-02-08 03:55:22.017217 | orchestrator | testbed-node-5 : ok=177  changed=41  unreachable=0 failed=0 skipped=121  rescued=0 ignored=0
2026-02-08 03:55:22.531800 | orchestrator |
2026-02-08 03:55:22.531967 | orchestrator |
2026-02-08 03:55:22.531993 | orchestrator |
2026-02-08 03:55:22.532013 | orchestrator | TASKS RECAP ********************************************************************
2026-02-08 03:55:22.532027 | orchestrator | Sunday 08 February 2026 03:55:21 +0000 (0:00:00.557) 0:10:52.995 *******
2026-02-08 03:55:22.532038 | orchestrator | ===============================================================================
2026-02-08 03:55:22.532049 | orchestrator | ceph-container-common : Pulling Ceph container image ------------------- 57.06s
2026-02-08 03:55:22.532060 | orchestrator | ceph-osd : Use ceph-volume to create osds ------------------------------ 40.44s
2026-02-08 03:55:22.532072 | orchestrator | ceph-rgw : Create rgw pools -------------------------------------------- 29.28s
2026-02-08 03:55:22.532083 | orchestrator | ceph-mgr : Wait for all mgr to be up ----------------------------------- 24.10s
2026-02-08 03:55:22.532093 | orchestrator | ceph-mon : Waiting for the monitor(s) to form the quorum... ------------ 21.81s
2026-02-08 03:55:22.532104 | orchestrator | ceph-mon : Set cluster configs ----------------------------------------- 14.08s
2026-02-08 03:55:22.532115 | orchestrator | ceph-osd : Wait for all osd to be up ----------------------------------- 12.47s
2026-02-08 03:55:22.532126 | orchestrator | ceph-mgr : Create ceph mgr keyring(s) on a mon node -------------------- 10.20s
2026-02-08 03:55:22.532136 | orchestrator | ceph-mon : Fetch ceph initial keys -------------------------------------- 8.69s
2026-02-08 03:55:22.532147 | orchestrator | ceph-mds : Create filesystem pools -------------------------------------- 8.60s
2026-02-08 03:55:22.532158 | orchestrator | ceph-config : Create ceph initial directories --------------------------- 6.42s
2026-02-08 03:55:22.532168 | orchestrator | ceph-mgr : Disable ceph mgr enabled modules ----------------------------- 6.36s
2026-02-08 03:55:22.532179 | orchestrator | ceph-mgr : Add modules to ceph-mgr -------------------------------------- 4.90s
2026-02-08 03:55:22.532190 | orchestrator | ceph-rgw : Create rgw keyrings ------------------------------------------ 4.20s
2026-02-08 03:55:22.532204 | orchestrator | ceph-crash : Create client.crash keyring -------------------------------- 3.85s
2026-02-08 03:55:22.532223 | orchestrator | ceph-crash : Start the ceph-crash service ------------------------------- 3.73s
2026-02-08 03:55:22.532254 | orchestrator | ceph-osd : Systemd start osd -------------------------------------------- 3.63s
2026-02-08 03:55:22.532274 | orchestrator | ceph-mds : Create ceph filesystem --------------------------------------- 3.56s
2026-02-08 03:55:22.532292 | orchestrator | ceph-container-common : Get ceph version -------------------------------- 3.52s
2026-02-08 03:55:22.532312 | orchestrator | ceph-handler : Restart the ceph-crash service --------------------------- 3.32s
2026-02-08 03:55:25.191167 | orchestrator | 2026-02-08 03:55:25 | INFO  | Task e95c5d90-4f65-430c-8153-7857f260c177 (ceph-pools) was prepared for execution.
2026-02-08 03:55:25.191277 | orchestrator | 2026-02-08 03:55:25 | INFO  | It takes a moment until task e95c5d90-4f65-430c-8153-7857f260c177 (ceph-pools) has been started and output is visible here.
2026-02-08 03:55:39.689087 | orchestrator | [WARNING]: Collection community.general does not support Ansible version
2026-02-08 03:55:39.689199 | orchestrator | 2.16.14
2026-02-08 03:55:39.689216 | orchestrator |
2026-02-08 03:55:39.689229 | orchestrator | PLAY [Create ceph pools] *******************************************************
2026-02-08 03:55:39.689241 | orchestrator |
2026-02-08 03:55:39.689253 | orchestrator | TASK [ceph-facts : Include facts.yml] ******************************************
2026-02-08 03:55:39.689291 | orchestrator | Sunday 08 February 2026 03:55:29 +0000 (0:00:00.591) 0:00:00.591 *******
2026-02-08 03:55:39.689302 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-02-08 03:55:39.689314 | orchestrator |
2026-02-08 03:55:39.689325 | orchestrator | TASK [ceph-facts : Check if it is atomic host] *********************************
2026-02-08 03:55:39.689336 | orchestrator | Sunday 08 February 2026 03:55:30 +0000 (0:00:00.600) 0:00:01.192 *******
2026-02-08 03:55:39.689347 | orchestrator | ok: [testbed-node-4]
2026-02-08 03:55:39.689357 | orchestrator | ok: [testbed-node-3]
2026-02-08 03:55:39.689368 | orchestrator | ok: [testbed-node-5]
2026-02-08 03:55:39.689379 | orchestrator |
2026-02-08 03:55:39.689492 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] *****************************************
2026-02-08 03:55:39.689505 | orchestrator | Sunday 08 February 2026 03:55:30 +0000 (0:00:00.653) 0:00:01.845 *******
2026-02-08 03:55:39.689516 | orchestrator | ok: [testbed-node-3]
2026-02-08 03:55:39.689526 | orchestrator | ok: [testbed-node-4]
2026-02-08 03:55:39.689537 | orchestrator | ok: [testbed-node-5]
2026-02-08 03:55:39.689548 | orchestrator |
2026-02-08 03:55:39.689559 | orchestrator | TASK [ceph-facts : Check if podman binary is present] **************************
2026-02-08 03:55:39.689569 | orchestrator | Sunday 08 February 2026 03:55:31 +0000 (0:00:00.263) 0:00:02.109 *******
2026-02-08 03:55:39.689580 | orchestrator | ok: [testbed-node-3]
2026-02-08 03:55:39.689591 | orchestrator | ok: [testbed-node-4]
2026-02-08 03:55:39.689604 | orchestrator | ok: [testbed-node-5]
2026-02-08 03:55:39.689616 | orchestrator |
2026-02-08 03:55:39.689629 | orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
2026-02-08 03:55:39.689642 | orchestrator | Sunday 08 February 2026 03:55:31 +0000 (0:00:00.782) 0:00:02.892 *******
2026-02-08 03:55:39.689655 | orchestrator | ok: [testbed-node-3]
2026-02-08 03:55:39.689667 | orchestrator | ok: [testbed-node-4]
2026-02-08 03:55:39.689679 | orchestrator | ok: [testbed-node-5]
2026-02-08 03:55:39.689692 | orchestrator |
2026-02-08 03:55:39.689705 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ******************************************
2026-02-08 03:55:39.689717 | orchestrator | Sunday 08 February 2026 03:55:32 +0000 (0:00:00.312) 0:00:03.204 *******
2026-02-08 03:55:39.689729 | orchestrator | ok: [testbed-node-3]
2026-02-08 03:55:39.689742 | orchestrator | ok: [testbed-node-4]
2026-02-08 03:55:39.689754 | orchestrator | ok: [testbed-node-5]
2026-02-08 03:55:39.689766 | orchestrator |
2026-02-08 03:55:39.689795 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] *********************
2026-02-08 03:55:39.689808 | orchestrator | Sunday 08 February 2026 03:55:32 +0000 (0:00:00.355) 0:00:03.560 *******
2026-02-08 03:55:39.689820 | orchestrator | ok: [testbed-node-3]
2026-02-08 03:55:39.689832 | orchestrator | ok: [testbed-node-4]
2026-02-08 03:55:39.689845 | orchestrator | ok: [testbed-node-5]
2026-02-08 03:55:39.689857 | orchestrator |
2026-02-08 03:55:39.689870 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] ***
2026-02-08 03:55:39.689883 | orchestrator | Sunday 08 February 2026 03:55:32 +0000 (0:00:00.340) 0:00:03.900 *******
2026-02-08 03:55:39.689896 | orchestrator | skipping: [testbed-node-3]
2026-02-08 03:55:39.689910 | orchestrator | skipping: [testbed-node-4]
2026-02-08 03:55:39.689923 | orchestrator | skipping: [testbed-node-5]
2026-02-08 03:55:39.689934 | orchestrator |
2026-02-08 03:55:39.689945 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ******************
2026-02-08 03:55:39.689956 | orchestrator | Sunday 08 February 2026 03:55:33 +0000 (0:00:00.642) 0:00:04.543 *******
2026-02-08 03:55:39.689968 | orchestrator | ok: [testbed-node-3]
2026-02-08 03:55:39.689986 | orchestrator | ok: [testbed-node-4]
2026-02-08 03:55:39.690003 | orchestrator | ok: [testbed-node-5]
2026-02-08 03:55:39.690091 | orchestrator |
2026-02-08 03:55:39.690111 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************
2026-02-08 03:55:39.690132 | orchestrator | Sunday 08 February 2026 03:55:33 +0000 (0:00:00.338) 0:00:04.881 *******
2026-02-08 03:55:39.690152 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-02-08 03:55:39.690190 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-08 03:55:39.690211 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-08 03:55:39.690226 | orchestrator |
2026-02-08 03:55:39.690237 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ********************************
2026-02-08 03:55:39.690248 | orchestrator | Sunday 08 February 2026 03:55:34 +0000 (0:00:00.750) 0:00:05.632 *******
2026-02-08 03:55:39.690259 | orchestrator | ok: [testbed-node-3]
2026-02-08 03:55:39.690269 | orchestrator | ok: [testbed-node-4]
2026-02-08 03:55:39.690280 | orchestrator | ok: [testbed-node-5]
2026-02-08 03:55:39.690290 | orchestrator |
2026-02-08 03:55:39.690301 | orchestrator | TASK [ceph-facts : Find a running mon container] *******************************
2026-02-08 03:55:39.690312 | orchestrator | Sunday 08 February 2026 03:55:35 +0000 (0:00:00.518) 0:00:06.151 *******
2026-02-08 03:55:39.690323 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-02-08 03:55:39.690333 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-08 03:55:39.690344 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-08 03:55:39.690355 | orchestrator |
2026-02-08 03:55:39.690366 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ********************************
2026-02-08 03:55:39.690376 | orchestrator | Sunday 08 February 2026 03:55:37 +0000 (0:00:02.270) 0:00:08.421 *******
2026-02-08 03:55:39.690407 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2026-02-08 03:55:39.690420 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2026-02-08 03:55:39.690430 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2026-02-08 03:55:39.690441 | orchestrator | skipping: [testbed-node-3]
2026-02-08 03:55:39.690453 | orchestrator |
2026-02-08 03:55:39.690485 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] *********************
2026-02-08 03:55:39.690505 | orchestrator | Sunday 08 February 2026 03:55:38 +0000 (0:00:00.686) 0:00:09.108 *******
2026-02-08 03:55:39.690526 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-02-08 03:55:39.690553 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-02-08 03:55:39.690574 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-02-08 03:55:39.690589 | orchestrator | skipping: [testbed-node-3]
2026-02-08 03:55:39.690600 | orchestrator |
2026-02-08 03:55:39.690611 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] ***********************
2026-02-08 03:55:39.690630 | orchestrator | Sunday 08 February 2026 03:55:39 +0000 (0:00:01.099) 0:00:10.208 *******
2026-02-08 03:55:39.690651 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-02-08 03:55:39.690683 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-02-08 03:55:39.690716 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-02-08 03:55:39.690737 | orchestrator | skipping: [testbed-node-3]
2026-02-08 03:55:39.690756 | orchestrator |
2026-02-08 03:55:39.690775 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] ***************************
2026-02-08 03:55:39.690790 | orchestrator | Sunday 08 February 2026 03:55:39 +0000 (0:00:00.172) 0:00:10.380 *******
2026-02-08 03:55:39.690803 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '814c3ba0cfa5', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-02-08 03:55:36.159424', 'end': '2026-02-08 03:55:36.203836', 'delta': '0:00:00.044412', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['814c3ba0cfa5'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-02-08 03:55:39.690818 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': 'd108d94fad94', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-02-08 03:55:36.711723', 'end': '2026-02-08 03:55:36.764791', 'delta': '0:00:00.053068', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['d108d94fad94'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-02-08 03:55:39.690840 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '83b6b87b68f7', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-02-08 03:55:37.277768', 'end': '2026-02-08 03:55:37.317987', 'delta': '0:00:00.040219', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['83b6b87b68f7'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-02-08 03:55:46.892012 | orchestrator |
2026-02-08 03:55:46.892196 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] *******************************
2026-02-08 03:55:46.892229 | orchestrator | Sunday 08 February 2026 03:55:39 +0000 (0:00:00.216) 0:00:10.596 *******
2026-02-08 03:55:46.892251 | orchestrator | ok: [testbed-node-3]
2026-02-08 03:55:46.892271 | orchestrator | ok: [testbed-node-4]
2026-02-08 03:55:46.892292 | orchestrator | ok: [testbed-node-5]
2026-02-08 03:55:46.892328 | orchestrator |
2026-02-08 03:55:46.892348 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] *************
2026-02-08 03:55:46.892367 | orchestrator | Sunday 08 February 2026 03:55:40 +0000 (0:00:00.510) 0:00:11.107 *******
2026-02-08 03:55:46.892424 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)]
2026-02-08 03:55:46.892541 | orchestrator |
2026-02-08 03:55:46.892556 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] *********************************
2026-02-08 03:55:46.892600 | orchestrator | Sunday 08 February 2026 03:55:41 +0000 (0:00:01.674) 0:00:12.782 *******
2026-02-08 03:55:46.892615 | orchestrator | skipping: [testbed-node-3]
2026-02-08 03:55:46.892628 | orchestrator | skipping: [testbed-node-4]
2026-02-08 03:55:46.892640 | orchestrator | skipping: [testbed-node-5]
2026-02-08 03:55:46.892653 | orchestrator |
2026-02-08 03:55:46.892667 | orchestrator | TASK [ceph-facts : Get current fsid] *******************************************
2026-02-08 03:55:46.892680 | orchestrator | Sunday 08 February 2026 03:55:42 +0000 (0:00:00.306) 0:00:13.089 *******
2026-02-08 03:55:46.892692 | orchestrator | skipping: [testbed-node-3]
2026-02-08 03:55:46.892703 | orchestrator | skipping: [testbed-node-4]
2026-02-08 03:55:46.892714 | orchestrator | skipping: [testbed-node-5]
2026-02-08 03:55:46.892725 | orchestrator |
2026-02-08 03:55:46.892750 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-02-08 03:55:46.892762 | orchestrator | Sunday 08 February 2026 03:55:43 +0000 (0:00:00.897) 0:00:13.986 *******
2026-02-08 03:55:46.892772 | orchestrator | skipping: [testbed-node-3]
2026-02-08 03:55:46.892783 | orchestrator | skipping: [testbed-node-4]
2026-02-08 03:55:46.892794 | orchestrator | skipping: [testbed-node-5]
2026-02-08 03:55:46.892805 | orchestrator |
2026-02-08 03:55:46.892816 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] ****************************
2026-02-08 03:55:46.892827 | orchestrator | Sunday 08 February 2026 03:55:43 +0000 (0:00:00.326) 0:00:14.312 *******
2026-02-08 03:55:46.892838 | orchestrator | ok: [testbed-node-3]
2026-02-08 03:55:46.892850 | orchestrator |
2026-02-08 03:55:46.892861 | orchestrator | TASK [ceph-facts : Generate cluster fsid] **************************************
2026-02-08 03:55:46.892872 | orchestrator | Sunday 08 February 2026 03:55:43 +0000 (0:00:00.135) 0:00:14.448 *******
2026-02-08 03:55:46.892883 | orchestrator | skipping: [testbed-node-3]
2026-02-08 03:55:46.892894 | orchestrator |
2026-02-08 03:55:46.892905 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-02-08 03:55:46.892916 | orchestrator | Sunday 08 February 2026 03:55:43 +0000 (0:00:00.242) 0:00:14.690 *******
2026-02-08 03:55:46.892927 | orchestrator | skipping: [testbed-node-3]
2026-02-08 03:55:46.892938 | orchestrator | skipping: [testbed-node-4]
2026-02-08 03:55:46.892949 | orchestrator | skipping: [testbed-node-5]
2026-02-08 03:55:46.892960 | orchestrator |
2026-02-08 03:55:46.892972 | orchestrator | TASK [ceph-facts : Resolve device link(s)] *************************************
2026-02-08 03:55:46.892982 | orchestrator | Sunday 08 February 2026 03:55:44 +0000 (0:00:00.324) 0:00:15.015 *******
2026-02-08 03:55:46.892993 | orchestrator | skipping: [testbed-node-3]
2026-02-08 03:55:46.893004 | orchestrator | skipping: [testbed-node-4]
2026-02-08 03:55:46.893015 | orchestrator | skipping: [testbed-node-5]
2026-02-08 03:55:46.893026 | orchestrator |
2026-02-08 03:55:46.893037 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] **************
2026-02-08 03:55:46.893048 | orchestrator | Sunday 08 February 2026 03:55:44 +0000 (0:00:00.333) 0:00:15.349 *******
2026-02-08 03:55:46.893059 | orchestrator | skipping: [testbed-node-3]
2026-02-08 03:55:46.893070 | orchestrator | skipping: [testbed-node-4]
2026-02-08 03:55:46.893081 | orchestrator | skipping: [testbed-node-5]
2026-02-08 03:55:46.893092 | orchestrator |
2026-02-08 03:55:46.893103 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] ***************************
2026-02-08 03:55:46.893114 | orchestrator | Sunday 08 February 2026 03:55:45 +0000 (0:00:00.591) 0:00:15.940 *******
2026-02-08 03:55:46.893125 | orchestrator | skipping: [testbed-node-3]
2026-02-08 03:55:46.893136 | orchestrator | skipping: [testbed-node-4]
2026-02-08 03:55:46.893147 | orchestrator | skipping: [testbed-node-5]
2026-02-08 03:55:46.893158 | orchestrator |
2026-02-08 03:55:46.893169 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] ****
2026-02-08 03:55:46.893180 | orchestrator | Sunday 08 February 2026 03:55:45 +0000 (0:00:00.402) 0:00:16.343 *******
2026-02-08 03:55:46.893191 | orchestrator | skipping: [testbed-node-3]
2026-02-08 03:55:46.893203 | orchestrator | skipping: [testbed-node-4]
2026-02-08 03:55:46.893222 | orchestrator | skipping: [testbed-node-5]
2026-02-08 03:55:46.893233 | orchestrator |
2026-02-08 03:55:46.893244 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] ***********************
2026-02-08 03:55:46.893256 | orchestrator | Sunday 08 February 2026 03:55:45 +0000 (0:00:00.374) 0:00:16.717 *******
2026-02-08 03:55:46.893266 | orchestrator | skipping: [testbed-node-3]
2026-02-08 03:55:46.893278 | orchestrator | skipping: [testbed-node-4]
2026-02-08 03:55:46.893297 | orchestrator | skipping: [testbed-node-5]
2026-02-08 03:55:46.893316 | orchestrator |
2026-02-08 03:55:46.893334 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] ***
2026-02-08 03:55:46.893354 | orchestrator | Sunday 08 February 2026 03:55:46 +0000 (0:00:00.544) 0:00:17.262 *******
2026-02-08 03:55:46.893373 | orchestrator | skipping: [testbed-node-3]
2026-02-08 03:55:46.893384 | orchestrator | skipping: [testbed-node-4]
2026-02-08 03:55:46.893427 | orchestrator | skipping: [testbed-node-5]
2026-02-08 03:55:46.893438 | orchestrator |
2026-02-08 03:55:46.893450 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************
2026-02-08 03:55:46.893461 | orchestrator | Sunday 08 February 2026 03:55:46 +0000 (0:00:00.328) 0:00:17.590 *******
2026-02-08 03:55:46.893496 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--658e9559--2696--538a--a0a4--811fe95d0be4-osd--block--658e9559--2696--538a--a0a4--811fe95d0be4', 'dm-uuid-LVM-0Xks5ejJzKHIGgn8kHKR683njlf26z09cwGJpvXAPY9denpvHRscKicxVHCDOUqc'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2026-02-08 03:55:46.893512 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--edf9913e--48af--595a--836b--515c584cb757-osd--block--edf9913e--48af--595a--836b--515c584cb757', 'dm-uuid-LVM-L5RzS25dNAwEfaxQtT2dCZejWp1FAHTXCd6gt7yIXasPp3uR45a3LLQBdDhLIQVH'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2026-02-08 03:55:46.893532 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-02-08 03:55:46.893564 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-02-08 03:55:46.893576 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-02-08 03:55:46.893588 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-02-08 03:55:46.893607 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-02-08 03:55:46.893619 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-02-08 03:55:46.893631 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-02-08 03:55:46.893652 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-02-08 03:55:47.022106 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--1f36c880--548c--5a66--856f--2c4e799d94fc-osd--block--1f36c880--548c--5a66--856f--2c4e799d94fc', 'dm-uuid-LVM-EUBg1b33eC5p7VqB18wXpct56RBS9v7i4yMbT242rfLMHwz72rUJ4I0Vm8FBkphH'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2026-02-08 03:55:47.022243 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8eb95c7e-79eb-481c-a9c3-b8351915337f', 'scsi-SQEMU_QEMU_HARDDISK_8eb95c7e-79eb-481c-a9c3-b8351915337f'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8eb95c7e-79eb-481c-a9c3-b8351915337f-part1', 'scsi-SQEMU_QEMU_HARDDISK_8eb95c7e-79eb-481c-a9c3-b8351915337f-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8eb95c7e-79eb-481c-a9c3-b8351915337f-part14', 'scsi-SQEMU_QEMU_HARDDISK_8eb95c7e-79eb-481c-a9c3-b8351915337f-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8eb95c7e-79eb-481c-a9c3-b8351915337f-part15', 'scsi-SQEMU_QEMU_HARDDISK_8eb95c7e-79eb-481c-a9c3-b8351915337f-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8eb95c7e-79eb-481c-a9c3-b8351915337f-part16', 'scsi-SQEMU_QEMU_HARDDISK_8eb95c7e-79eb-481c-a9c3-b8351915337f-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-02-08 03:55:47.022299 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--98a4cb59--dd7a--5ec9--b94d--174a40339046-osd--block--98a4cb59--dd7a--5ec9--b94d--174a40339046', 'dm-uuid-LVM-yHxwYWjXM5rCMy03I7W1d35MN3jq2cawRH7KXMBBm3D5PVB6eMaUeZw4tW6dlYUm'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2026-02-08 03:55:47.022354 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--658e9559--2696--538a--a0a4--811fe95d0be4-osd--block--658e9559--2696--538a--a0a4--811fe95d0be4'], 'host': 'SCSI storage controller: Red Hat, Inc.
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-a30P2b-igV6-wMzf-UnqL-xNhF-mfyy-hPRy0q', 'scsi-0QEMU_QEMU_HARDDISK_f936cccd-0c4c-4cd7-b507-1bacbfb024c1', 'scsi-SQEMU_QEMU_HARDDISK_f936cccd-0c4c-4cd7-b507-1bacbfb024c1'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-08 03:55:47.022374 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-08 03:55:47.022428 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--edf9913e--48af--595a--836b--515c584cb757-osd--block--edf9913e--48af--595a--836b--515c584cb757'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-loHc10-mddm-V6c9-PmX5-I7TN-7pE7-Bu9UYi', 'scsi-0QEMU_QEMU_HARDDISK_f64e84f9-05a0-4abf-b38a-86e604a2541e', 'scsi-SQEMU_QEMU_HARDDISK_f64e84f9-05a0-4abf-b38a-86e604a2541e'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-08 03:55:47.022447 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-08 03:55:47.022466 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1b3b2ead-9b22-4b4d-a30d-f81b3b57c055', 'scsi-SQEMU_QEMU_HARDDISK_1b3b2ead-9b22-4b4d-a30d-f81b3b57c055'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-08 03:55:47.022577 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-08 03:55:47.022600 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-08-02-32-43-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-08 03:55:47.022619 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}) 
 2026-02-08 03:55:47.022648 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-08 03:55:47.294916 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-08 03:55:47.295030 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-08 03:55:47.295050 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-08 03:55:47.295063 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6152c601-f22c-4ab1-825c-0b7a8c2f9bf8', 'scsi-SQEMU_QEMU_HARDDISK_6152c601-f22c-4ab1-825c-0b7a8c2f9bf8'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6152c601-f22c-4ab1-825c-0b7a8c2f9bf8-part1', 'scsi-SQEMU_QEMU_HARDDISK_6152c601-f22c-4ab1-825c-0b7a8c2f9bf8-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6152c601-f22c-4ab1-825c-0b7a8c2f9bf8-part14', 'scsi-SQEMU_QEMU_HARDDISK_6152c601-f22c-4ab1-825c-0b7a8c2f9bf8-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6152c601-f22c-4ab1-825c-0b7a8c2f9bf8-part15', 'scsi-SQEMU_QEMU_HARDDISK_6152c601-f22c-4ab1-825c-0b7a8c2f9bf8-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6152c601-f22c-4ab1-825c-0b7a8c2f9bf8-part16', 'scsi-SQEMU_QEMU_HARDDISK_6152c601-f22c-4ab1-825c-0b7a8c2f9bf8-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-08 03:55:47.295121 | 
orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--1f36c880--548c--5a66--856f--2c4e799d94fc-osd--block--1f36c880--548c--5a66--856f--2c4e799d94fc'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-98ZUfd-U8I1-l7ve-H02s-xtrR-VP1N-Q4DCna', 'scsi-0QEMU_QEMU_HARDDISK_2c937877-c8d8-449b-a5f6-0239aca924e2', 'scsi-SQEMU_QEMU_HARDDISK_2c937877-c8d8-449b-a5f6-0239aca924e2'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-08 03:55:47.295135 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--98a4cb59--dd7a--5ec9--b94d--174a40339046-osd--block--98a4cb59--dd7a--5ec9--b94d--174a40339046'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-HUiqZb-DVo1-dI2F-bvgQ-kR6m-4ift-nZXZfU', 'scsi-0QEMU_QEMU_HARDDISK_e630d271-3aac-4ce5-a41f-fdcd87f60fea', 'scsi-SQEMU_QEMU_HARDDISK_e630d271-3aac-4ce5-a41f-fdcd87f60fea'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-08 03:55:47.295150 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_33bf36ec-77e2-4563-8915-2d028f665133', 'scsi-SQEMU_QEMU_HARDDISK_33bf36ec-77e2-4563-8915-2d028f665133'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-08 03:55:47.295159 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-08-02-32-46-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-08 03:55:47.295175 | orchestrator | skipping: [testbed-node-3] 2026-02-08 03:55:47.295184 | orchestrator | skipping: [testbed-node-4] 2026-02-08 03:55:47.295194 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--7ad89cb8--326d--5a7d--8045--6e04c12be05a-osd--block--7ad89cb8--326d--5a7d--8045--6e04c12be05a', 'dm-uuid-LVM-rcT9E5PUfbc9LNMW28fCAUBBadlLkYxWUV2PuYXg8SGZaHizGQ3g7wDUilCwJsOM'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-02-08 03:55:47.295211 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 
'host': '', 'links': {'ids': ['dm-name-ceph--b3e05e81--e469--5668--9a53--5e8f92025307-osd--block--b3e05e81--e469--5668--9a53--5e8f92025307', 'dm-uuid-LVM-aSc4wxc22lxsBl3bsZYV01tD6GZC6hnhTrIfEw7ihzB97cb6a1fBLoGV7FMIqFCx'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-02-08 03:55:47.295225 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-08 03:55:47.295240 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-08 03:55:47.517688 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-08 03:55:47.517782 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 
'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-08 03:55:47.517793 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-08 03:55:47.517873 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-08 03:55:47.517883 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-08 03:55:47.517889 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 
'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-08 03:55:47.517915 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0b6d2541-fe07-44e8-aadf-a529695f9c1d', 'scsi-SQEMU_QEMU_HARDDISK_0b6d2541-fe07-44e8-aadf-a529695f9c1d'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0b6d2541-fe07-44e8-aadf-a529695f9c1d-part1', 'scsi-SQEMU_QEMU_HARDDISK_0b6d2541-fe07-44e8-aadf-a529695f9c1d-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0b6d2541-fe07-44e8-aadf-a529695f9c1d-part14', 'scsi-SQEMU_QEMU_HARDDISK_0b6d2541-fe07-44e8-aadf-a529695f9c1d-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0b6d2541-fe07-44e8-aadf-a529695f9c1d-part15', 'scsi-SQEMU_QEMU_HARDDISK_0b6d2541-fe07-44e8-aadf-a529695f9c1d-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0b6d2541-fe07-44e8-aadf-a529695f9c1d-part16', 'scsi-SQEMU_QEMU_HARDDISK_0b6d2541-fe07-44e8-aadf-a529695f9c1d-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 
'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-08 03:55:47.517932 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--7ad89cb8--326d--5a7d--8045--6e04c12be05a-osd--block--7ad89cb8--326d--5a7d--8045--6e04c12be05a'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-rTZdUO-b7Sr-jZr2-VH77-tbu4-uLLV-wifIFG', 'scsi-0QEMU_QEMU_HARDDISK_fd096023-3e18-4205-a743-fc49c7d9ed02', 'scsi-SQEMU_QEMU_HARDDISK_fd096023-3e18-4205-a743-fc49c7d9ed02'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-08 03:55:47.517948 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--b3e05e81--e469--5668--9a53--5e8f92025307-osd--block--b3e05e81--e469--5668--9a53--5e8f92025307'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-0tVmcc-zy2m-2uFM-aZrn-CnbJ-B058-J5TU15', 'scsi-0QEMU_QEMU_HARDDISK_88e353e1-d5f5-455b-9174-972f0fde258a', 'scsi-SQEMU_QEMU_HARDDISK_88e353e1-d5f5-455b-9174-972f0fde258a'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-08 03:55:47.517956 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_380fccde-fc16-4afd-8581-e221e230c62f', 'scsi-SQEMU_QEMU_HARDDISK_380fccde-fc16-4afd-8581-e221e230c62f'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-08 03:55:47.517965 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-08-02-32-42-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-08 03:55:47.517974 | orchestrator | skipping: [testbed-node-5] 2026-02-08 03:55:47.517982 | orchestrator | 2026-02-08 03:55:47.517989 | orchestrator | TASK [ceph-facts : Set_fact devices 
generate device list when osd_auto_discovery] *** 2026-02-08 03:55:47.517997 | orchestrator | Sunday 08 February 2026 03:55:47 +0000 (0:00:00.738) 0:00:18.329 ******* 2026-02-08 03:55:47.518011 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--658e9559--2696--538a--a0a4--811fe95d0be4-osd--block--658e9559--2696--538a--a0a4--811fe95d0be4', 'dm-uuid-LVM-0Xks5ejJzKHIGgn8kHKR683njlf26z09cwGJpvXAPY9denpvHRscKicxVHCDOUqc'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-08 03:55:47.652964 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--edf9913e--48af--595a--836b--515c584cb757-osd--block--edf9913e--48af--595a--836b--515c584cb757', 'dm-uuid-LVM-L5RzS25dNAwEfaxQtT2dCZejWp1FAHTXCd6gt7yIXasPp3uR45a3LLQBdDhLIQVH'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-08 03:55:47.653087 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 
'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-08 03:55:47.653104 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-08 03:55:47.653116 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-08 03:55:47.653128 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': 
{'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
(every item below was skipped with skip_reason 'Conditional result was False', false_condition 'osd_auto_discovery | default(False) | bool'; the repeated per-device fact dicts are condensed to the item key)
2026-02-08 03:55:47.653139 | orchestrator | skipping: [testbed-node-3] => (item=loop4)
2026-02-08 03:55:47.653167 | orchestrator | skipping: [testbed-node-3] => (item=loop5)
2026-02-08 03:55:47.653193 | orchestrator | skipping: [testbed-node-4] => (item=dm-0)
2026-02-08 03:55:47.653205 | orchestrator | skipping: [testbed-node-3] => (item=loop6)
2026-02-08 03:55:47.653221 | orchestrator | skipping: [testbed-node-3] => (item=loop7)
2026-02-08 03:55:47.653242 | orchestrator | skipping: [testbed-node-4] => (item=dm-1)
2026-02-08 03:55:47.653315 | orchestrator | skipping: [testbed-node-3] => (item=sda)
2026-02-08 03:55:47.757006 | orchestrator | skipping: [testbed-node-4] => (item=loop0)
2026-02-08 03:55:47.757123 | orchestrator | skipping: [testbed-node-3] => (item=sdb)
2026-02-08 03:55:47.757137 | orchestrator | skipping: [testbed-node-4] => (item=loop1)
2026-02-08 03:55:47.757148 | orchestrator | skipping: [testbed-node-3] => (item=sdc)
2026-02-08 03:55:47.757210 | orchestrator | skipping: [testbed-node-4] => (item=loop2)
2026-02-08 03:55:47.757251 | orchestrator | skipping: [testbed-node-3] => (item=sdd)
2026-02-08 03:55:47.757269 | orchestrator | skipping: [testbed-node-3] => (item=sr0)
2026-02-08 03:55:47.757285 | orchestrator | skipping: [testbed-node-4] => (item=loop3)
2026-02-08 03:55:47.757301 | orchestrator | skipping: [testbed-node-4] => (item=loop4)
2026-02-08 03:55:47.757316 | orchestrator | skipping: [testbed-node-4] => (item=loop5)
2026-02-08 03:55:47.757349 | orchestrator | skipping: [testbed-node-4] => (item=loop6)
2026-02-08 03:55:47.757378 | orchestrator | skipping: [testbed-node-4] => (item=loop7)
2026-02-08 03:55:47.929359 | orchestrator | skipping: [testbed-node-4] => (item=sda)
2026-02-08 03:55:47.929551 | orchestrator | skipping: [testbed-node-4] => (item=sdb)
2026-02-08 03:55:47.929589 | orchestrator | skipping: [testbed-node-3]
2026-02-08 03:55:47.929620 | orchestrator | skipping: [testbed-node-4] => (item=sdc)
2026-02-08 03:55:47.929631 | orchestrator | skipping: [testbed-node-4] => (item=sdd)
2026-02-08 03:55:47.929643 | orchestrator | skipping: [testbed-node-4] => (item=sr0)
2026-02-08 03:55:47.929653 | orchestrator | skipping: [testbed-node-4]
2026-02-08 03:55:47.929663 | orchestrator | skipping: [testbed-node-5] => (item=dm-0)
2026-02-08 03:55:47.929685 | orchestrator | skipping: [testbed-node-5] => (item=dm-1)
2026-02-08 03:55:47.929696 | orchestrator | skipping: [testbed-node-5] => (item=loop0)
2026-02-08 03:55:47.929714 | orchestrator | skipping: [testbed-node-5] => (item=loop1)
2026-02-08 03:55:48.133190 | orchestrator | skipping: [testbed-node-5] => (item=loop2)
2026-02-08 03:55:48.133314 | orchestrator | skipping: [testbed-node-5] => (item=loop3)
2026-02-08 03:55:48.133331 | orchestrator | skipping: [testbed-node-5] => (item=loop4)
2026-02-08 03:55:48.133377 | orchestrator | skipping: [testbed-node-5] => (item=loop5)
2026-02-08 03:55:48.133490 | orchestrator | skipping: [testbed-node-5] => (item=loop6)
2026-02-08 03:55:48.133507 | orchestrator | skipping: [testbed-node-5] => (item=loop7)
2026-02-08 03:55:48.133545 | orchestrator | skipping: [testbed-node-5] => (item=sda)
2026-02-08 03:55:48.133577 | orchestrator | skipping: [testbed-node-5] => (item=sdb)
2026-02-08 03:55:48.133591 | orchestrator | skipping: [testbed-node-5] => (item=sdc)
2026-02-08 03:55:48.133611 | orchestrator | skipping: [testbed-node-5] => (item=sdd)
2026-02-08 03:56:01.054478 | orchestrator | skipping: [testbed-node-5] => (item=sr0)
2026-02-08 03:56:01.054625 | orchestrator | skipping: [testbed-node-5]
2026-02-08 03:56:01.054686 | orchestrator |
2026-02-08 03:56:01.054707 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ******************************
2026-02-08 03:56:01.054729 | orchestrator | Sunday 08 February 2026 03:55:48 +0000 (0:00:00.711) 0:00:19.040 *******
2026-02-08 03:56:01.054749 | orchestrator | ok: [testbed-node-3]
2026-02-08 03:56:01.054771 | orchestrator | ok: [testbed-node-4]
2026-02-08 03:56:01.054790 | orchestrator | ok: [testbed-node-5]
2026-02-08 03:56:01.054810 | orchestrator |
2026-02-08 03:56:01.054831 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] ***************
2026-02-08 03:56:01.054853 | orchestrator | Sunday 08 February 2026 03:55:49 +0000 (0:00:00.941) 0:00:19.982 *******
2026-02-08 03:56:01.054874 | orchestrator | ok: [testbed-node-3]
2026-02-08 03:56:01.054896 | orchestrator | ok: [testbed-node-4]
2026-02-08 03:56:01.054917 | orchestrator | ok: [testbed-node-5]
2026-02-08 03:56:01.054939 | orchestrator |
2026-02-08 03:56:01.054961 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-02-08 03:56:01.054983 | orchestrator | Sunday 08 February 2026 03:55:49 +0000 (0:00:00.365) 0:00:20.347 *******
2026-02-08 03:56:01.055005 | orchestrator | ok: [testbed-node-3]
2026-02-08 03:56:01.055027 | orchestrator | ok: [testbed-node-4]
2026-02-08 03:56:01.055049 | orchestrator | ok: [testbed-node-5]
2026-02-08 03:56:01.055070 | orchestrator |
2026-02-08 03:56:01.055092 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-02-08 03:56:01.055116 | orchestrator | Sunday 08 February 2026 03:55:50 +0000 (0:00:00.674) 0:00:21.022 *******
2026-02-08 03:56:01.055137 | orchestrator | skipping: [testbed-node-3]
2026-02-08 03:56:01.055158 | orchestrator | skipping: [testbed-node-4]
2026-02-08 03:56:01.055178 | orchestrator | skipping: [testbed-node-5]
2026-02-08 03:56:01.055198 | orchestrator |
2026-02-08 03:56:01.055219 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-02-08 03:56:01.055240 | orchestrator | Sunday 08 February 2026 03:55:50 +0000 (0:00:00.328) 0:00:21.351 *******
2026-02-08 03:56:01.055260 | orchestrator | skipping: [testbed-node-3]
2026-02-08 03:56:01.055280 | orchestrator | skipping: [testbed-node-4]
2026-02-08 03:56:01.055297 | orchestrator | skipping: [testbed-node-5]
2026-02-08 03:56:01.055317 | orchestrator |
2026-02-08 03:56:01.055358 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-02-08 03:56:01.055380 | orchestrator | Sunday 08 February 2026 03:55:51 +0000 (0:00:00.786) 0:00:22.137 *******
2026-02-08 03:56:01.055453 | orchestrator | skipping: [testbed-node-3]
2026-02-08 03:56:01.055476 | orchestrator | skipping: [testbed-node-4]
2026-02-08 03:56:01.055496 | orchestrator | skipping: [testbed-node-5]
2026-02-08 03:56:01.055514 | orchestrator |
2026-02-08 03:56:01.055534 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] *************************
2026-02-08 03:56:01.055554 | orchestrator | Sunday 08 February 2026 03:55:51 +0000 (0:00:00.354) 0:00:22.492 *******
2026-02-08 03:56:01.055574 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0)
2026-02-08 03:56:01.055595 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0)
2026-02-08 03:56:01.055614 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1)
2026-02-08 03:56:01.055634 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0)
2026-02-08 03:56:01.055653 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1)
2026-02-08 03:56:01.055673 | orchestrator | 
ok: [testbed-node-3] => (item=testbed-node-2) 2026-02-08 03:56:01.055694 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1) 2026-02-08 03:56:01.055714 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2) 2026-02-08 03:56:01.055731 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2) 2026-02-08 03:56:01.055750 | orchestrator | 2026-02-08 03:56:01.055767 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-02-08 03:56:01.055786 | orchestrator | Sunday 08 February 2026 03:55:52 +0000 (0:00:01.133) 0:00:23.625 ******* 2026-02-08 03:56:01.055804 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2026-02-08 03:56:01.055823 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2026-02-08 03:56:01.055858 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2026-02-08 03:56:01.055877 | orchestrator | skipping: [testbed-node-3] 2026-02-08 03:56:01.055894 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2026-02-08 03:56:01.055912 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2026-02-08 03:56:01.055930 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2026-02-08 03:56:01.055949 | orchestrator | skipping: [testbed-node-4] 2026-02-08 03:56:01.055967 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2026-02-08 03:56:01.055984 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2026-02-08 03:56:01.056002 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2026-02-08 03:56:01.056021 | orchestrator | skipping: [testbed-node-5] 2026-02-08 03:56:01.056040 | orchestrator | 2026-02-08 03:56:01.056060 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-02-08 03:56:01.056080 | orchestrator | Sunday 08 February 2026 03:55:53 +0000 (0:00:00.393) 0:00:24.019 ******* 2026-02-08 
03:56:01.056128 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-08 03:56:01.056150 | orchestrator | 2026-02-08 03:56:01.056169 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-02-08 03:56:01.056188 | orchestrator | Sunday 08 February 2026 03:55:53 +0000 (0:00:00.876) 0:00:24.896 ******* 2026-02-08 03:56:01.056205 | orchestrator | skipping: [testbed-node-3] 2026-02-08 03:56:01.056223 | orchestrator | skipping: [testbed-node-4] 2026-02-08 03:56:01.056241 | orchestrator | skipping: [testbed-node-5] 2026-02-08 03:56:01.056261 | orchestrator | 2026-02-08 03:56:01.056278 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-02-08 03:56:01.056296 | orchestrator | Sunday 08 February 2026 03:55:54 +0000 (0:00:00.340) 0:00:25.237 ******* 2026-02-08 03:56:01.056315 | orchestrator | skipping: [testbed-node-3] 2026-02-08 03:56:01.056333 | orchestrator | skipping: [testbed-node-4] 2026-02-08 03:56:01.056351 | orchestrator | skipping: [testbed-node-5] 2026-02-08 03:56:01.056371 | orchestrator | 2026-02-08 03:56:01.056390 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-02-08 03:56:01.056437 | orchestrator | Sunday 08 February 2026 03:55:54 +0000 (0:00:00.348) 0:00:25.585 ******* 2026-02-08 03:56:01.056449 | orchestrator | skipping: [testbed-node-3] 2026-02-08 03:56:01.056460 | orchestrator | skipping: [testbed-node-4] 2026-02-08 03:56:01.056471 | orchestrator | skipping: [testbed-node-5] 2026-02-08 03:56:01.056482 | orchestrator | 2026-02-08 03:56:01.056493 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-02-08 03:56:01.056504 | orchestrator | Sunday 08 February 2026 03:55:55 +0000 (0:00:00.601) 0:00:26.187 ******* 2026-02-08 
03:56:01.056514 | orchestrator | ok: [testbed-node-3] 2026-02-08 03:56:01.056525 | orchestrator | ok: [testbed-node-4] 2026-02-08 03:56:01.056536 | orchestrator | ok: [testbed-node-5] 2026-02-08 03:56:01.056547 | orchestrator | 2026-02-08 03:56:01.056558 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-02-08 03:56:01.056569 | orchestrator | Sunday 08 February 2026 03:55:55 +0000 (0:00:00.509) 0:00:26.697 ******* 2026-02-08 03:56:01.056580 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-02-08 03:56:01.056591 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-02-08 03:56:01.056602 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-02-08 03:56:01.056613 | orchestrator | skipping: [testbed-node-3] 2026-02-08 03:56:01.056624 | orchestrator | 2026-02-08 03:56:01.056634 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-02-08 03:56:01.056645 | orchestrator | Sunday 08 February 2026 03:55:56 +0000 (0:00:00.391) 0:00:27.089 ******* 2026-02-08 03:56:01.056656 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-02-08 03:56:01.056667 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-02-08 03:56:01.056690 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-02-08 03:56:01.056702 | orchestrator | skipping: [testbed-node-3] 2026-02-08 03:56:01.056712 | orchestrator | 2026-02-08 03:56:01.056723 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-02-08 03:56:01.056745 | orchestrator | Sunday 08 February 2026 03:55:56 +0000 (0:00:00.421) 0:00:27.511 ******* 2026-02-08 03:56:01.056756 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-02-08 03:56:01.056767 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-02-08 03:56:01.056778 | 
orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-02-08 03:56:01.056788 | orchestrator | skipping: [testbed-node-3] 2026-02-08 03:56:01.056799 | orchestrator | 2026-02-08 03:56:01.056810 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-02-08 03:56:01.056821 | orchestrator | Sunday 08 February 2026 03:55:56 +0000 (0:00:00.395) 0:00:27.906 ******* 2026-02-08 03:56:01.056832 | orchestrator | ok: [testbed-node-3] 2026-02-08 03:56:01.056843 | orchestrator | ok: [testbed-node-4] 2026-02-08 03:56:01.056854 | orchestrator | ok: [testbed-node-5] 2026-02-08 03:56:01.056864 | orchestrator | 2026-02-08 03:56:01.056875 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-02-08 03:56:01.056886 | orchestrator | Sunday 08 February 2026 03:55:57 +0000 (0:00:00.347) 0:00:28.253 ******* 2026-02-08 03:56:01.056897 | orchestrator | ok: [testbed-node-3] => (item=0) 2026-02-08 03:56:01.056908 | orchestrator | ok: [testbed-node-4] => (item=0) 2026-02-08 03:56:01.056918 | orchestrator | ok: [testbed-node-5] => (item=0) 2026-02-08 03:56:01.056929 | orchestrator | 2026-02-08 03:56:01.056940 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-02-08 03:56:01.056951 | orchestrator | Sunday 08 February 2026 03:55:58 +0000 (0:00:00.844) 0:00:29.098 ******* 2026-02-08 03:56:01.056962 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-08 03:56:01.056973 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-08 03:56:01.056984 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-08 03:56:01.056994 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2026-02-08 03:56:01.057005 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => 
(item=testbed-node-4) 2026-02-08 03:56:01.057016 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-02-08 03:56:01.057027 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-02-08 03:56:01.057038 | orchestrator | 2026-02-08 03:56:01.057049 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-02-08 03:56:01.057060 | orchestrator | Sunday 08 February 2026 03:55:59 +0000 (0:00:00.868) 0:00:29.966 ******* 2026-02-08 03:56:01.057071 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-08 03:56:01.057093 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-08 03:57:36.410920 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-08 03:57:36.411005 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2026-02-08 03:57:36.411013 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-02-08 03:57:36.411018 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-02-08 03:57:36.411022 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-02-08 03:57:36.411026 | orchestrator | 2026-02-08 03:57:36.411031 | orchestrator | TASK [Include tasks from the ceph-osd role] ************************************ 2026-02-08 03:57:36.411036 | orchestrator | Sunday 08 February 2026 03:56:01 +0000 (0:00:01.989) 0:00:31.956 ******* 2026-02-08 03:57:36.411040 | orchestrator | skipping: [testbed-node-3] 2026-02-08 03:57:36.411062 | orchestrator | skipping: [testbed-node-4] 2026-02-08 03:57:36.411066 | orchestrator | included: /ansible/tasks/openstack_config.yml for testbed-node-5 2026-02-08 03:57:36.411070 | orchestrator | 2026-02-08 03:57:36.411074 | 
orchestrator | TASK [create openstack pool(s)] ************************************************ 2026-02-08 03:57:36.411078 | orchestrator | Sunday 08 February 2026 03:56:01 +0000 (0:00:00.429) 0:00:32.386 ******* 2026-02-08 03:57:36.411083 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'backups', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-02-08 03:57:36.411090 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'volumes', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-02-08 03:57:36.411093 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'images', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-02-08 03:57:36.411097 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'metrics', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-02-08 03:57:36.411112 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'vms', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-02-08 03:57:36.411116 | orchestrator | 2026-02-08 03:57:36.411119 | orchestrator | TASK [generate keys] 
*********************************************************** 2026-02-08 03:57:36.411123 | orchestrator | Sunday 08 February 2026 03:56:45 +0000 (0:00:43.566) 0:01:15.953 ******* 2026-02-08 03:57:36.411127 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-08 03:57:36.411131 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-08 03:57:36.411134 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-08 03:57:36.411138 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-08 03:57:36.411142 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-08 03:57:36.411145 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-08 03:57:36.411149 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] }}] 2026-02-08 03:57:36.411153 | orchestrator | 2026-02-08 03:57:36.411157 | orchestrator | TASK [get keys from monitors] ************************************************** 2026-02-08 03:57:36.411160 | orchestrator | Sunday 08 February 2026 03:57:07 +0000 (0:00:22.878) 0:01:38.832 ******* 2026-02-08 03:57:36.411164 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-08 03:57:36.411168 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-08 03:57:36.411171 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-08 03:57:36.411175 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-08 03:57:36.411179 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-08 03:57:36.411183 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-08 03:57:36.411186 | orchestrator | 
ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2026-02-08 03:57:36.411194 | orchestrator | 2026-02-08 03:57:36.411198 | orchestrator | TASK [copy ceph key(s) if needed] ********************************************** 2026-02-08 03:57:36.411202 | orchestrator | Sunday 08 February 2026 03:57:19 +0000 (0:00:11.341) 0:01:50.173 ******* 2026-02-08 03:57:36.411205 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-08 03:57:36.411218 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-02-08 03:57:36.411222 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-02-08 03:57:36.411226 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-08 03:57:36.411230 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-02-08 03:57:36.411233 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-02-08 03:57:36.411237 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-08 03:57:36.411241 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-02-08 03:57:36.411245 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-02-08 03:57:36.411249 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-08 03:57:36.411252 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-02-08 03:57:36.411256 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-02-08 03:57:36.411260 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-08 03:57:36.411263 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 
2026-02-08 03:57:36.411267 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-02-08 03:57:36.411271 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-08 03:57:36.411274 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-02-08 03:57:36.411278 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-02-08 03:57:36.411282 | orchestrator | changed: [testbed-node-5 -> {{ item.1 }}] 2026-02-08 03:57:36.411286 | orchestrator | 2026-02-08 03:57:36.411290 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-08 03:57:36.411294 | orchestrator | testbed-node-3 : ok=25  changed=0 unreachable=0 failed=0 skipped=28  rescued=0 ignored=0 2026-02-08 03:57:36.411298 | orchestrator | testbed-node-4 : ok=18  changed=0 unreachable=0 failed=0 skipped=21  rescued=0 ignored=0 2026-02-08 03:57:36.411303 | orchestrator | testbed-node-5 : ok=23  changed=3  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0 2026-02-08 03:57:36.411307 | orchestrator | 2026-02-08 03:57:36.411311 | orchestrator | 2026-02-08 03:57:36.411314 | orchestrator | 2026-02-08 03:57:36.411318 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-08 03:57:36.411324 | orchestrator | Sunday 08 February 2026 03:57:35 +0000 (0:00:16.734) 0:02:06.907 ******* 2026-02-08 03:57:36.411328 | orchestrator | =============================================================================== 2026-02-08 03:57:36.411332 | orchestrator | create openstack pool(s) ----------------------------------------------- 43.57s 2026-02-08 03:57:36.411335 | orchestrator | generate keys ---------------------------------------------------------- 22.88s 2026-02-08 03:57:36.411339 | orchestrator | copy ceph key(s) if needed --------------------------------------------- 16.73s 
2026-02-08 03:57:36.411343 | orchestrator | get keys from monitors ------------------------------------------------- 11.34s 2026-02-08 03:57:36.411347 | orchestrator | ceph-facts : Find a running mon container ------------------------------- 2.27s 2026-02-08 03:57:36.411350 | orchestrator | ceph-facts : Set_fact ceph_admin_command -------------------------------- 1.99s 2026-02-08 03:57:36.411357 | orchestrator | ceph-facts : Get current fsid if cluster is already running ------------- 1.67s 2026-02-08 03:57:36.411361 | orchestrator | ceph-facts : Set_fact _monitor_addresses - ipv4 ------------------------- 1.13s 2026-02-08 03:57:36.411364 | orchestrator | ceph-facts : Check if the ceph mon socket is in-use --------------------- 1.10s 2026-02-08 03:57:36.411369 | orchestrator | ceph-facts : Check if the ceph conf exists ------------------------------ 0.94s 2026-02-08 03:57:36.411372 | orchestrator | ceph-facts : Get current fsid ------------------------------------------- 0.90s 2026-02-08 03:57:36.411376 | orchestrator | ceph-facts : Import_tasks set_radosgw_address.yml ----------------------- 0.88s 2026-02-08 03:57:36.411380 | orchestrator | ceph-facts : Set_fact ceph_run_cmd -------------------------------------- 0.87s 2026-02-08 03:57:36.411383 | orchestrator | ceph-facts : Set_fact rgw_instances ------------------------------------- 0.84s 2026-02-08 03:57:36.411387 | orchestrator | ceph-facts : Read osd pool default crush rule --------------------------- 0.79s 2026-02-08 03:57:36.411391 | orchestrator | ceph-facts : Check if podman binary is present -------------------------- 0.78s 2026-02-08 03:57:36.411395 | orchestrator | ceph-facts : Set_fact monitor_name ansible_facts['hostname'] ------------ 0.75s 2026-02-08 03:57:36.411398 | orchestrator | ceph-facts : Collect existed devices ------------------------------------ 0.74s 2026-02-08 03:57:36.411402 | orchestrator | ceph-facts : Set_fact devices generate device list when osd_auto_discovery --- 0.71s 2026-02-08 
03:57:36.411406 | orchestrator | ceph-facts : Check for a ceph mon socket -------------------------------- 0.69s 2026-02-08 03:57:39.011006 | orchestrator | 2026-02-08 03:57:39 | INFO  | Task fd565879-bcb9-40b3-b976-713c174dc81b (copy-ceph-keys) was prepared for execution. 2026-02-08 03:57:39.011127 | orchestrator | 2026-02-08 03:57:39 | INFO  | It takes a moment until task fd565879-bcb9-40b3-b976-713c174dc81b (copy-ceph-keys) has been started and output is visible here. 2026-02-08 03:58:18.514394 | orchestrator | 2026-02-08 03:58:18.514558 | orchestrator | PLAY [Copy ceph keys to the configuration repository] ************************** 2026-02-08 03:58:18.514571 | orchestrator | 2026-02-08 03:58:18.514578 | orchestrator | TASK [Check if ceph keys exist] ************************************************ 2026-02-08 03:58:18.514584 | orchestrator | Sunday 08 February 2026 03:57:43 +0000 (0:00:00.174) 0:00:00.174 ******* 2026-02-08 03:58:18.514592 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.admin.keyring) 2026-02-08 03:58:18.514599 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-02-08 03:58:18.514606 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-02-08 03:58:18.514612 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder-backup.keyring) 2026-02-08 03:58:18.514618 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-02-08 03:58:18.514624 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.nova.keyring) 2026-02-08 03:58:18.514630 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.glance.keyring) 2026-02-08 03:58:18.514636 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => 
(item=ceph.client.gnocchi.keyring) 2026-02-08 03:58:18.514642 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.manila.keyring) 2026-02-08 03:58:18.514648 | orchestrator | 2026-02-08 03:58:18.514654 | orchestrator | TASK [Fetch all ceph keys] ***************************************************** 2026-02-08 03:58:18.514660 | orchestrator | Sunday 08 February 2026 03:57:48 +0000 (0:00:04.794) 0:00:04.968 ******* 2026-02-08 03:58:18.514666 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.admin.keyring) 2026-02-08 03:58:18.514672 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-02-08 03:58:18.514698 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-02-08 03:58:18.514706 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder-backup.keyring) 2026-02-08 03:58:18.514710 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-02-08 03:58:18.514714 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.nova.keyring) 2026-02-08 03:58:18.514730 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.glance.keyring) 2026-02-08 03:58:18.514734 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.gnocchi.keyring) 2026-02-08 03:58:18.514737 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.manila.keyring) 2026-02-08 03:58:18.514741 | orchestrator | 2026-02-08 03:58:18.514745 | orchestrator | TASK [Create share directory] ************************************************** 2026-02-08 03:58:18.514749 | orchestrator | Sunday 08 February 2026 03:57:52 +0000 (0:00:04.292) 0:00:09.260 ******* 2026-02-08 03:58:18.514754 
| orchestrator | changed: [testbed-manager -> localhost] 2026-02-08 03:58:18.514758 | orchestrator | 2026-02-08 03:58:18.514761 | orchestrator | TASK [Write ceph keys to the share directory] ********************************** 2026-02-08 03:58:18.514765 | orchestrator | Sunday 08 February 2026 03:57:53 +0000 (0:00:01.012) 0:00:10.272 ******* 2026-02-08 03:58:18.514769 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.admin.keyring) 2026-02-08 03:58:18.514774 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2026-02-08 03:58:18.514778 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2026-02-08 03:58:18.514782 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.cinder-backup.keyring) 2026-02-08 03:58:18.514785 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2026-02-08 03:58:18.514789 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.nova.keyring) 2026-02-08 03:58:18.514793 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.glance.keyring) 2026-02-08 03:58:18.514797 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.gnocchi.keyring) 2026-02-08 03:58:18.514801 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.manila.keyring) 2026-02-08 03:58:18.514805 | orchestrator | 2026-02-08 03:58:18.514808 | orchestrator | TASK [Check if target directories exist] *************************************** 2026-02-08 03:58:18.514812 | orchestrator | Sunday 08 February 2026 03:58:07 +0000 (0:00:13.976) 0:00:24.249 ******* 2026-02-08 03:58:18.514816 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/infrastructure/files/ceph) 2026-02-08 03:58:18.514819 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/cinder/cinder-volume) 
2026-02-08 03:58:18.514824 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/cinder/cinder-backup) 2026-02-08 03:58:18.514828 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/cinder/cinder-backup) 2026-02-08 03:58:18.514842 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/nova) 2026-02-08 03:58:18.514846 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/nova) 2026-02-08 03:58:18.514849 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/glance) 2026-02-08 03:58:18.514853 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/gnocchi) 2026-02-08 03:58:18.514857 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/manila) 2026-02-08 03:58:18.514861 | orchestrator | 2026-02-08 03:58:18.514864 | orchestrator | TASK [Write ceph keys to the configuration directory] ************************** 2026-02-08 03:58:18.514872 | orchestrator | Sunday 08 February 2026 03:58:10 +0000 (0:00:03.264) 0:00:27.514 ******* 2026-02-08 03:58:18.514877 | orchestrator | changed: [testbed-manager] => (item=ceph.client.admin.keyring) 2026-02-08 03:58:18.514881 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring) 2026-02-08 03:58:18.514884 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring) 2026-02-08 03:58:18.514888 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder-backup.keyring) 2026-02-08 03:58:18.514892 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring) 2026-02-08 03:58:18.514896 | orchestrator | changed: [testbed-manager] => (item=ceph.client.nova.keyring) 2026-02-08 03:58:18.514899 | orchestrator | changed: [testbed-manager] => 
(item=ceph.client.glance.keyring) 2026-02-08 03:58:18.514903 | orchestrator | changed: [testbed-manager] => (item=ceph.client.gnocchi.keyring) 2026-02-08 03:58:18.514907 | orchestrator | changed: [testbed-manager] => (item=ceph.client.manila.keyring) 2026-02-08 03:58:18.514910 | orchestrator | 2026-02-08 03:58:18.514914 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-08 03:58:18.514918 | orchestrator | testbed-manager : ok=6  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-08 03:58:18.514922 | orchestrator | 2026-02-08 03:58:18.514926 | orchestrator | 2026-02-08 03:58:18.514931 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-08 03:58:18.514934 | orchestrator | Sunday 08 February 2026 03:58:18 +0000 (0:00:07.194) 0:00:34.708 ******* 2026-02-08 03:58:18.514938 | orchestrator | =============================================================================== 2026-02-08 03:58:18.514942 | orchestrator | Write ceph keys to the share directory --------------------------------- 13.98s 2026-02-08 03:58:18.514946 | orchestrator | Write ceph keys to the configuration directory -------------------------- 7.19s 2026-02-08 03:58:18.514951 | orchestrator | Check if ceph keys exist ------------------------------------------------ 4.79s 2026-02-08 03:58:18.514958 | orchestrator | Fetch all ceph keys ----------------------------------------------------- 4.29s 2026-02-08 03:58:18.514962 | orchestrator | Check if target directories exist --------------------------------------- 3.26s 2026-02-08 03:58:18.514967 | orchestrator | Create share directory -------------------------------------------------- 1.01s 2026-02-08 03:58:31.013830 | orchestrator | 2026-02-08 03:58:31 | INFO  | Task c519f6b5-7f3e-4d00-8e3e-255cc826eafe (cephclient) was prepared for execution. 
2026-02-08 03:58:31.013946 | orchestrator | 2026-02-08 03:58:31 | INFO  | It takes a moment until task c519f6b5-7f3e-4d00-8e3e-255cc826eafe (cephclient) has been started and output is visible here. 2026-02-08 03:59:32.671800 | orchestrator | 2026-02-08 03:59:32.671927 | orchestrator | PLAY [Apply role cephclient] *************************************************** 2026-02-08 03:59:32.671947 | orchestrator | 2026-02-08 03:59:32.671961 | orchestrator | TASK [osism.services.cephclient : Include container tasks] ********************* 2026-02-08 03:59:32.671976 | orchestrator | Sunday 08 February 2026 03:58:35 +0000 (0:00:00.250) 0:00:00.250 ******* 2026-02-08 03:59:32.671990 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/cephclient/tasks/container.yml for testbed-manager 2026-02-08 03:59:32.672005 | orchestrator | 2026-02-08 03:59:32.672019 | orchestrator | TASK [osism.services.cephclient : Create required directories] ***************** 2026-02-08 03:59:32.672032 | orchestrator | Sunday 08 February 2026 03:58:36 +0000 (0:00:00.231) 0:00:00.481 ******* 2026-02-08 03:59:32.672045 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/configuration) 2026-02-08 03:59:32.672058 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/data) 2026-02-08 03:59:32.672073 | orchestrator | ok: [testbed-manager] => (item=/opt/cephclient) 2026-02-08 03:59:32.672086 | orchestrator | 2026-02-08 03:59:32.672100 | orchestrator | TASK [osism.services.cephclient : Copy configuration files] ******************** 2026-02-08 03:59:32.672146 | orchestrator | Sunday 08 February 2026 03:58:37 +0000 (0:00:01.273) 0:00:01.754 ******* 2026-02-08 03:59:32.672162 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.conf.j2', 'dest': '/opt/cephclient/configuration/ceph.conf'}) 2026-02-08 03:59:32.672175 | orchestrator | 2026-02-08 03:59:32.672188 | orchestrator | TASK [osism.services.cephclient : Copy keyring 
file] *************************** 2026-02-08 03:59:32.672201 | orchestrator | Sunday 08 February 2026 03:58:38 +0000 (0:00:01.511) 0:00:03.266 ******* 2026-02-08 03:59:32.672214 | orchestrator | changed: [testbed-manager] 2026-02-08 03:59:32.672228 | orchestrator | 2026-02-08 03:59:32.672241 | orchestrator | TASK [osism.services.cephclient : Copy docker-compose.yml file] **************** 2026-02-08 03:59:32.672254 | orchestrator | Sunday 08 February 2026 03:58:39 +0000 (0:00:00.975) 0:00:04.242 ******* 2026-02-08 03:59:32.672267 | orchestrator | changed: [testbed-manager] 2026-02-08 03:59:32.672279 | orchestrator | 2026-02-08 03:59:32.672292 | orchestrator | TASK [osism.services.cephclient : Manage cephclient service] ******************* 2026-02-08 03:59:32.672305 | orchestrator | Sunday 08 February 2026 03:58:40 +0000 (0:00:00.970) 0:00:05.212 ******* 2026-02-08 03:59:32.672318 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage cephclient service (10 retries left). 2026-02-08 03:59:32.672332 | orchestrator | ok: [testbed-manager] 2026-02-08 03:59:32.672347 | orchestrator | 2026-02-08 03:59:32.672361 | orchestrator | TASK [osism.services.cephclient : Copy wrapper scripts] ************************ 2026-02-08 03:59:32.672376 | orchestrator | Sunday 08 February 2026 03:59:22 +0000 (0:00:41.509) 0:00:46.721 ******* 2026-02-08 03:59:32.672386 | orchestrator | changed: [testbed-manager] => (item=ceph) 2026-02-08 03:59:32.672396 | orchestrator | changed: [testbed-manager] => (item=ceph-authtool) 2026-02-08 03:59:32.672405 | orchestrator | changed: [testbed-manager] => (item=rados) 2026-02-08 03:59:32.672414 | orchestrator | changed: [testbed-manager] => (item=radosgw-admin) 2026-02-08 03:59:32.672423 | orchestrator | changed: [testbed-manager] => (item=rbd) 2026-02-08 03:59:32.672432 | orchestrator | 2026-02-08 03:59:32.672441 | orchestrator | TASK [osism.services.cephclient : Remove old wrapper scripts] ****************** 2026-02-08 03:59:32.672450 | 
orchestrator | Sunday 08 February 2026 03:59:26 +0000 (0:00:04.220) 0:00:50.942 ******* 2026-02-08 03:59:32.672459 | orchestrator | ok: [testbed-manager] => (item=crushtool) 2026-02-08 03:59:32.672470 | orchestrator | 2026-02-08 03:59:32.672479 | orchestrator | TASK [osism.services.cephclient : Include package tasks] *********************** 2026-02-08 03:59:32.672514 | orchestrator | Sunday 08 February 2026 03:59:26 +0000 (0:00:00.476) 0:00:51.419 ******* 2026-02-08 03:59:32.672524 | orchestrator | skipping: [testbed-manager] 2026-02-08 03:59:32.672533 | orchestrator | 2026-02-08 03:59:32.672542 | orchestrator | TASK [osism.services.cephclient : Include rook task] *************************** 2026-02-08 03:59:32.672550 | orchestrator | Sunday 08 February 2026 03:59:27 +0000 (0:00:00.157) 0:00:51.576 ******* 2026-02-08 03:59:32.672558 | orchestrator | skipping: [testbed-manager] 2026-02-08 03:59:32.672565 | orchestrator | 2026-02-08 03:59:32.672573 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Restart cephclient service] ******* 2026-02-08 03:59:32.672581 | orchestrator | Sunday 08 February 2026 03:59:27 +0000 (0:00:00.577) 0:00:52.154 ******* 2026-02-08 03:59:32.672589 | orchestrator | changed: [testbed-manager] 2026-02-08 03:59:32.672596 | orchestrator | 2026-02-08 03:59:32.672604 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Ensure that all containers are up] *** 2026-02-08 03:59:32.672612 | orchestrator | Sunday 08 February 2026 03:59:29 +0000 (0:00:01.577) 0:00:53.731 ******* 2026-02-08 03:59:32.672619 | orchestrator | changed: [testbed-manager] 2026-02-08 03:59:32.672627 | orchestrator | 2026-02-08 03:59:32.672635 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Wait for an healthy service] ****** 2026-02-08 03:59:32.672642 | orchestrator | Sunday 08 February 2026 03:59:30 +0000 (0:00:00.802) 0:00:54.534 ******* 2026-02-08 03:59:32.672650 | orchestrator | changed: [testbed-manager] 2026-02-08 03:59:32.672658 | 
orchestrator | 2026-02-08 03:59:32.672665 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Copy bash completion scripts] ***** 2026-02-08 03:59:32.672697 | orchestrator | Sunday 08 February 2026 03:59:30 +0000 (0:00:00.602) 0:00:55.137 ******* 2026-02-08 03:59:32.672706 | orchestrator | ok: [testbed-manager] => (item=ceph) 2026-02-08 03:59:32.672713 | orchestrator | ok: [testbed-manager] => (item=rados) 2026-02-08 03:59:32.672721 | orchestrator | ok: [testbed-manager] => (item=radosgw-admin) 2026-02-08 03:59:32.672729 | orchestrator | ok: [testbed-manager] => (item=rbd) 2026-02-08 03:59:32.672737 | orchestrator | 2026-02-08 03:59:32.672744 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-08 03:59:32.672753 | orchestrator | testbed-manager : ok=12  changed=8  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-08 03:59:32.672761 | orchestrator | 2026-02-08 03:59:32.672769 | orchestrator | 2026-02-08 03:59:32.672794 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-08 03:59:32.672803 | orchestrator | Sunday 08 February 2026 03:59:32 +0000 (0:00:01.576) 0:00:56.713 ******* 2026-02-08 03:59:32.672810 | orchestrator | =============================================================================== 2026-02-08 03:59:32.672818 | orchestrator | osism.services.cephclient : Manage cephclient service ------------------ 41.51s 2026-02-08 03:59:32.672826 | orchestrator | osism.services.cephclient : Copy wrapper scripts ------------------------ 4.22s 2026-02-08 03:59:32.672834 | orchestrator | osism.services.cephclient : Restart cephclient service ------------------ 1.58s 2026-02-08 03:59:32.672841 | orchestrator | osism.services.cephclient : Copy bash completion scripts ---------------- 1.58s 2026-02-08 03:59:32.672849 | orchestrator | osism.services.cephclient : Copy configuration files -------------------- 1.51s 2026-02-08 03:59:32.672857 | 
orchestrator | osism.services.cephclient : Create required directories ----------------- 1.27s 2026-02-08 03:59:32.672865 | orchestrator | osism.services.cephclient : Copy keyring file --------------------------- 0.98s 2026-02-08 03:59:32.672872 | orchestrator | osism.services.cephclient : Copy docker-compose.yml file ---------------- 0.97s 2026-02-08 03:59:32.672880 | orchestrator | osism.services.cephclient : Ensure that all containers are up ----------- 0.80s 2026-02-08 03:59:32.672887 | orchestrator | osism.services.cephclient : Wait for an healthy service ----------------- 0.60s 2026-02-08 03:59:32.672895 | orchestrator | osism.services.cephclient : Include rook task --------------------------- 0.58s 2026-02-08 03:59:32.672903 | orchestrator | osism.services.cephclient : Remove old wrapper scripts ------------------ 0.48s 2026-02-08 03:59:32.672911 | orchestrator | osism.services.cephclient : Include container tasks --------------------- 0.23s 2026-02-08 03:59:32.672924 | orchestrator | osism.services.cephclient : Include package tasks ----------------------- 0.16s 2026-02-08 03:59:35.333141 | orchestrator | 2026-02-08 03:59:35 | INFO  | Task 6adeb005-1426-4ad1-89a0-70255d16ec01 (ceph-bootstrap-dashboard) was prepared for execution. 2026-02-08 03:59:35.333262 | orchestrator | 2026-02-08 03:59:35 | INFO  | It takes a moment until task 6adeb005-1426-4ad1-89a0-70255d16ec01 (ceph-bootstrap-dashboard) has been started and output is visible here. 
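The `ceph-bootstrap-dashboard` play that runs next disables the dashboard module, sets several `mgr/dashboard/*` options, re-enables the module, and creates an admin user from a temporary password file. A hedged sketch of the equivalent ceph CLI calls (inferred from the task names in the log below; exact flags and ordering inside the play are assumptions):

```shell
#!/bin/sh
# Sketch of the ceph CLI steps the ceph-bootstrap-dashboard play appears to
# perform. Wrapped in a function and guarded so the sketch is safe to source
# on a machine without a ceph cluster.
configure_dashboard() {
    ceph mgr module disable dashboard
    ceph config set mgr mgr/dashboard/ssl false
    ceph config set mgr mgr/dashboard/server_port 7000
    ceph config set mgr mgr/dashboard/server_addr 0.0.0.0
    ceph config set mgr mgr/dashboard/standby_behaviour error
    ceph config set mgr mgr/dashboard/standby_error_status_code 404
    ceph mgr module enable dashboard

    # Admin user is created from a password file, then the file is removed
    # (the play above skips the removal when the create step fails).
    printf '%s' "$CEPH_DASHBOARD_PASSWORD" > /tmp/dashboard_password
    ceph dashboard ac-user-create admin -i /tmp/dashboard_password administrator
    rm -f /tmp/dashboard_password
}

if command -v ceph >/dev/null 2>&1; then
    configure_dashboard
else
    echo "ceph CLI not found; sketch not executed"
fi
```

The disable/enable cycle around the `config set` calls matches the log's ordering and is what forces the mgr to pick up the new port, address, and SSL settings; the mgr restarts on each node afterwards serve the same purpose cluster-wide.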
2026-02-08 04:00:55.924053 | orchestrator | [WARNING]: Collection community.general does not support Ansible version 2026-02-08 04:00:55.924146 | orchestrator | 2.16.14 2026-02-08 04:00:55.924155 | orchestrator | 2026-02-08 04:00:55.924163 | orchestrator | PLAY [Bootstraph ceph dashboard] *********************************************** 2026-02-08 04:00:55.924170 | orchestrator | 2026-02-08 04:00:55.924176 | orchestrator | TASK [Disable the ceph dashboard] ********************************************** 2026-02-08 04:00:55.924182 | orchestrator | Sunday 08 February 2026 03:59:40 +0000 (0:00:00.301) 0:00:00.302 ******* 2026-02-08 04:00:55.924188 | orchestrator | changed: [testbed-manager] 2026-02-08 04:00:55.924195 | orchestrator | 2026-02-08 04:00:55.924201 | orchestrator | TASK [Set mgr/dashboard/ssl to false] ****************************************** 2026-02-08 04:00:55.924206 | orchestrator | Sunday 08 February 2026 03:59:41 +0000 (0:00:01.464) 0:00:01.766 ******* 2026-02-08 04:00:55.924212 | orchestrator | changed: [testbed-manager] 2026-02-08 04:00:55.924217 | orchestrator | 2026-02-08 04:00:55.924222 | orchestrator | TASK [Set mgr/dashboard/server_port to 7000] *********************************** 2026-02-08 04:00:55.924247 | orchestrator | Sunday 08 February 2026 03:59:42 +0000 (0:00:01.058) 0:00:02.824 ******* 2026-02-08 04:00:55.924253 | orchestrator | changed: [testbed-manager] 2026-02-08 04:00:55.924258 | orchestrator | 2026-02-08 04:00:55.924264 | orchestrator | TASK [Set mgr/dashboard/server_addr to 0.0.0.0] ******************************** 2026-02-08 04:00:55.924269 | orchestrator | Sunday 08 February 2026 03:59:43 +0000 (0:00:01.082) 0:00:03.907 ******* 2026-02-08 04:00:55.924274 | orchestrator | changed: [testbed-manager] 2026-02-08 04:00:55.924280 | orchestrator | 2026-02-08 04:00:55.924285 | orchestrator | TASK [Set mgr/dashboard/standby_behaviour to error] **************************** 2026-02-08 04:00:55.924291 | orchestrator | Sunday 08 February 
2026 03:59:44 +0000 (0:00:01.208) 0:00:05.115 ******* 2026-02-08 04:00:55.924296 | orchestrator | changed: [testbed-manager] 2026-02-08 04:00:55.924301 | orchestrator | 2026-02-08 04:00:55.924307 | orchestrator | TASK [Set mgr/dashboard/standby_error_status_code to 404] ********************** 2026-02-08 04:00:55.924312 | orchestrator | Sunday 08 February 2026 03:59:46 +0000 (0:00:01.169) 0:00:06.284 ******* 2026-02-08 04:00:55.924318 | orchestrator | changed: [testbed-manager] 2026-02-08 04:00:55.924323 | orchestrator | 2026-02-08 04:00:55.924328 | orchestrator | TASK [Enable the ceph dashboard] *********************************************** 2026-02-08 04:00:55.924334 | orchestrator | Sunday 08 February 2026 03:59:47 +0000 (0:00:01.091) 0:00:07.376 ******* 2026-02-08 04:00:55.924339 | orchestrator | changed: [testbed-manager] 2026-02-08 04:00:55.924345 | orchestrator | 2026-02-08 04:00:55.924350 | orchestrator | TASK [Write ceph_dashboard_password to temporary file] ************************* 2026-02-08 04:00:55.924356 | orchestrator | Sunday 08 February 2026 03:59:49 +0000 (0:00:02.042) 0:00:09.418 ******* 2026-02-08 04:00:55.924361 | orchestrator | changed: [testbed-manager] 2026-02-08 04:00:55.924366 | orchestrator | 2026-02-08 04:00:55.924372 | orchestrator | TASK [Create admin user] ******************************************************* 2026-02-08 04:00:55.924388 | orchestrator | Sunday 08 February 2026 03:59:50 +0000 (0:00:01.256) 0:00:10.675 ******* 2026-02-08 04:00:55.924393 | orchestrator | changed: [testbed-manager] 2026-02-08 04:00:55.924399 | orchestrator | 2026-02-08 04:00:55.924404 | orchestrator | TASK [Remove temporary file for ceph_dashboard_password] *********************** 2026-02-08 04:00:55.924410 | orchestrator | Sunday 08 February 2026 04:00:31 +0000 (0:00:40.537) 0:00:51.212 ******* 2026-02-08 04:00:55.924415 | orchestrator | skipping: [testbed-manager] 2026-02-08 04:00:55.924420 | orchestrator | 2026-02-08 04:00:55.924426 | orchestrator | 
PLAY [Restart ceph manager services] ******************************************* 2026-02-08 04:00:55.924431 | orchestrator | 2026-02-08 04:00:55.924437 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2026-02-08 04:00:55.924442 | orchestrator | Sunday 08 February 2026 04:00:31 +0000 (0:00:00.160) 0:00:51.373 ******* 2026-02-08 04:00:55.924447 | orchestrator | changed: [testbed-node-0] 2026-02-08 04:00:55.924453 | orchestrator | 2026-02-08 04:00:55.924458 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2026-02-08 04:00:55.924463 | orchestrator | 2026-02-08 04:00:55.924469 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2026-02-08 04:00:55.924474 | orchestrator | Sunday 08 February 2026 04:00:43 +0000 (0:00:11.825) 0:01:03.199 ******* 2026-02-08 04:00:55.924479 | orchestrator | changed: [testbed-node-1] 2026-02-08 04:00:55.924485 | orchestrator | 2026-02-08 04:00:55.924490 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2026-02-08 04:00:55.924495 | orchestrator | 2026-02-08 04:00:55.924602 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2026-02-08 04:00:55.924613 | orchestrator | Sunday 08 February 2026 04:00:54 +0000 (0:00:11.121) 0:01:14.321 ******* 2026-02-08 04:00:55.924620 | orchestrator | changed: [testbed-node-2] 2026-02-08 04:00:55.924627 | orchestrator | 2026-02-08 04:00:55.924634 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-08 04:00:55.924641 | orchestrator | testbed-manager : ok=9  changed=9  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-08 04:00:55.924659 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-08 04:00:55.924666 | orchestrator | testbed-node-1 : ok=1  changed=1  
unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-08 04:00:55.924672 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-08 04:00:55.924679 | orchestrator | 2026-02-08 04:00:55.924685 | orchestrator | 2026-02-08 04:00:55.924692 | orchestrator | 2026-02-08 04:00:55.924698 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-08 04:00:55.924704 | orchestrator | Sunday 08 February 2026 04:00:55 +0000 (0:00:01.264) 0:01:15.585 ******* 2026-02-08 04:00:55.924711 | orchestrator | =============================================================================== 2026-02-08 04:00:55.924717 | orchestrator | Create admin user ------------------------------------------------------ 40.54s 2026-02-08 04:00:55.924737 | orchestrator | Restart ceph manager service ------------------------------------------- 24.21s 2026-02-08 04:00:55.924744 | orchestrator | Enable the ceph dashboard ----------------------------------------------- 2.04s 2026-02-08 04:00:55.924751 | orchestrator | Disable the ceph dashboard ---------------------------------------------- 1.46s 2026-02-08 04:00:55.924757 | orchestrator | Write ceph_dashboard_password to temporary file ------------------------- 1.26s 2026-02-08 04:00:55.924763 | orchestrator | Set mgr/dashboard/server_addr to 0.0.0.0 -------------------------------- 1.21s 2026-02-08 04:00:55.924769 | orchestrator | Set mgr/dashboard/standby_behaviour to error ---------------------------- 1.17s 2026-02-08 04:00:55.924776 | orchestrator | Set mgr/dashboard/standby_error_status_code to 404 ---------------------- 1.09s 2026-02-08 04:00:55.924782 | orchestrator | Set mgr/dashboard/server_port to 7000 ----------------------------------- 1.08s 2026-02-08 04:00:55.924788 | orchestrator | Set mgr/dashboard/ssl to false ------------------------------------------ 1.06s 2026-02-08 04:00:55.924795 | orchestrator | Remove temporary file for 
ceph_dashboard_password ----------------------- 0.16s 2026-02-08 04:00:56.263666 | orchestrator | + sh -c /opt/configuration/scripts/deploy/300-openstack.sh 2026-02-08 04:00:58.425057 | orchestrator | 2026-02-08 04:00:58 | INFO  | Task cadaa5db-3018-4815-8d36-3e9161e8f3f1 (keystone) was prepared for execution. 2026-02-08 04:00:58.425134 | orchestrator | 2026-02-08 04:00:58 | INFO  | It takes a moment until task cadaa5db-3018-4815-8d36-3e9161e8f3f1 (keystone) has been started and output is visible here. 2026-02-08 04:01:06.321242 | orchestrator | 2026-02-08 04:01:06.321360 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-08 04:01:06.321371 | orchestrator | 2026-02-08 04:01:06.321379 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-08 04:01:06.321386 | orchestrator | Sunday 08 February 2026 04:01:02 +0000 (0:00:00.277) 0:00:00.277 ******* 2026-02-08 04:01:06.321393 | orchestrator | ok: [testbed-node-0] 2026-02-08 04:01:06.321400 | orchestrator | ok: [testbed-node-1] 2026-02-08 04:01:06.321407 | orchestrator | ok: [testbed-node-2] 2026-02-08 04:01:06.321413 | orchestrator | 2026-02-08 04:01:06.321419 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-08 04:01:06.321426 | orchestrator | Sunday 08 February 2026 04:01:03 +0000 (0:00:00.341) 0:00:00.618 ******* 2026-02-08 04:01:06.321433 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True) 2026-02-08 04:01:06.321450 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True) 2026-02-08 04:01:06.321456 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True) 2026-02-08 04:01:06.321463 | orchestrator | 2026-02-08 04:01:06.321486 | orchestrator | PLAY [Apply role keystone] ***************************************************** 2026-02-08 04:01:06.321502 | orchestrator | 2026-02-08 04:01:06.321536 | orchestrator | TASK 
[keystone : include_tasks] ************************************************ 2026-02-08 04:01:06.321547 | orchestrator | Sunday 08 February 2026 04:01:03 +0000 (0:00:00.498) 0:00:01.116 ******* 2026-02-08 04:01:06.321583 | orchestrator | included: /ansible/roles/keystone/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-08 04:01:06.321596 | orchestrator | 2026-02-08 04:01:06.321605 | orchestrator | TASK [keystone : Ensuring config directories exist] **************************** 2026-02-08 04:01:06.321612 | orchestrator | Sunday 08 February 2026 04:01:04 +0000 (0:00:00.682) 0:00:01.799 ******* 2026-02-08 04:01:06.321623 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-02-08 04:01:06.321681 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': 
['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-02-08 04:01:06.321690 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-02-08 04:01:06.321723 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-02-08 04:01:06.321744 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-02-08 04:01:06.321757 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-02-08 04:01:06.321767 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 
'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-02-08 04:01:06.321778 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-02-08 04:01:06.321790 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-02-08 04:01:06.321800 | orchestrator | 2026-02-08 04:01:06.321809 | orchestrator | TASK [keystone : Check if policies shall be overwritten] 
*********************** 2026-02-08 04:01:06.321826 | orchestrator | Sunday 08 February 2026 04:01:06 +0000 (0:00:01.802) 0:00:03.601 ******* 2026-02-08 04:01:12.202356 | orchestrator | skipping: [testbed-node-0] 2026-02-08 04:01:12.202497 | orchestrator | 2026-02-08 04:01:12.202621 | orchestrator | TASK [keystone : Set keystone policy file] ************************************* 2026-02-08 04:01:12.202647 | orchestrator | Sunday 08 February 2026 04:01:06 +0000 (0:00:00.334) 0:00:03.936 ******* 2026-02-08 04:01:12.202666 | orchestrator | skipping: [testbed-node-0] 2026-02-08 04:01:12.202715 | orchestrator | skipping: [testbed-node-1] 2026-02-08 04:01:12.202727 | orchestrator | skipping: [testbed-node-2] 2026-02-08 04:01:12.202738 | orchestrator | 2026-02-08 04:01:12.202750 | orchestrator | TASK [keystone : Check if Keystone domain-specific config is supplied] ********* 2026-02-08 04:01:12.202761 | orchestrator | Sunday 08 February 2026 04:01:06 +0000 (0:00:00.335) 0:00:04.271 ******* 2026-02-08 04:01:12.202772 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-02-08 04:01:12.202782 | orchestrator | 2026-02-08 04:01:12.202793 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-02-08 04:01:12.202805 | orchestrator | Sunday 08 February 2026 04:01:07 +0000 (0:00:00.883) 0:00:05.155 ******* 2026-02-08 04:01:12.202831 | orchestrator | included: /ansible/roles/keystone/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-08 04:01:12.202843 | orchestrator | 2026-02-08 04:01:12.202854 | orchestrator | TASK [service-cert-copy : keystone | Copying over extra CA certificates] ******* 2026-02-08 04:01:12.202865 | orchestrator | Sunday 08 February 2026 04:01:08 +0000 (0:00:00.609) 0:00:05.764 ******* 2026-02-08 04:01:12.202884 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-02-08 04:01:12.202903 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-02-08 04:01:12.202918 | orchestrator | 
changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-02-08 04:01:12.202964 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-02-08 04:01:12.202986 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': 
['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-02-08 04:01:12.202999 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-02-08 04:01:12.203013 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-02-08 04:01:12.203026 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': 
['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-02-08 04:01:12.203060 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-02-08 04:01:12.203081 | orchestrator | 2026-02-08 04:01:12.203093 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS certificate] *** 2026-02-08 04:01:12.203104 | orchestrator | Sunday 08 February 2026 04:01:11 +0000 (0:00:03.117) 0:00:08.882 ******* 2026-02-08 04:01:12.203131 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 
'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-02-08 04:01:13.041659 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-08 04:01:13.041765 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-02-08 04:01:13.041783 | orchestrator | skipping: [testbed-node-0] 2026-02-08 04:01:13.041800 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-02-08 04:01:13.041814 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-08 04:01:13.041852 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-02-08 04:01:13.041930 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-02-08 04:01:13.041953 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-08 04:01:13.041974 | 
orchestrator | skipping: [testbed-node-1] 2026-02-08 04:01:13.041993 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-02-08 04:01:13.042157 | orchestrator | skipping: [testbed-node-2] 2026-02-08 04:01:13.042190 | orchestrator | 2026-02-08 04:01:13.042213 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS key] **** 2026-02-08 04:01:13.042234 | orchestrator | Sunday 08 February 2026 04:01:12 +0000 (0:00:00.609) 0:00:09.491 ******* 2026-02-08 04:01:13.042255 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-02-08 04:01:13.042292 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-08 04:01:13.042341 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-02-08 04:01:16.243238 | orchestrator | skipping: [testbed-node-0] 2026-02-08 04:01:16.243331 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-02-08 04:01:16.243349 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-08 04:01:16.243359 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-02-08 04:01:16.243389 | 
orchestrator | skipping: [testbed-node-1] 2026-02-08 04:01:16.243400 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-02-08 04:01:16.243423 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-08 04:01:16.243449 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-02-08 04:01:16.243459 | orchestrator | skipping: [testbed-node-2] 2026-02-08 04:01:16.243468 | orchestrator | 2026-02-08 04:01:16.243478 | orchestrator | TASK [keystone : Copying over config.json files for services] ****************** 2026-02-08 04:01:16.243488 | orchestrator | Sunday 08 February 2026 04:01:13 +0000 (0:00:00.841) 0:00:10.333 ******* 2026-02-08 04:01:16.243497 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-02-08 04:01:16.243614 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': 
{'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-02-08 04:01:16.243636 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 
'backend_http_extra': ['balance roundrobin']}}}}) 2026-02-08 04:01:16.243657 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-02-08 04:01:21.012881 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-02-08 04:01:21.012991 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 
2026-02-08 04:01:21.013040 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-02-08 04:01:21.013061 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-02-08 04:01:21.013082 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-02-08 
04:01:21.013100 | orchestrator | 2026-02-08 04:01:21.013116 | orchestrator | TASK [keystone : Copying over keystone.conf] *********************************** 2026-02-08 04:01:21.013127 | orchestrator | Sunday 08 February 2026 04:01:16 +0000 (0:00:03.198) 0:00:13.532 ******* 2026-02-08 04:01:21.013172 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-02-08 04:01:21.013186 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  
2026-02-08 04:01:21.013197 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-02-08 04:01:21.013216 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-08 04:01:21.013233 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': 
['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-02-08 04:01:21.013250 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-08 04:01:24.663414 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-02-08 04:01:24.663599 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-02-08 04:01:24.663643 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-02-08 04:01:24.663657 | orchestrator | 2026-02-08 04:01:24.663671 | orchestrator | TASK [keystone : Copying keystone-startup script for keystone] ***************** 2026-02-08 04:01:24.663684 | orchestrator | Sunday 08 February 2026 04:01:20 +0000 (0:00:04.768) 0:00:18.301 ******* 2026-02-08 04:01:24.663696 | orchestrator | changed: [testbed-node-0] 2026-02-08 04:01:24.663727 | orchestrator | changed: [testbed-node-1] 2026-02-08 04:01:24.663739 | orchestrator | changed: [testbed-node-2] 2026-02-08 04:01:24.663749 | orchestrator | 
2026-02-08 04:01:24.663761 | orchestrator | TASK [keystone : Create Keystone domain-specific config directory] *************
2026-02-08 04:01:24.663772 | orchestrator | Sunday 08 February 2026 04:01:22 +0000 (0:00:01.436) 0:00:19.738 *******
2026-02-08 04:01:24.663783 | orchestrator | skipping: [testbed-node-0]
2026-02-08 04:01:24.663794 | orchestrator | skipping: [testbed-node-1]
2026-02-08 04:01:24.663805 | orchestrator | skipping: [testbed-node-2]
2026-02-08 04:01:24.663815 | orchestrator |
2026-02-08 04:01:24.663826 | orchestrator | TASK [keystone : Get file list in custom domains folder] ***********************
2026-02-08 04:01:24.663837 | orchestrator | Sunday 08 February 2026 04:01:23 +0000 (0:00:00.794) 0:00:20.532 *******
2026-02-08 04:01:24.663848 | orchestrator | skipping: [testbed-node-0]
2026-02-08 04:01:24.663859 | orchestrator | skipping: [testbed-node-1]
2026-02-08 04:01:24.663870 | orchestrator | skipping: [testbed-node-2]
2026-02-08 04:01:24.663880 | orchestrator |
2026-02-08 04:01:24.663904 | orchestrator | TASK [keystone : Copying Keystone Domain specific settings] ********************
2026-02-08 04:01:24.663928 | orchestrator | Sunday 08 February 2026 04:01:23 +0000 (0:00:00.525) 0:00:21.058 *******
2026-02-08 04:01:24.663942 | orchestrator | skipping: [testbed-node-0]
2026-02-08 04:01:24.663955 | orchestrator | skipping: [testbed-node-1]
2026-02-08 04:01:24.663968 | orchestrator | skipping: [testbed-node-2]
2026-02-08 04:01:24.663980 | orchestrator |
2026-02-08 04:01:24.663993 | orchestrator | TASK [keystone : Copying over existing policy file] ****************************
2026-02-08 04:01:24.664006 | orchestrator | Sunday 08 February 2026 04:01:24 +0000 (0:00:00.309) 0:00:21.367 *******
2026-02-08 04:01:24.664058 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-02-08 04:01:24.664081 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-02-08 04:01:24.664095 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-02-08 04:01:24.664107 | orchestrator | skipping: [testbed-node-0]
2026-02-08 04:01:24.664120 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-02-08 04:01:24.664131 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-02-08 04:01:24.664148 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-02-08 04:01:24.664160 | orchestrator | skipping: [testbed-node-1]
2026-02-08 04:01:24.664187 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-02-08 04:01:43.642469 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-02-08 04:01:43.642604 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-02-08 04:01:43.642622 | orchestrator | skipping: [testbed-node-2]
2026-02-08 04:01:43.642632 | orchestrator |
2026-02-08 04:01:43.642641 | orchestrator | TASK [keystone : include_tasks] ************************************************
2026-02-08 04:01:43.642649 | orchestrator | Sunday 08 February 2026 04:01:24 +0000 (0:00:00.584) 0:00:21.952 *******
2026-02-08 04:01:43.642657 | orchestrator | skipping: [testbed-node-0]
2026-02-08 04:01:43.642665 | orchestrator | skipping: [testbed-node-1]
2026-02-08 04:01:43.642672 | orchestrator | skipping: [testbed-node-2]
2026-02-08 04:01:43.642679 | orchestrator |
2026-02-08 04:01:43.642686 | orchestrator | TASK [keystone : Copying over wsgi-keystone.conf] ******************************
2026-02-08 04:01:43.642694 | orchestrator | Sunday 08 February 2026 04:01:24 +0000 (0:00:00.320) 0:00:22.272 *******
2026-02-08 04:01:43.642701 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2)
2026-02-08 04:01:43.642709 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2)
2026-02-08 04:01:43.642717 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2)
2026-02-08 04:01:43.642724 | orchestrator |
2026-02-08 04:01:43.642731 | orchestrator | TASK [keystone : Checking whether keystone-paste.ini file exists] **************
2026-02-08 04:01:43.642738 | orchestrator | Sunday 08 February 2026 04:01:26 +0000 (0:00:01.828) 0:00:24.101 *******
2026-02-08 04:01:43.642746 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-02-08 04:01:43.642753 | orchestrator |
2026-02-08 04:01:43.642760 | orchestrator | TASK [keystone : Copying over keystone-paste.ini] ******************************
2026-02-08 04:01:43.642768 | orchestrator | Sunday 08 February 2026 04:01:27 +0000 (0:00:00.950) 0:00:25.052 *******
2026-02-08 04:01:43.642795 | orchestrator | skipping: [testbed-node-0]
2026-02-08 04:01:43.642802 | orchestrator | skipping: [testbed-node-1]
2026-02-08 04:01:43.642810 | orchestrator | skipping: [testbed-node-2]
2026-02-08 04:01:43.642817 | orchestrator |
2026-02-08 04:01:43.642824 | orchestrator | TASK [keystone : Generate the required cron jobs for the node] *****************
2026-02-08 04:01:43.642843 | orchestrator | Sunday 08 February 2026 04:01:28 +0000 (0:00:00.565) 0:00:25.617 *******
2026-02-08 04:01:43.642851 | orchestrator | ok: [testbed-node-1 -> localhost]
2026-02-08 04:01:43.642858 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-02-08 04:01:43.642865 | orchestrator | ok: [testbed-node-2 -> localhost]
2026-02-08 04:01:43.642872 | orchestrator |
2026-02-08 04:01:43.642880 | orchestrator | TASK [keystone : Set fact with the generated cron jobs for building the crontab later] ***
2026-02-08 04:01:43.642888 | orchestrator | Sunday 08 February 2026 04:01:29 +0000 (0:00:01.132) 0:00:26.749 *******
2026-02-08 04:01:43.642895 | orchestrator | ok: [testbed-node-0]
2026-02-08 04:01:43.642903 | orchestrator | ok: [testbed-node-1]
2026-02-08 04:01:43.642910 | orchestrator | ok: [testbed-node-2]
2026-02-08 04:01:43.642917 | orchestrator |
2026-02-08 04:01:43.642925 | orchestrator | TASK [keystone : Copying files for keystone-fernet] ****************************
2026-02-08 04:01:43.642932 | orchestrator | Sunday 08 February 2026 04:01:29 +0000 (0:00:00.554) 0:00:27.303 *******
2026-02-08 04:01:43.642939 | orchestrator | changed: [testbed-node-0] => (item={'src': 'crontab.j2', 'dest': 'crontab'})
2026-02-08 04:01:43.642946 | orchestrator | changed: [testbed-node-1] => (item={'src': 'crontab.j2', 'dest': 'crontab'})
2026-02-08 04:01:43.642954 | orchestrator | changed: [testbed-node-2] => (item={'src': 'crontab.j2', 'dest': 'crontab'})
2026-02-08 04:01:43.642961 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'})
2026-02-08 04:01:43.642968 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'})
2026-02-08 04:01:43.642976 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'})
2026-02-08 04:01:43.642983 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'})
2026-02-08 04:01:43.642991 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'})
2026-02-08 04:01:43.643012 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'})
2026-02-08 04:01:43.643021 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'})
2026-02-08 04:01:43.643030 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'})
2026-02-08 04:01:43.643038 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'})
2026-02-08 04:01:43.643047 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'})
2026-02-08 04:01:43.643056 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'})
2026-02-08 04:01:43.643064 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'})
2026-02-08 04:01:43.643073 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2026-02-08 04:01:43.643081 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2026-02-08 04:01:43.643090 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2026-02-08 04:01:43.643098 | orchestrator | changed: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2026-02-08 04:01:43.643107 | orchestrator | changed: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2026-02-08 04:01:43.643116 | orchestrator | changed: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2026-02-08 04:01:43.643132 | orchestrator |
2026-02-08 04:01:43.643140 | orchestrator | TASK [keystone : Copying files for keystone-ssh] *******************************
2026-02-08 04:01:43.643149 | orchestrator | Sunday 08 February 2026 04:01:38 +0000 (0:00:08.841) 0:00:36.145 *******
2026-02-08 04:01:43.643158 | orchestrator | changed: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2026-02-08 04:01:43.643166 | orchestrator | changed: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2026-02-08 04:01:43.643175 | orchestrator | changed: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2026-02-08 04:01:43.643184 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-02-08 04:01:43.643192 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-02-08 04:01:43.643200 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-02-08 04:01:43.643209 | orchestrator |
2026-02-08 04:01:43.643218 | orchestrator | TASK [keystone : Check keystone containers] ************************************
2026-02-08 04:01:43.643226 | orchestrator | Sunday 08 February 2026 04:01:41 +0000 (0:00:02.577) 0:00:38.723 *******
2026-02-08 04:01:43.643240 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-02-08 04:01:43.643257 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-02-08 04:03:22.384595 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-02-08 04:03:22.384747 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-02-08 04:03:22.384765 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-02-08 04:03:22.384788 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-02-08 04:03:22.384795 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-02-08 04:03:22.384818 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-02-08 04:03:22.384827 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-02-08 04:03:22.384842 | orchestrator |
2026-02-08 04:03:22.384851 | orchestrator | TASK [keystone : include_tasks] ************************************************
2026-02-08 04:03:22.384860 | orchestrator | Sunday 08 February 2026 04:01:43 +0000 (0:00:02.209) 0:00:40.932 *******
2026-02-08 04:03:22.384867 | orchestrator | skipping: [testbed-node-0]
2026-02-08 04:03:22.384888 | orchestrator | skipping: [testbed-node-1]
2026-02-08 04:03:22.384895 | orchestrator | skipping: [testbed-node-2]
2026-02-08 04:03:22.384902 | orchestrator |
2026-02-08 04:03:22.384910 | orchestrator | TASK [keystone : Creating keystone database] ***********************************
2026-02-08 04:03:22.384917 | orchestrator | Sunday 08 February 2026 04:01:44 +0000 (0:00:00.499) 0:00:41.431 *******
2026-02-08 04:03:22.384924 | orchestrator | changed: [testbed-node-0]
2026-02-08 04:03:22.384930 | orchestrator |
2026-02-08 04:03:22.384937 | orchestrator | TASK [keystone : Creating Keystone database user and setting permissions] ******
2026-02-08 04:03:22.384943 | orchestrator | Sunday 08 February 2026 04:01:46 +0000 (0:00:02.364) 0:00:43.796 *******
2026-02-08 04:03:22.384950 | orchestrator | changed: [testbed-node-0]
2026-02-08 04:03:22.384957 | orchestrator |
2026-02-08 04:03:22.384964 | orchestrator | TASK [keystone : Checking for any running keystone_fernet containers] **********
2026-02-08 04:03:22.384971 | orchestrator | Sunday 08 February 2026 04:01:48 +0000 (0:00:02.184) 0:00:45.981 *******
2026-02-08 04:03:22.384977 | orchestrator | ok: [testbed-node-0]
2026-02-08 04:03:22.384984 | orchestrator | ok: [testbed-node-1]
2026-02-08 04:03:22.384992 | orchestrator | ok: [testbed-node-2]
2026-02-08 04:03:22.384999 | orchestrator |
2026-02-08 04:03:22.385006 | orchestrator | TASK [keystone : Group nodes where keystone_fernet is running] *****************
2026-02-08 04:03:22.385013 | orchestrator | Sunday 08 February 2026 04:01:49 +0000 (0:00:00.839) 0:00:46.821 *******
2026-02-08 04:03:22.385019 | orchestrator | ok: [testbed-node-0]
2026-02-08 04:03:22.385024 | orchestrator | ok: [testbed-node-1]
2026-02-08 04:03:22.385029 | orchestrator | ok: [testbed-node-2]
2026-02-08 04:03:22.385034 | orchestrator |
2026-02-08 04:03:22.385039 | orchestrator | TASK [keystone : Fail if any hosts need bootstrapping and not all hosts targeted] ***
2026-02-08 04:03:22.385044 | orchestrator | Sunday 08 February 2026 04:01:49 +0000 (0:00:00.342) 0:00:47.164 *******
2026-02-08 04:03:22.385049 | orchestrator | skipping: [testbed-node-0]
2026-02-08 04:03:22.385054 | orchestrator | skipping: [testbed-node-1]
2026-02-08 04:03:22.385060 | orchestrator | skipping: [testbed-node-2]
2026-02-08 04:03:22.385064 | orchestrator |
2026-02-08 04:03:22.385069 | orchestrator | TASK [keystone : Running Keystone bootstrap container] *************************
2026-02-08 04:03:22.385075 | orchestrator | Sunday 08 February 2026 04:01:50 +0000 (0:00:00.586) 0:00:47.750 *******
2026-02-08 04:03:22.385081 | orchestrator | changed: [testbed-node-0]
2026-02-08 04:03:22.385089 | orchestrator |
2026-02-08 04:03:22.385094 | orchestrator | TASK [keystone : Running Keystone fernet bootstrap container] ******************
2026-02-08 04:03:22.385099 | orchestrator | Sunday 08 February 2026 04:02:04 +0000 (0:00:13.909) 0:01:01.660 *******
2026-02-08 04:03:22.385109 | orchestrator | changed: [testbed-node-0]
2026-02-08 04:03:22.385114 | orchestrator |
2026-02-08 04:03:22.385119 | orchestrator | TASK [keystone : Flush handlers] ***********************************************
2026-02-08 04:03:22.385124 | orchestrator | Sunday 08 February 2026 04:02:14 +0000 (0:00:00.077) 0:01:12.206 *******
2026-02-08 04:03:22.385129 | orchestrator |
2026-02-08 04:03:22.385134 | orchestrator | TASK [keystone : Flush handlers] ***********************************************
2026-02-08 04:03:22.385139 | orchestrator | Sunday 08 February 2026 04:02:14 +0000 (0:00:00.077) 0:01:12.284 *******
2026-02-08 04:03:22.385144 | orchestrator |
2026-02-08 04:03:22.385149 | orchestrator | TASK [keystone : Flush handlers] ***********************************************
2026-02-08 04:03:22.385154 | orchestrator | Sunday 08 February 2026 04:02:15 +0000 (0:00:00.077) 0:01:12.362 *******
2026-02-08 04:03:22.385159 | orchestrator |
2026-02-08 04:03:22.385164 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-ssh container] ********************
2026-02-08 04:03:22.385174 | orchestrator | Sunday 08 February 2026 04:02:15 +0000 (0:00:00.073) 0:01:12.435 *******
2026-02-08 04:03:22.385178 | orchestrator | changed: [testbed-node-0]
2026-02-08 04:03:22.385182 | orchestrator | changed: [testbed-node-1]
2026-02-08 04:03:22.385186 | orchestrator | changed: [testbed-node-2]
2026-02-08 04:03:22.385191 | orchestrator |
2026-02-08 04:03:22.385195 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-fernet container] *****************
2026-02-08 04:03:22.385199 | orchestrator | Sunday 08 February 2026 04:02:59 +0000 (0:00:44.358) 0:01:56.794 *******
2026-02-08 04:03:22.385203 | orchestrator | changed: [testbed-node-1]
2026-02-08 04:03:22.385207 | orchestrator | changed: [testbed-node-0]
2026-02-08 04:03:22.385212 | orchestrator | changed: [testbed-node-2]
2026-02-08 04:03:22.385216 | orchestrator |
2026-02-08 04:03:22.385220 | orchestrator | RUNNING HANDLER [keystone : Restart keystone container] ************************
2026-02-08 04:03:22.385224 | orchestrator | Sunday 08 February 2026 04:03:09 +0000 (0:00:09.930) 0:02:06.724 *******
2026-02-08 04:03:22.385228 | orchestrator | changed: [testbed-node-1]
2026-02-08 04:03:22.385232 | orchestrator | changed: [testbed-node-0]
2026-02-08 04:03:22.385236 | orchestrator | changed: [testbed-node-2]
2026-02-08 04:03:22.385241 | orchestrator |
2026-02-08 04:03:22.385245 | orchestrator | TASK [keystone : include_tasks] ************************************************
2026-02-08 04:03:22.385249 | orchestrator | Sunday 08 February 2026 04:03:21 +0000 (0:00:12.372) 0:02:19.097 *******
2026-02-08 04:03:22.385259 | orchestrator | included: /ansible/roles/keystone/tasks/distribute_fernet.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-08 04:04:10.750414 | orchestrator |
2026-02-08 04:04:10.750527 | orchestrator | TASK [keystone : Waiting for Keystone SSH port to be UP] ***********************
2026-02-08 04:04:10.750543 | orchestrator | Sunday 08 February 2026 04:03:22 +0000 (0:00:00.579) 0:02:19.677 *******
2026-02-08 04:04:10.750556 | orchestrator | ok: [testbed-node-0]
2026-02-08 04:04:10.750568 | orchestrator | ok: [testbed-node-1]
2026-02-08 04:04:10.750579 | orchestrator | ok: [testbed-node-2]
2026-02-08 04:04:10.750590 | orchestrator |
2026-02-08 04:04:10.750601 | orchestrator | TASK [keystone : Run key distribution] *****************************************
2026-02-08 04:04:10.750614 | orchestrator | Sunday 08 February 2026 04:03:23 +0000 (0:00:01.214) 0:02:20.891 *******
2026-02-08 04:04:10.750626 | orchestrator | changed: [testbed-node-0]
2026-02-08 04:04:10.750638 | orchestrator |
2026-02-08 04:04:10.750649 | orchestrator | TASK [keystone : Creating admin project, user, role, service, and endpoint] ****
2026-02-08 04:04:10.750661 | orchestrator | Sunday 08 February 2026 04:03:25 +0000 (0:00:01.863) 0:02:22.755 *******
2026-02-08 04:04:10.750672 | orchestrator | changed: [testbed-node-0] => (item=RegionOne)
2026-02-08 04:04:10.750683 | orchestrator |
2026-02-08 04:04:10.750693 | orchestrator | TASK [service-ks-register : keystone | Creating services] **********************
2026-02-08 04:04:10.750705 | orchestrator | Sunday 08 February 2026 04:03:36 +0000 (0:00:10.908) 0:02:33.663 *******
2026-02-08 04:04:10.750716 | orchestrator | changed: [testbed-node-0] => (item=keystone (identity))
2026-02-08 04:04:10.750727 | orchestrator |
2026-02-08 04:04:10.750738 | orchestrator | TASK [service-ks-register : keystone | Creating endpoints] *********************
2026-02-08 04:04:10.750749 | orchestrator | Sunday 08 February 2026 04:03:59 +0000 (0:00:22.792) 0:02:56.456 *******
2026-02-08 04:04:10.750760 | orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api-int.testbed.osism.xyz:5000 -> internal)
2026-02-08 04:04:10.750772 | orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api.testbed.osism.xyz:5000 -> public)
2026-02-08 04:04:10.750783 | orchestrator |
2026-02-08 04:04:10.750794 | orchestrator | TASK [service-ks-register : keystone | Creating projects] **********************
2026-02-08 04:04:10.750805 | orchestrator | Sunday 08 February 2026 04:04:05 +0000 (0:00:06.500) 0:03:02.956 *******
2026-02-08 04:04:10.750815 | orchestrator | skipping: [testbed-node-0]
2026-02-08 04:04:10.750826 | orchestrator |
2026-02-08 04:04:10.750837 | orchestrator | TASK [service-ks-register : keystone | Creating users] *************************
2026-02-08 04:04:10.750848 | orchestrator | Sunday 08 February 2026 04:04:05 +0000 (0:00:00.139) 0:03:03.095 *******
2026-02-08 04:04:10.750887 | orchestrator | skipping: [testbed-node-0]
2026-02-08 04:04:10.750901 | orchestrator |
2026-02-08 04:04:10.750915 | orchestrator | TASK [service-ks-register : keystone | Creating roles] *************************
2026-02-08 04:04:10.750927 | orchestrator | Sunday 08 February 2026 04:04:05 +0000 (0:00:00.129) 0:03:03.225 *******
2026-02-08 04:04:10.750940 | orchestrator | skipping: [testbed-node-0]
2026-02-08 04:04:10.750954 | orchestrator |
2026-02-08 04:04:10.750967 | orchestrator | TASK [service-ks-register : keystone | Granting user roles] ********************
2026-02-08 04:04:10.750980 | orchestrator | Sunday 08 February 2026 04:04:06 +0000 (0:00:00.146) 0:03:03.372 *******
2026-02-08 04:04:10.750992 | orchestrator | skipping: [testbed-node-0]
2026-02-08 04:04:10.751006 | orchestrator |
2026-02-08 04:04:10.751019 | orchestrator | TASK [keystone : Creating default user role] ***********************************
2026-02-08 04:04:10.751032 | orchestrator | Sunday 08 February 2026 04:04:06 +0000 (0:00:00.599) 0:03:03.971 *******
2026-02-08 04:04:10.751045 | orchestrator | ok: [testbed-node-0]
2026-02-08 04:04:10.751058 | orchestrator |
2026-02-08 04:04:10.751072 | orchestrator | TASK [keystone : include_tasks] ************************************************
2026-02-08 04:04:10.751086 | orchestrator | Sunday 08 February 2026 04:04:09 +0000 (0:00:03.153) 0:03:07.125 *******
2026-02-08 04:04:10.751113 | orchestrator | skipping: [testbed-node-0]
2026-02-08 04:04:10.751125 | orchestrator | skipping: [testbed-node-1]
2026-02-08 04:04:10.751136 | orchestrator | skipping: [testbed-node-2]
2026-02-08 04:04:10.751146 | orchestrator |
2026-02-08 04:04:10.751157 | orchestrator | PLAY RECAP *********************************************************************
2026-02-08 04:04:10.751217 | orchestrator | testbed-node-0 : ok=33  changed=19  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0
2026-02-08 04:04:10.751244 | orchestrator | testbed-node-1 : ok=22  changed=12  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2026-02-08 04:04:10.751255 | orchestrator | testbed-node-2 : ok=22  changed=12  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2026-02-08 04:04:10.751266 | orchestrator |
2026-02-08 04:04:10.751277 | orchestrator |
2026-02-08 04:04:10.751288 | orchestrator | TASKS RECAP ********************************************************************
2026-02-08 04:04:10.751299 | orchestrator | Sunday 08 February 2026 04:04:10 +0000 (0:00:00.506) 0:03:07.632 *******
2026-02-08 04:04:10.751310 | orchestrator | ===============================================================================
2026-02-08 04:04:10.751320 | orchestrator | keystone : Restart keystone-ssh container ------------------------------ 44.36s
2026-02-08 04:04:10.751331 | orchestrator | service-ks-register : keystone | Creating services --------------------- 22.79s
2026-02-08 04:04:10.751369 | orchestrator | keystone : Running Keystone bootstrap container ------------------------ 13.91s
2026-02-08 04:04:10.751381 | orchestrator | keystone : Restart keystone container ---------------------------------- 12.37s
2026-02-08 04:04:10.751392 | orchestrator | keystone : Creating admin project, user, role, service, and endpoint --- 10.91s
2026-02-08 04:04:10.751403 | orchestrator | keystone : Running Keystone fernet bootstrap container ----------------- 10.55s
2026-02-08 04:04:10.751413 | orchestrator | keystone : Restart keystone-fernet container ---------------------------- 9.93s
2026-02-08 04:04:10.751424 | orchestrator | keystone : Copying files for keystone-fernet ---------------------------- 8.84s
2026-02-08 04:04:10.751435 | orchestrator | service-ks-register : keystone | Creating endpoints --------------------- 6.50s
2026-02-08 04:04:10.751465 | orchestrator | keystone : Copying over keystone.conf ----------------------------------- 4.77s
2026-02-08 04:04:10.751477 | orchestrator | keystone : Copying over config.json files for services ------------------ 3.20s
2026-02-08 04:04:10.751487 | orchestrator | keystone : Creating default user role ----------------------------------- 3.15s
2026-02-08 04:04:10.751498 | orchestrator | service-cert-copy : keystone | Copying over extra CA certificates ------- 3.12s
2026-02-08 04:04:10.751509 | orchestrator | keystone : Copying files for keystone-ssh ------------------------------- 2.58s
2026-02-08 04:04:10.751528 | orchestrator | keystone : Creating keystone database ----------------------------------- 2.36s
2026-02-08 04:04:10.751539 | orchestrator | keystone : Check keystone containers ------------------------------------ 2.21s
2026-02-08 04:04:10.751550 | orchestrator | keystone : Creating Keystone database user and setting permissions ------ 2.18s
2026-02-08 04:04:10.751561 | orchestrator | keystone : Run key distribution ----------------------------------------- 1.86s
2026-02-08 04:04:10.751571 | orchestrator | keystone : Copying over wsgi-keystone.conf ------------------------------ 1.83s
2026-02-08 04:04:10.751582 | orchestrator | keystone : Ensuring config directories exist ---------------------------- 1.80s
2026-02-08 04:04:13.100937 | orchestrator | 2026-02-08 04:04:13 | INFO  | Task aae4c13d-e606-4858-8c60-afb4059b8580 (placement) was prepared for execution.
2026-02-08 04:04:13.101040 | orchestrator | 2026-02-08 04:04:13 | INFO  | It takes a moment until task aae4c13d-e606-4858-8c60-afb4059b8580 (placement) has been started and output is visible here.
2026-02-08 04:04:48.068852 | orchestrator |
2026-02-08 04:04:48.068968 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-02-08 04:04:48.068986 | orchestrator |
2026-02-08 04:04:48.068998 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-02-08 04:04:48.069010 | orchestrator | Sunday 08 February 2026 04:04:17 +0000 (0:00:00.280) 0:00:00.280 *******
2026-02-08 04:04:48.069021 | orchestrator | ok: [testbed-node-0]
2026-02-08 04:04:48.069033 | orchestrator | ok: [testbed-node-1]
2026-02-08 04:04:48.069044 | orchestrator | ok: [testbed-node-2]
2026-02-08 04:04:48.069055 | orchestrator |
2026-02-08 04:04:48.069066 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-02-08 04:04:48.069077 | orchestrator | Sunday 08 February 2026 04:04:17 +0000 (0:00:00.316) 0:00:00.597 *******
2026-02-08 04:04:48.069089 | orchestrator | ok: [testbed-node-0] => (item=enable_placement_True)
2026-02-08 04:04:48.069101 | orchestrator | ok: [testbed-node-1] => (item=enable_placement_True)
2026-02-08 04:04:48.069112 | orchestrator | ok: [testbed-node-2] => (item=enable_placement_True)
2026-02-08 04:04:48.069123 | orchestrator |
2026-02-08 04:04:48.069133 | orchestrator | PLAY [Apply role placement] ****************************************************
2026-02-08 04:04:48.069144 | orchestrator |
2026-02-08 04:04:48.069155 | orchestrator | TASK [placement : include_tasks] ***********************************************
2026-02-08 04:04:48.069166 | orchestrator |
Sunday 08 February 2026 04:04:18 +0000 (0:00:00.533) 0:00:01.130 ******* 2026-02-08 04:04:48.069177 | orchestrator | included: /ansible/roles/placement/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-08 04:04:48.069189 | orchestrator | 2026-02-08 04:04:48.069199 | orchestrator | TASK [service-ks-register : placement | Creating services] ********************* 2026-02-08 04:04:48.069210 | orchestrator | Sunday 08 February 2026 04:04:18 +0000 (0:00:00.563) 0:00:01.693 ******* 2026-02-08 04:04:48.069238 | orchestrator | changed: [testbed-node-0] => (item=placement (placement)) 2026-02-08 04:04:48.069250 | orchestrator | 2026-02-08 04:04:48.069261 | orchestrator | TASK [service-ks-register : placement | Creating endpoints] ******************** 2026-02-08 04:04:48.069272 | orchestrator | Sunday 08 February 2026 04:04:22 +0000 (0:00:03.810) 0:00:05.504 ******* 2026-02-08 04:04:48.069283 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api-int.testbed.osism.xyz:8780 -> internal) 2026-02-08 04:04:48.069294 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api.testbed.osism.xyz:8780 -> public) 2026-02-08 04:04:48.069341 | orchestrator | 2026-02-08 04:04:48.069361 | orchestrator | TASK [service-ks-register : placement | Creating projects] ********************* 2026-02-08 04:04:48.069382 | orchestrator | Sunday 08 February 2026 04:04:28 +0000 (0:00:06.292) 0:00:11.797 ******* 2026-02-08 04:04:48.069402 | orchestrator | changed: [testbed-node-0] => (item=service) 2026-02-08 04:04:48.069417 | orchestrator | 2026-02-08 04:04:48.069430 | orchestrator | TASK [service-ks-register : placement | Creating users] ************************ 2026-02-08 04:04:48.069443 | orchestrator | Sunday 08 February 2026 04:04:32 +0000 (0:00:03.612) 0:00:15.409 ******* 2026-02-08 04:04:48.069481 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-02-08 04:04:48.069496 | orchestrator | changed: 
[testbed-node-0] => (item=placement -> service) 2026-02-08 04:04:48.069509 | orchestrator | 2026-02-08 04:04:48.069521 | orchestrator | TASK [service-ks-register : placement | Creating roles] ************************ 2026-02-08 04:04:48.069536 | orchestrator | Sunday 08 February 2026 04:04:36 +0000 (0:00:04.043) 0:00:19.453 ******* 2026-02-08 04:04:48.069549 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-02-08 04:04:48.069574 | orchestrator | 2026-02-08 04:04:48.069589 | orchestrator | TASK [service-ks-register : placement | Granting user roles] ******************* 2026-02-08 04:04:48.069603 | orchestrator | Sunday 08 February 2026 04:04:39 +0000 (0:00:03.079) 0:00:22.532 ******* 2026-02-08 04:04:48.069617 | orchestrator | changed: [testbed-node-0] => (item=placement -> service -> admin) 2026-02-08 04:04:48.069631 | orchestrator | 2026-02-08 04:04:48.069643 | orchestrator | TASK [placement : include_tasks] *********************************************** 2026-02-08 04:04:48.069657 | orchestrator | Sunday 08 February 2026 04:04:43 +0000 (0:00:04.032) 0:00:26.565 ******* 2026-02-08 04:04:48.069671 | orchestrator | skipping: [testbed-node-0] 2026-02-08 04:04:48.069684 | orchestrator | skipping: [testbed-node-1] 2026-02-08 04:04:48.069697 | orchestrator | skipping: [testbed-node-2] 2026-02-08 04:04:48.069710 | orchestrator | 2026-02-08 04:04:48.069723 | orchestrator | TASK [placement : Ensuring config directories exist] *************************** 2026-02-08 04:04:48.069737 | orchestrator | Sunday 08 February 2026 04:04:44 +0000 (0:00:00.311) 0:00:26.876 ******* 2026-02-08 04:04:48.069753 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-02-08 04:04:48.069793 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-02-08 04:04:48.069833 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-02-08 04:04:48.069867 | orchestrator | 2026-02-08 04:04:48.069886 | orchestrator | TASK [placement : Check if policies shall be overwritten] ********************** 2026-02-08 04:04:48.069903 | orchestrator | Sunday 08 February 2026 04:04:45 +0000 (0:00:01.101) 0:00:27.978 ******* 2026-02-08 04:04:48.069922 | orchestrator | skipping: [testbed-node-0] 2026-02-08 04:04:48.069940 | orchestrator | 2026-02-08 04:04:48.069957 | orchestrator | TASK [placement : Set placement policy file] *********************************** 2026-02-08 04:04:48.069975 | orchestrator | Sunday 08 February 2026 04:04:45 +0000 (0:00:00.372) 0:00:28.351 ******* 2026-02-08 04:04:48.069995 | orchestrator | skipping: [testbed-node-0] 2026-02-08 04:04:48.070088 | orchestrator | skipping: [testbed-node-1] 2026-02-08 04:04:48.070103 | orchestrator | skipping: [testbed-node-2] 2026-02-08 04:04:48.070114 | orchestrator | 2026-02-08 04:04:48.070125 | orchestrator | TASK [placement : include_tasks] *********************************************** 2026-02-08 04:04:48.070136 | orchestrator | Sunday 08 February 2026 04:04:45 +0000 (0:00:00.330) 0:00:28.681 ******* 2026-02-08 04:04:48.070147 | orchestrator | included: /ansible/roles/placement/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-08 04:04:48.070158 | orchestrator | 2026-02-08 04:04:48.070169 | orchestrator | TASK [service-cert-copy : placement | Copying over extra CA certificates] ****** 2026-02-08 04:04:48.070180 | orchestrator | Sunday 08 February 2026 
04:04:46 +0000 (0:00:00.565) 0:00:29.247 ******* 2026-02-08 04:04:48.070192 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-02-08 04:04:48.070217 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-02-08 04:04:51.081611 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-02-08 04:04:51.081736 | orchestrator | 2026-02-08 04:04:51.081752 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS certificate] *** 2026-02-08 04:04:51.081763 | orchestrator | Sunday 08 February 2026 04:04:48 +0000 (0:00:01.622) 0:00:30.869 ******* 2026-02-08 04:04:51.081775 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-02-08 04:04:51.081785 | orchestrator | skipping: [testbed-node-0] 2026-02-08 04:04:51.081796 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-02-08 04:04:51.081807 | orchestrator | skipping: [testbed-node-1] 2026-02-08 04:04:51.081816 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 
'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-02-08 04:04:51.081827 | orchestrator | skipping: [testbed-node-2] 2026-02-08 04:04:51.081837 | orchestrator | 2026-02-08 04:04:51.081847 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS key] *** 2026-02-08 04:04:51.081876 | orchestrator | Sunday 08 February 2026 04:04:48 +0000 (0:00:00.542) 0:00:31.412 ******* 2026-02-08 04:04:51.082118 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-02-08 04:04:51.082148 | orchestrator | skipping: [testbed-node-0] 2026-02-08 04:04:51.082160 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-02-08 04:04:51.082173 | orchestrator | skipping: [testbed-node-1] 2026-02-08 04:04:51.082189 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-02-08 04:04:51.082205 | orchestrator | skipping: [testbed-node-2] 2026-02-08 04:04:51.082218 | orchestrator | 2026-02-08 04:04:51.082230 | orchestrator | TASK [placement : Copying over config.json files for services] ***************** 2026-02-08 04:04:51.082242 | orchestrator | Sunday 08 February 2026 04:04:49 +0000 (0:00:00.784) 0:00:32.197 ******* 2026-02-08 04:04:51.082254 | orchestrator | changed: [testbed-node-0] => 
(item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-02-08 04:04:51.082286 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-02-08 04:04:58.606506 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 
'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-02-08 04:04:58.606650 | orchestrator | 2026-02-08 04:04:58.606703 | orchestrator | TASK [placement : Copying over placement.conf] ********************************* 2026-02-08 04:04:58.606720 | orchestrator | Sunday 08 February 2026 04:04:51 +0000 (0:00:01.685) 0:00:33.882 ******* 2026-02-08 04:04:58.606734 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 
'no'}}}}) 2026-02-08 04:04:58.606747 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-02-08 04:04:58.606786 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-02-08 04:04:58.606798 | orchestrator | 2026-02-08 04:04:58.606809 | orchestrator | 
TASK [placement : Copying over placement-api wsgi configuration] ***************
2026-02-08 04:04:58.606821 | orchestrator | Sunday 08 February 2026 04:04:53 +0000 (0:00:02.770) 0:00:36.652 *******
2026-02-08 04:04:58.606862 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2)
2026-02-08 04:04:58.606883 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2)
2026-02-08 04:04:58.606895 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2)
2026-02-08 04:04:58.606906 | orchestrator |
2026-02-08 04:04:58.606917 | orchestrator | TASK [placement : Copying over migrate-db.rc.j2 configuration] *****************
2026-02-08 04:04:58.606928 | orchestrator | Sunday 08 February 2026 04:04:55 +0000 (0:00:01.544) 0:00:38.197 *******
2026-02-08 04:04:58.606940 | orchestrator | changed: [testbed-node-0]
2026-02-08 04:04:58.606952 | orchestrator | changed: [testbed-node-1]
2026-02-08 04:04:58.606963 | orchestrator | changed: [testbed-node-2]
2026-02-08 04:04:58.606974 | orchestrator |
2026-02-08 04:04:58.606985 | orchestrator | TASK [placement : Copying over existing policy file] ***************************
2026-02-08 04:04:58.606996 | orchestrator | Sunday 08 February 2026 04:04:56 +0000 (0:00:01.319) 0:00:39.516 *******
2026-02-08 04:04:58.607010 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl
http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-02-08 04:04:58.607024 | orchestrator | skipping: [testbed-node-0] 2026-02-08 04:04:58.607038 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-02-08 04:04:58.607060 | orchestrator | skipping: [testbed-node-1] 2026-02-08 04:04:58.607075 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-02-08 04:04:58.607088 | orchestrator | skipping: [testbed-node-2] 2026-02-08 04:04:58.607100 | orchestrator | 2026-02-08 04:04:58.607113 | orchestrator | TASK [placement : Check placement containers] ********************************** 2026-02-08 04:04:58.607126 | orchestrator | Sunday 08 February 2026 04:04:57 +0000 (0:00:00.784) 0:00:40.301 ******* 2026-02-08 04:04:58.607155 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-02-08 04:05:26.893785 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 
'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-02-08 04:05:26.893902 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-02-08 04:05:26.893945 | orchestrator | 2026-02-08 04:05:26.893960 | orchestrator | TASK [placement : Creating placement databases] ******************************** 2026-02-08 04:05:26.893973 | orchestrator | Sunday 08 February 2026 04:04:58 +0000 (0:00:01.115) 0:00:41.416 ******* 2026-02-08 04:05:26.893984 | orchestrator | changed: [testbed-node-0] 2026-02-08 
04:05:26.893996 | orchestrator | 2026-02-08 04:05:26.894007 | orchestrator | TASK [placement : Creating placement databases user and setting permissions] *** 2026-02-08 04:05:26.894151 | orchestrator | Sunday 08 February 2026 04:05:00 +0000 (0:00:02.015) 0:00:43.432 ******* 2026-02-08 04:05:26.894174 | orchestrator | changed: [testbed-node-0] 2026-02-08 04:05:26.894193 | orchestrator | 2026-02-08 04:05:26.894210 | orchestrator | TASK [placement : Running placement bootstrap container] *********************** 2026-02-08 04:05:26.894228 | orchestrator | Sunday 08 February 2026 04:05:02 +0000 (0:00:02.103) 0:00:45.536 ******* 2026-02-08 04:05:26.894247 | orchestrator | changed: [testbed-node-0] 2026-02-08 04:05:26.894294 | orchestrator | 2026-02-08 04:05:26.894314 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2026-02-08 04:05:26.894332 | orchestrator | Sunday 08 February 2026 04:05:16 +0000 (0:00:13.325) 0:00:58.862 ******* 2026-02-08 04:05:26.894352 | orchestrator | 2026-02-08 04:05:26.894371 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2026-02-08 04:05:26.894389 | orchestrator | Sunday 08 February 2026 04:05:16 +0000 (0:00:00.070) 0:00:58.932 ******* 2026-02-08 04:05:26.894406 | orchestrator | 2026-02-08 04:05:26.894424 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2026-02-08 04:05:26.894442 | orchestrator | Sunday 08 February 2026 04:05:16 +0000 (0:00:00.069) 0:00:59.002 ******* 2026-02-08 04:05:26.894459 | orchestrator | 2026-02-08 04:05:26.894478 | orchestrator | RUNNING HANDLER [placement : Restart placement-api container] ****************** 2026-02-08 04:05:26.894497 | orchestrator | Sunday 08 February 2026 04:05:16 +0000 (0:00:00.071) 0:00:59.073 ******* 2026-02-08 04:05:26.894514 | orchestrator | changed: [testbed-node-0] 2026-02-08 04:05:26.894531 | orchestrator | changed: [testbed-node-1] 2026-02-08 
04:05:26.894547 | orchestrator | changed: [testbed-node-2] 2026-02-08 04:05:26.894564 | orchestrator | 2026-02-08 04:05:26.894583 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-08 04:05:26.894604 | orchestrator | testbed-node-0 : ok=21  changed=16  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-02-08 04:05:26.894622 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-02-08 04:05:26.894659 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-02-08 04:05:26.894678 | orchestrator | 2026-02-08 04:05:26.894697 | orchestrator | 2026-02-08 04:05:26.894713 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-08 04:05:26.894729 | orchestrator | Sunday 08 February 2026 04:05:26 +0000 (0:00:10.227) 0:01:09.300 ******* 2026-02-08 04:05:26.894745 | orchestrator | =============================================================================== 2026-02-08 04:05:26.894763 | orchestrator | placement : Running placement bootstrap container ---------------------- 13.33s 2026-02-08 04:05:26.894806 | orchestrator | placement : Restart placement-api container ---------------------------- 10.23s 2026-02-08 04:05:26.894825 | orchestrator | service-ks-register : placement | Creating endpoints -------------------- 6.29s 2026-02-08 04:05:26.894843 | orchestrator | service-ks-register : placement | Creating users ------------------------ 4.04s 2026-02-08 04:05:26.894861 | orchestrator | service-ks-register : placement | Granting user roles ------------------- 4.03s 2026-02-08 04:05:26.894899 | orchestrator | service-ks-register : placement | Creating services --------------------- 3.81s 2026-02-08 04:05:26.894918 | orchestrator | service-ks-register : placement | Creating projects --------------------- 3.61s 2026-02-08 04:05:26.894937 | orchestrator | 
service-ks-register : placement | Creating roles ------------------------ 3.08s 2026-02-08 04:05:26.894955 | orchestrator | placement : Copying over placement.conf --------------------------------- 2.77s 2026-02-08 04:05:26.894974 | orchestrator | placement : Creating placement databases user and setting permissions --- 2.10s 2026-02-08 04:05:26.894992 | orchestrator | placement : Creating placement databases -------------------------------- 2.02s 2026-02-08 04:05:26.895011 | orchestrator | placement : Copying over config.json files for services ----------------- 1.69s 2026-02-08 04:05:26.895028 | orchestrator | service-cert-copy : placement | Copying over extra CA certificates ------ 1.62s 2026-02-08 04:05:26.895047 | orchestrator | placement : Copying over placement-api wsgi configuration --------------- 1.54s 2026-02-08 04:05:26.895066 | orchestrator | placement : Copying over migrate-db.rc.j2 configuration ----------------- 1.32s 2026-02-08 04:05:26.895085 | orchestrator | placement : Check placement containers ---------------------------------- 1.12s 2026-02-08 04:05:26.895104 | orchestrator | placement : Ensuring config directories exist --------------------------- 1.10s 2026-02-08 04:05:26.895122 | orchestrator | service-cert-copy : placement | Copying over backend internal TLS key --- 0.78s 2026-02-08 04:05:26.895141 | orchestrator | placement : Copying over existing policy file --------------------------- 0.78s 2026-02-08 04:05:26.895156 | orchestrator | placement : include_tasks ----------------------------------------------- 0.57s 2026-02-08 04:05:29.506309 | orchestrator | 2026-02-08 04:05:29 | INFO  | Task 13fdcaf5-5ed9-463f-a4c8-f42054d4c55e (neutron) was prepared for execution. 2026-02-08 04:05:29.506395 | orchestrator | 2026-02-08 04:05:29 | INFO  | It takes a moment until task 13fdcaf5-5ed9-463f-a4c8-f42054d4c55e (neutron) has been started and output is visible here. 
2026-02-08 04:06:18.627574 | orchestrator | 2026-02-08 04:06:18.627696 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-08 04:06:18.627716 | orchestrator | 2026-02-08 04:06:18.627729 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-08 04:06:18.627742 | orchestrator | Sunday 08 February 2026 04:05:33 +0000 (0:00:00.268) 0:00:00.268 ******* 2026-02-08 04:06:18.627754 | orchestrator | ok: [testbed-node-0] 2026-02-08 04:06:18.627766 | orchestrator | ok: [testbed-node-1] 2026-02-08 04:06:18.627776 | orchestrator | ok: [testbed-node-2] 2026-02-08 04:06:18.627787 | orchestrator | ok: [testbed-node-3] 2026-02-08 04:06:18.627798 | orchestrator | ok: [testbed-node-4] 2026-02-08 04:06:18.627808 | orchestrator | ok: [testbed-node-5] 2026-02-08 04:06:18.627819 | orchestrator | 2026-02-08 04:06:18.627831 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-08 04:06:18.627842 | orchestrator | Sunday 08 February 2026 04:05:34 +0000 (0:00:00.808) 0:00:01.076 ******* 2026-02-08 04:06:18.627853 | orchestrator | ok: [testbed-node-0] => (item=enable_neutron_True) 2026-02-08 04:06:18.627864 | orchestrator | ok: [testbed-node-1] => (item=enable_neutron_True) 2026-02-08 04:06:18.627875 | orchestrator | ok: [testbed-node-2] => (item=enable_neutron_True) 2026-02-08 04:06:18.627886 | orchestrator | ok: [testbed-node-3] => (item=enable_neutron_True) 2026-02-08 04:06:18.627897 | orchestrator | ok: [testbed-node-4] => (item=enable_neutron_True) 2026-02-08 04:06:18.627907 | orchestrator | ok: [testbed-node-5] => (item=enable_neutron_True) 2026-02-08 04:06:18.627918 | orchestrator | 2026-02-08 04:06:18.627929 | orchestrator | PLAY [Apply role neutron] ****************************************************** 2026-02-08 04:06:18.627940 | orchestrator | 2026-02-08 04:06:18.627970 | orchestrator | TASK [neutron : include_tasks] 
************************************************* 2026-02-08 04:06:18.627993 | orchestrator | Sunday 08 February 2026 04:05:35 +0000 (0:00:00.749) 0:00:01.825 ******* 2026-02-08 04:06:18.628006 | orchestrator | included: /ansible/roles/neutron/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-02-08 04:06:18.628045 | orchestrator | 2026-02-08 04:06:18.628057 | orchestrator | TASK [neutron : Get container facts] ******************************************* 2026-02-08 04:06:18.628068 | orchestrator | Sunday 08 February 2026 04:05:36 +0000 (0:00:01.324) 0:00:03.150 ******* 2026-02-08 04:06:18.628079 | orchestrator | ok: [testbed-node-1] 2026-02-08 04:06:18.628090 | orchestrator | ok: [testbed-node-0] 2026-02-08 04:06:18.628103 | orchestrator | ok: [testbed-node-2] 2026-02-08 04:06:18.628116 | orchestrator | ok: [testbed-node-3] 2026-02-08 04:06:18.628128 | orchestrator | ok: [testbed-node-4] 2026-02-08 04:06:18.628141 | orchestrator | ok: [testbed-node-5] 2026-02-08 04:06:18.628154 | orchestrator | 2026-02-08 04:06:18.628166 | orchestrator | TASK [neutron : Get container volume facts] ************************************ 2026-02-08 04:06:18.628193 | orchestrator | Sunday 08 February 2026 04:05:38 +0000 (0:00:01.330) 0:00:04.481 ******* 2026-02-08 04:06:18.628207 | orchestrator | ok: [testbed-node-1] 2026-02-08 04:06:18.628220 | orchestrator | ok: [testbed-node-0] 2026-02-08 04:06:18.628265 | orchestrator | ok: [testbed-node-2] 2026-02-08 04:06:18.628284 | orchestrator | ok: [testbed-node-3] 2026-02-08 04:06:18.628303 | orchestrator | ok: [testbed-node-4] 2026-02-08 04:06:18.628324 | orchestrator | ok: [testbed-node-5] 2026-02-08 04:06:18.628344 | orchestrator | 2026-02-08 04:06:18.628362 | orchestrator | TASK [neutron : Check for ML2/OVN presence] ************************************ 2026-02-08 04:06:18.628373 | orchestrator | Sunday 08 February 2026 04:05:39 +0000 (0:00:01.116) 0:00:05.598 ******* 
2026-02-08 04:06:18.628384 | orchestrator | ok: [testbed-node-0] => { 2026-02-08 04:06:18.628396 | orchestrator |  "changed": false, 2026-02-08 04:06:18.628407 | orchestrator |  "msg": "All assertions passed" 2026-02-08 04:06:18.628418 | orchestrator | } 2026-02-08 04:06:18.628429 | orchestrator | ok: [testbed-node-1] => { 2026-02-08 04:06:18.628439 | orchestrator |  "changed": false, 2026-02-08 04:06:18.628450 | orchestrator |  "msg": "All assertions passed" 2026-02-08 04:06:18.628461 | orchestrator | } 2026-02-08 04:06:18.628471 | orchestrator | ok: [testbed-node-2] => { 2026-02-08 04:06:18.628482 | orchestrator |  "changed": false, 2026-02-08 04:06:18.628492 | orchestrator |  "msg": "All assertions passed" 2026-02-08 04:06:18.628503 | orchestrator | } 2026-02-08 04:06:18.628514 | orchestrator | ok: [testbed-node-3] => { 2026-02-08 04:06:18.628524 | orchestrator |  "changed": false, 2026-02-08 04:06:18.628535 | orchestrator |  "msg": "All assertions passed" 2026-02-08 04:06:18.628546 | orchestrator | } 2026-02-08 04:06:18.628557 | orchestrator | ok: [testbed-node-4] => { 2026-02-08 04:06:18.628568 | orchestrator |  "changed": false, 2026-02-08 04:06:18.628578 | orchestrator |  "msg": "All assertions passed" 2026-02-08 04:06:18.628589 | orchestrator | } 2026-02-08 04:06:18.628600 | orchestrator | ok: [testbed-node-5] => { 2026-02-08 04:06:18.628610 | orchestrator |  "changed": false, 2026-02-08 04:06:18.628622 | orchestrator |  "msg": "All assertions passed" 2026-02-08 04:06:18.628633 | orchestrator | } 2026-02-08 04:06:18.628644 | orchestrator | 2026-02-08 04:06:18.628655 | orchestrator | TASK [neutron : Check for ML2/OVS presence] ************************************ 2026-02-08 04:06:18.628671 | orchestrator | Sunday 08 February 2026 04:05:40 +0000 (0:00:00.865) 0:00:06.463 ******* 2026-02-08 04:06:18.628689 | orchestrator | skipping: [testbed-node-0] 2026-02-08 04:06:18.628706 | orchestrator | skipping: [testbed-node-1] 2026-02-08 04:06:18.628723 | orchestrator 
| skipping: [testbed-node-2] 2026-02-08 04:06:18.628740 | orchestrator | skipping: [testbed-node-3] 2026-02-08 04:06:18.628758 | orchestrator | skipping: [testbed-node-4] 2026-02-08 04:06:18.628775 | orchestrator | skipping: [testbed-node-5] 2026-02-08 04:06:18.628792 | orchestrator | 2026-02-08 04:06:18.628809 | orchestrator | TASK [service-ks-register : neutron | Creating services] *********************** 2026-02-08 04:06:18.628828 | orchestrator | Sunday 08 February 2026 04:05:40 +0000 (0:00:00.659) 0:00:07.122 ******* 2026-02-08 04:06:18.628847 | orchestrator | changed: [testbed-node-0] => (item=neutron (network)) 2026-02-08 04:06:18.628865 | orchestrator | 2026-02-08 04:06:18.628881 | orchestrator | TASK [service-ks-register : neutron | Creating endpoints] ********************** 2026-02-08 04:06:18.628903 | orchestrator | Sunday 08 February 2026 04:05:44 +0000 (0:00:03.803) 0:00:10.925 ******* 2026-02-08 04:06:18.628915 | orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api-int.testbed.osism.xyz:9696 -> internal) 2026-02-08 04:06:18.628926 | orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api.testbed.osism.xyz:9696 -> public) 2026-02-08 04:06:18.628937 | orchestrator | 2026-02-08 04:06:18.628973 | orchestrator | TASK [service-ks-register : neutron | Creating projects] *********************** 2026-02-08 04:06:18.628993 | orchestrator | Sunday 08 February 2026 04:05:51 +0000 (0:00:06.533) 0:00:17.458 ******* 2026-02-08 04:06:18.629010 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-02-08 04:06:18.629028 | orchestrator | 2026-02-08 04:06:18.629046 | orchestrator | TASK [service-ks-register : neutron | Creating users] ************************** 2026-02-08 04:06:18.629066 | orchestrator | Sunday 08 February 2026 04:05:54 +0000 (0:00:03.169) 0:00:20.628 ******* 2026-02-08 04:06:18.629085 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-02-08 04:06:18.629105 | orchestrator | changed: 
[testbed-node-0] => (item=neutron -> service) 2026-02-08 04:06:18.629119 | orchestrator | 2026-02-08 04:06:18.629130 | orchestrator | TASK [service-ks-register : neutron | Creating roles] ************************** 2026-02-08 04:06:18.629141 | orchestrator | Sunday 08 February 2026 04:05:58 +0000 (0:00:03.693) 0:00:24.321 ******* 2026-02-08 04:06:18.629152 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-02-08 04:06:18.629163 | orchestrator | 2026-02-08 04:06:18.629173 | orchestrator | TASK [service-ks-register : neutron | Granting user roles] ********************* 2026-02-08 04:06:18.629184 | orchestrator | Sunday 08 February 2026 04:06:00 +0000 (0:00:02.814) 0:00:27.136 ******* 2026-02-08 04:06:18.629195 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service -> admin) 2026-02-08 04:06:18.629206 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service -> service) 2026-02-08 04:06:18.629217 | orchestrator | 2026-02-08 04:06:18.629255 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2026-02-08 04:06:18.629267 | orchestrator | Sunday 08 February 2026 04:06:08 +0000 (0:00:08.019) 0:00:35.156 ******* 2026-02-08 04:06:18.629278 | orchestrator | skipping: [testbed-node-0] 2026-02-08 04:06:18.629289 | orchestrator | skipping: [testbed-node-1] 2026-02-08 04:06:18.629300 | orchestrator | skipping: [testbed-node-2] 2026-02-08 04:06:18.629311 | orchestrator | skipping: [testbed-node-3] 2026-02-08 04:06:18.629322 | orchestrator | skipping: [testbed-node-4] 2026-02-08 04:06:18.629333 | orchestrator | skipping: [testbed-node-5] 2026-02-08 04:06:18.629344 | orchestrator | 2026-02-08 04:06:18.629355 | orchestrator | TASK [Load and persist kernel modules] ***************************************** 2026-02-08 04:06:18.629366 | orchestrator | Sunday 08 February 2026 04:06:09 +0000 (0:00:00.855) 0:00:36.011 ******* 2026-02-08 04:06:18.629377 | orchestrator | skipping: [testbed-node-1] 2026-02-08 
04:06:18.629388 | orchestrator | skipping: [testbed-node-0] 2026-02-08 04:06:18.629399 | orchestrator | skipping: [testbed-node-2] 2026-02-08 04:06:18.629410 | orchestrator | skipping: [testbed-node-4] 2026-02-08 04:06:18.629421 | orchestrator | skipping: [testbed-node-3] 2026-02-08 04:06:18.629431 | orchestrator | skipping: [testbed-node-5] 2026-02-08 04:06:18.629442 | orchestrator | 2026-02-08 04:06:18.629453 | orchestrator | TASK [neutron : Check IPv6 support] ******************************************** 2026-02-08 04:06:18.629473 | orchestrator | Sunday 08 February 2026 04:06:11 +0000 (0:00:02.250) 0:00:38.262 ******* 2026-02-08 04:06:18.629485 | orchestrator | ok: [testbed-node-0] 2026-02-08 04:06:18.629496 | orchestrator | ok: [testbed-node-1] 2026-02-08 04:06:18.629506 | orchestrator | ok: [testbed-node-2] 2026-02-08 04:06:18.629517 | orchestrator | ok: [testbed-node-3] 2026-02-08 04:06:18.629528 | orchestrator | ok: [testbed-node-4] 2026-02-08 04:06:18.629539 | orchestrator | ok: [testbed-node-5] 2026-02-08 04:06:18.629549 | orchestrator | 2026-02-08 04:06:18.629560 | orchestrator | TASK [Setting sysctl values] *************************************************** 2026-02-08 04:06:18.629572 | orchestrator | Sunday 08 February 2026 04:06:13 +0000 (0:00:01.208) 0:00:39.470 ******* 2026-02-08 04:06:18.629592 | orchestrator | skipping: [testbed-node-1] 2026-02-08 04:06:18.629603 | orchestrator | skipping: [testbed-node-0] 2026-02-08 04:06:18.629614 | orchestrator | skipping: [testbed-node-2] 2026-02-08 04:06:18.629625 | orchestrator | skipping: [testbed-node-3] 2026-02-08 04:06:18.629636 | orchestrator | skipping: [testbed-node-4] 2026-02-08 04:06:18.629647 | orchestrator | skipping: [testbed-node-5] 2026-02-08 04:06:18.629657 | orchestrator | 2026-02-08 04:06:18.629668 | orchestrator | TASK [neutron : Ensuring config directories exist] ***************************** 2026-02-08 04:06:18.629679 | orchestrator | Sunday 08 February 2026 04:06:15 +0000 (0:00:02.651) 
0:00:42.122 ******* 2026-02-08 04:06:18.629694 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-08 04:06:18.629723 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-08 04:06:24.062497 | orchestrator | changed: [testbed-node-3] => (item={'key': 
'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-02-08 04:06:24.062655 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-08 04:06:24.062716 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-02-08 04:06:24.062737 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-02-08 04:06:24.062749 | orchestrator | 2026-02-08 04:06:24.062769 | orchestrator | TASK [neutron : Check if extra ml2 plugins exists] ***************************** 2026-02-08 04:06:24.062792 | orchestrator | Sunday 08 February 2026 04:06:18 +0000 (0:00:02.787) 0:00:44.910 ******* 2026-02-08 04:06:24.062811 | orchestrator | [WARNING]: Skipped 2026-02-08 04:06:24.062828 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' path 2026-02-08 04:06:24.062840 | orchestrator | due to this access issue: 2026-02-08 04:06:24.062858 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' is not 2026-02-08 04:06:24.062877 | orchestrator | a directory 2026-02-08 04:06:24.062897 
| orchestrator | ok: [testbed-node-0 -> localhost] 2026-02-08 04:06:24.062917 | orchestrator | 2026-02-08 04:06:24.062935 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2026-02-08 04:06:24.062949 | orchestrator | Sunday 08 February 2026 04:06:19 +0000 (0:00:00.870) 0:00:45.780 ******* 2026-02-08 04:06:24.062969 | orchestrator | included: /ansible/roles/neutron/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-02-08 04:06:24.062991 | orchestrator | 2026-02-08 04:06:24.063012 | orchestrator | TASK [service-cert-copy : neutron | Copying over extra CA certificates] ******** 2026-02-08 04:06:24.063051 | orchestrator | Sunday 08 February 2026 04:06:20 +0000 (0:00:01.341) 0:00:47.121 ******* 2026-02-08 04:06:24.063075 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-08 04:06:24.063116 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 
'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-08 04:06:24.063136 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-08 04:06:24.063158 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-02-08 04:06:24.063192 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-02-08 04:06:28.964663 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-02-08 04:06:28.964813 | orchestrator | 2026-02-08 04:06:28.964841 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS certificate] *** 2026-02-08 04:06:28.964856 | orchestrator | Sunday 08 February 2026 04:06:24 +0000 (0:00:03.221) 0:00:50.343 ******* 2026-02-08 04:06:28.964885 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-08 04:06:28.964898 | orchestrator | skipping: [testbed-node-1] 2026-02-08 04:06:28.964911 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 
'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-08 04:06:28.964923 | orchestrator | skipping: [testbed-node-2] 2026-02-08 04:06:28.964934 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-08 04:06:28.964945 | orchestrator | skipping: [testbed-node-0] 2026-02-08 04:06:28.964976 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-08 04:06:28.964996 | orchestrator | skipping: [testbed-node-5] 2026-02-08 04:06:28.965007 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-08 04:06:28.965018 | orchestrator | skipping: [testbed-node-4] 2026-02-08 04:06:28.965034 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-08 04:06:28.965046 | orchestrator | skipping: [testbed-node-3] 
2026-02-08 04:06:28.965057 | orchestrator | 2026-02-08 04:06:28.965068 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS key] ***** 2026-02-08 04:06:28.965080 | orchestrator | Sunday 08 February 2026 04:06:26 +0000 (0:00:01.979) 0:00:52.322 ******* 2026-02-08 04:06:28.965091 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-08 04:06:28.965103 | orchestrator | skipping: [testbed-node-1] 2026-02-08 04:06:28.965121 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': 
True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-08 04:06:34.551405 | orchestrator | skipping: [testbed-node-2] 2026-02-08 04:06:34.551562 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-08 04:06:34.551598 | orchestrator | skipping: [testbed-node-0] 2026-02-08 04:06:34.551642 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-08 04:06:34.551665 | orchestrator | skipping: [testbed-node-3] 2026-02-08 04:06:34.551684 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-08 04:06:34.551705 | orchestrator | skipping: [testbed-node-5] 2026-02-08 04:06:34.551724 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-08 04:06:34.551743 | orchestrator | skipping: [testbed-node-4] 2026-02-08 04:06:34.551763 | orchestrator | 2026-02-08 
04:06:34.551782 | orchestrator | TASK [neutron : Creating TLS backend PEM File] ********************************* 2026-02-08 04:06:34.551803 | orchestrator | Sunday 08 February 2026 04:06:28 +0000 (0:00:02.924) 0:00:55.246 ******* 2026-02-08 04:06:34.551822 | orchestrator | skipping: [testbed-node-2] 2026-02-08 04:06:34.551843 | orchestrator | skipping: [testbed-node-0] 2026-02-08 04:06:34.551900 | orchestrator | skipping: [testbed-node-1] 2026-02-08 04:06:34.551920 | orchestrator | skipping: [testbed-node-3] 2026-02-08 04:06:34.551938 | orchestrator | skipping: [testbed-node-4] 2026-02-08 04:06:34.551957 | orchestrator | skipping: [testbed-node-5] 2026-02-08 04:06:34.551976 | orchestrator | 2026-02-08 04:06:34.551995 | orchestrator | TASK [neutron : Check if policies shall be overwritten] ************************ 2026-02-08 04:06:34.552014 | orchestrator | Sunday 08 February 2026 04:06:31 +0000 (0:00:02.383) 0:00:57.630 ******* 2026-02-08 04:06:34.552033 | orchestrator | skipping: [testbed-node-0] 2026-02-08 04:06:34.552052 | orchestrator | 2026-02-08 04:06:34.552072 | orchestrator | TASK [neutron : Set neutron policy file] *************************************** 2026-02-08 04:06:34.552120 | orchestrator | Sunday 08 February 2026 04:06:31 +0000 (0:00:00.147) 0:00:57.778 ******* 2026-02-08 04:06:34.552141 | orchestrator | skipping: [testbed-node-0] 2026-02-08 04:06:34.552159 | orchestrator | skipping: [testbed-node-1] 2026-02-08 04:06:34.552177 | orchestrator | skipping: [testbed-node-2] 2026-02-08 04:06:34.552196 | orchestrator | skipping: [testbed-node-3] 2026-02-08 04:06:34.552245 | orchestrator | skipping: [testbed-node-4] 2026-02-08 04:06:34.552262 | orchestrator | skipping: [testbed-node-5] 2026-02-08 04:06:34.552281 | orchestrator | 2026-02-08 04:06:34.552298 | orchestrator | TASK [neutron : Copying over existing policy file] ***************************** 2026-02-08 04:06:34.552309 | orchestrator | Sunday 08 February 2026 04:06:32 +0000 (0:00:00.627) 
0:00:58.405 ******* 2026-02-08 04:06:34.552332 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-08 04:06:34.552344 | orchestrator | skipping: [testbed-node-0] 2026-02-08 04:06:34.552356 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-08 
04:06:34.552367 | orchestrator | skipping: [testbed-node-1] 2026-02-08 04:06:34.552378 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-08 04:06:34.552402 | orchestrator | skipping: [testbed-node-2] 2026-02-08 04:06:34.552414 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-08 04:06:34.552425 | orchestrator | skipping: [testbed-node-3] 2026-02-08 04:06:34.552448 | orchestrator | skipping: 
[testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-08 04:06:43.341781 | orchestrator | skipping: [testbed-node-4] 2026-02-08 04:06:43.341886 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-08 04:06:43.341899 | orchestrator | skipping: [testbed-node-5] 2026-02-08 04:06:43.341907 | orchestrator | 2026-02-08 04:06:43.341915 | orchestrator | TASK [neutron : Copying over config.json files for services] ******************* 2026-02-08 04:06:43.341924 | orchestrator | Sunday 08 February 2026 04:06:34 +0000 (0:00:02.425) 0:01:00.830 ******* 2026-02-08 04:06:43.341932 | 
orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-08 04:06:43.341964 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-08 04:06:43.341971 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 
'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-08 04:06:43.341994 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-02-08 04:06:43.342007 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-02-08 04:06:43.342064 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-02-08 04:06:43.342080 | orchestrator | 2026-02-08 04:06:43.342088 | orchestrator | TASK [neutron : Copying over neutron.conf] ************************************* 2026-02-08 04:06:43.342096 | orchestrator | Sunday 08 February 2026 04:06:37 +0000 (0:00:03.191) 0:01:04.022 ******* 2026-02-08 04:06:43.342103 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-08 04:06:43.342111 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-08 04:06:43.342130 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-08 04:06:48.490464 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-02-08 04:06:48.490552 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-02-08 
04:06:48.490584 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-02-08 04:06:48.490593 | orchestrator | 2026-02-08 04:06:48.490601 | orchestrator | TASK [neutron : Copying over neutron_vpnaas.conf] ****************************** 2026-02-08 04:06:48.490610 | orchestrator | Sunday 08 February 2026 04:06:43 +0000 (0:00:05.602) 0:01:09.625 ******* 2026-02-08 04:06:48.490618 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': 
'9696'}}}})
2026-02-08 04:06:48.490626 | orchestrator | skipping: [testbed-node-0]
2026-02-08 04:06:48.490661 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-02-08 04:06:48.490669 | orchestrator | skipping: [testbed-node-1]
2026-02-08 04:06:48.490676 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-02-08 04:06:48.490689 | orchestrator | skipping: [testbed-node-2]
2026-02-08 04:06:48.490696 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-02-08 04:06:48.490703 | orchestrator | skipping: [testbed-node-3]
2026-02-08 04:06:48.490710 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-02-08 04:06:48.490717 | orchestrator | skipping: [testbed-node-4]
2026-02-08 04:06:48.490723 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-02-08 04:06:48.490730 | orchestrator | skipping: [testbed-node-5]
2026-02-08 04:06:48.490737 | orchestrator |
2026-02-08 04:06:48.490744 | orchestrator | TASK [neutron : Copying over ssh key] ******************************************
2026-02-08 04:06:48.490751 | orchestrator | Sunday 08 February 2026 04:06:45 +0000 (0:00:02.298) 0:01:11.923 *******
2026-02-08 04:06:48.490758 | orchestrator | skipping: [testbed-node-3]
2026-02-08 04:06:48.490765 | orchestrator | skipping: [testbed-node-5]
2026-02-08 04:06:48.490771 | orchestrator | skipping: [testbed-node-4]
2026-02-08 04:06:48.490778 | orchestrator | changed: [testbed-node-0]
2026-02-08 04:06:48.490789 | orchestrator | changed: [testbed-node-1]
2026-02-08 04:06:48.490796 | orchestrator | changed: [testbed-node-2]
2026-02-08 04:06:48.490803 | orchestrator |
2026-02-08 04:06:48.490810 | orchestrator | TASK [neutron : Copying over ml2_conf.ini] *************************************
2026-02-08 04:06:48.490826 | orchestrator | Sunday 08 February 2026 04:06:48 +0000 (0:00:02.847) 0:01:14.771 *******
2026-02-08 04:07:08.488905 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-02-08 04:07:08.489045 | orchestrator | skipping: [testbed-node-3]
2026-02-08 04:07:08.489063 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-02-08 04:07:08.489074 | orchestrator | skipping: [testbed-node-5]
2026-02-08 04:07:08.489084 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-02-08 04:07:08.489093 | orchestrator | skipping: [testbed-node-4]
2026-02-08 04:07:08.489110 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-02-08 04:07:08.489154 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external':
False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-02-08 04:07:08.489331 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-02-08 04:07:08.489366 | orchestrator |
2026-02-08 04:07:08.489376 | orchestrator | TASK [neutron : Copying over linuxbridge_agent.ini] ****************************
2026-02-08 04:07:08.489387 | orchestrator | Sunday 08 February 2026 04:06:52 +0000 (0:00:03.530) 0:01:18.301 *******
2026-02-08 04:07:08.489396 | orchestrator | skipping: [testbed-node-0]
2026-02-08 04:07:08.489405 | orchestrator | skipping: [testbed-node-2]
2026-02-08 04:07:08.489414 | orchestrator | skipping: [testbed-node-1]
2026-02-08 04:07:08.489423 | orchestrator | skipping: [testbed-node-3]
2026-02-08 04:07:08.489432 | orchestrator | skipping: [testbed-node-4]
2026-02-08 04:07:08.489442 | orchestrator | skipping: [testbed-node-5]
2026-02-08 04:07:08.489453 | orchestrator |
2026-02-08 04:07:08.489487 | orchestrator | TASK [neutron : Copying over openvswitch_agent.ini] ****************************
2026-02-08 04:07:08.489498 | orchestrator | Sunday 08 February 2026 04:06:54 +0000 (0:00:02.358) 0:01:20.660 *******
2026-02-08 04:07:08.489520 | orchestrator | skipping: [testbed-node-1]
2026-02-08 04:07:08.489539 | orchestrator | skipping: [testbed-node-0]
2026-02-08 04:07:08.489550 | orchestrator | skipping: [testbed-node-2]
2026-02-08 04:07:08.489560 | orchestrator | skipping: [testbed-node-3]
2026-02-08 04:07:08.489571 | orchestrator | skipping: [testbed-node-4]
2026-02-08 04:07:08.489581 | orchestrator | skipping: [testbed-node-5]
2026-02-08 04:07:08.489592 | orchestrator |
2026-02-08 04:07:08.489602 | orchestrator | TASK [neutron : Copying over sriov_agent.ini] **********************************
2026-02-08 04:07:08.489613 | orchestrator | Sunday 08 February 2026 04:06:56 +0000 (0:00:02.307) 0:01:22.968 *******
2026-02-08 04:07:08.489624 | orchestrator | skipping: [testbed-node-0]
2026-02-08 04:07:08.489634 | orchestrator | skipping: [testbed-node-3]
2026-02-08 04:07:08.489644 | orchestrator | skipping: [testbed-node-1]
2026-02-08 04:07:08.489655 | orchestrator | skipping: [testbed-node-2]
2026-02-08 04:07:08.489665 | orchestrator | skipping: [testbed-node-4]
2026-02-08 04:07:08.489675 | orchestrator | skipping: [testbed-node-5]
2026-02-08 04:07:08.489686 | orchestrator |
2026-02-08 04:07:08.489697 | orchestrator | TASK [neutron : Copying over mlnx_agent.ini] ***********************************
2026-02-08 04:07:08.489709 | orchestrator | Sunday 08 February 2026 04:06:58 +0000 (0:00:02.259) 0:01:25.228 *******
2026-02-08 04:07:08.489720 | orchestrator | skipping: [testbed-node-1]
2026-02-08 04:07:08.489731 | orchestrator | skipping: [testbed-node-2]
2026-02-08 04:07:08.489741 | orchestrator | skipping: [testbed-node-0]
2026-02-08 04:07:08.489751 | orchestrator | skipping: [testbed-node-3]
2026-02-08 04:07:08.489761 | orchestrator | skipping: [testbed-node-4]
2026-02-08 04:07:08.489772 | orchestrator | skipping: [testbed-node-5]
2026-02-08 04:07:08.489783 | orchestrator |
2026-02-08 04:07:08.489794 | orchestrator | TASK [neutron : Copying over eswitchd.conf] ************************************
2026-02-08 04:07:08.489818 | orchestrator | Sunday 08 February 2026 04:07:01 +0000 (0:00:02.324) 0:01:27.552 *******
2026-02-08 04:07:08.489827 | orchestrator | skipping: [testbed-node-1]
2026-02-08 04:07:08.489836 | orchestrator | skipping: [testbed-node-0]
2026-02-08 04:07:08.489844 | orchestrator | skipping: [testbed-node-2]
2026-02-08 04:07:08.489853 | orchestrator | skipping: [testbed-node-3]
2026-02-08 04:07:08.489862 | orchestrator | skipping: [testbed-node-4]
2026-02-08 04:07:08.489871 | orchestrator | skipping: [testbed-node-5]
2026-02-08 04:07:08.489879 | orchestrator |
2026-02-08 04:07:08.489888 | orchestrator | TASK [neutron : Copying over dhcp_agent.ini] ***********************************
2026-02-08 04:07:08.489897 | orchestrator | Sunday 08 February 2026 04:07:03 +0000 (0:00:02.311) 0:01:29.863 *******
2026-02-08 04:07:08.489907 | orchestrator | skipping: [testbed-node-2]
2026-02-08 04:07:08.489915 | orchestrator | skipping: [testbed-node-0]
2026-02-08 04:07:08.489924 | orchestrator | skipping: [testbed-node-1]
2026-02-08 04:07:08.489933 | orchestrator | skipping: [testbed-node-3]
2026-02-08 04:07:08.489942 | orchestrator | skipping: [testbed-node-4]
2026-02-08 04:07:08.489950 | orchestrator | skipping: [testbed-node-5]
2026-02-08 04:07:08.489959 | orchestrator |
2026-02-08 04:07:08.489968 | orchestrator | TASK [neutron : Copying over dnsmasq.conf] *************************************
2026-02-08 04:07:08.489977 | orchestrator | Sunday 08 February 2026 04:07:06 +0000 (0:00:02.605) 0:01:32.469 *******
2026-02-08 04:07:08.489986 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)
2026-02-08 04:07:08.489995 | orchestrator | skipping: [testbed-node-0]
2026-02-08 04:07:08.490004 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)
2026-02-08 04:07:08.490064 | orchestrator | skipping: [testbed-node-1]
2026-02-08 04:07:08.490083 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)
2026-02-08 04:07:08.490092 | orchestrator | skipping: [testbed-node-2]
2026-02-08 04:07:08.490101 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)
2026-02-08 04:07:08.490123 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)
2026-02-08 04:07:12.735415 | orchestrator | skipping: [testbed-node-5]
2026-02-08 04:07:12.735509 | orchestrator | skipping: [testbed-node-4]
2026-02-08 04:07:12.735522 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)
2026-02-08 04:07:12.735532 | orchestrator | skipping: [testbed-node-3]
2026-02-08 04:07:12.735540 | orchestrator |
2026-02-08 04:07:12.735550 | orchestrator | TASK [neutron : Copying over l3_agent.ini] *************************************
2026-02-08 04:07:12.735558 | orchestrator | Sunday 08 February 2026 04:07:08 +0000 (0:00:02.293) 0:01:34.762 *******
2026-02-08 04:07:12.735569 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-02-08 04:07:12.735580 | orchestrator | skipping: [testbed-node-1]
2026-02-08 04:07:12.735589 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-02-08 04:07:12.735619 | orchestrator | skipping: [testbed-node-0]
2026-02-08 04:07:12.735628 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-02-08 04:07:12.735636 | orchestrator | skipping: [testbed-node-2]
2026-02-08 04:07:12.735657 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-02-08 04:07:12.735685 | orchestrator | skipping: [testbed-node-3]
2026-02-08 04:07:12.735712 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-02-08 04:07:12.735721 | orchestrator | skipping: [testbed-node-4]
2026-02-08 04:07:12.735729 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-02-08 04:07:12.735744 | orchestrator | skipping: [testbed-node-5]
2026-02-08 04:07:12.735752 | orchestrator |
2026-02-08 04:07:12.735760 | orchestrator | TASK [neutron : Copying over fwaas_driver.ini] *********************************
2026-02-08 04:07:12.735768 | orchestrator | Sunday 08 February 2026 04:07:10 +0000 (0:00:02.100) 0:01:36.862 *******
2026-02-08 04:07:12.735776 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled':
True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-02-08 04:07:12.735785 | orchestrator | skipping: [testbed-node-0]
2026-02-08 04:07:12.735793 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-02-08 04:07:12.735801 | orchestrator | skipping: [testbed-node-1]
2026-02-08 04:07:12.735822 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-02-08 04:07:40.548334 | orchestrator | skipping: [testbed-node-3]
2026-02-08 04:07:40.548444 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-02-08 04:07:40.548483 | orchestrator | skipping: [testbed-node-2]
2026-02-08 04:07:40.548493 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-02-08 04:07:40.548503 | orchestrator | skipping: [testbed-node-4]
2026-02-08 04:07:40.548512 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-02-08 04:07:40.548520 | orchestrator | skipping: [testbed-node-5]
2026-02-08 04:07:40.548528 | orchestrator |
2026-02-08 04:07:40.548536 | orchestrator | TASK [neutron : Copying over metadata_agent.ini] *******************************
2026-02-08 04:07:40.548546 | orchestrator | Sunday 08 February 2026 04:07:12 +0000 (0:00:02.153) 0:01:39.016 *******
2026-02-08 04:07:40.548554 | orchestrator | skipping: [testbed-node-0]
2026-02-08 04:07:40.548562 | orchestrator | skipping: [testbed-node-1]
2026-02-08 04:07:40.548570 | orchestrator | skipping: [testbed-node-2]
2026-02-08 04:07:40.548578 | orchestrator | skipping: [testbed-node-3]
2026-02-08 04:07:40.548586 | orchestrator | skipping: [testbed-node-4]
2026-02-08 04:07:40.548594 | orchestrator | skipping: [testbed-node-5]
2026-02-08 04:07:40.548602 | orchestrator |
2026-02-08 04:07:40.548610 | orchestrator | TASK [neutron : Copying over neutron_ovn_metadata_agent.ini] *******************
2026-02-08 04:07:40.548619 | orchestrator | Sunday 08 February 2026 04:07:15 +0000 (0:00:02.352) 0:01:41.369 *******
2026-02-08 04:07:40.548627 | orchestrator | skipping: [testbed-node-0]
2026-02-08 04:07:40.548635 | orchestrator | skipping: [testbed-node-2]
2026-02-08 04:07:40.548642 | orchestrator | skipping: [testbed-node-1]
2026-02-08 04:07:40.548650 | orchestrator | changed: [testbed-node-3]
2026-02-08 04:07:40.548658 | orchestrator | changed: [testbed-node-4]
2026-02-08 04:07:40.548666 | orchestrator | changed: [testbed-node-5]
2026-02-08 04:07:40.548674 | orchestrator |
2026-02-08 04:07:40.548683 | orchestrator | TASK [neutron : Copying over metering_agent.ini] *******************************
2026-02-08 04:07:40.548692 | orchestrator | Sunday 08 February 2026 04:07:18 +0000 (0:00:03.811) 0:01:45.181 *******
2026-02-08 04:07:40.548715 | orchestrator | skipping: [testbed-node-0]
2026-02-08 04:07:40.548725 | orchestrator | skipping: [testbed-node-2]
2026-02-08 04:07:40.548734 | orchestrator | skipping: [testbed-node-1]
2026-02-08 04:07:40.548743 | orchestrator | skipping: [testbed-node-4]
2026-02-08 04:07:40.548752 | orchestrator | skipping: [testbed-node-3]
2026-02-08 04:07:40.548762 | orchestrator | skipping: [testbed-node-5]
2026-02-08 04:07:40.548771 | orchestrator |
2026-02-08 04:07:40.548780 | orchestrator | TASK [neutron : Copying over ironic_neutron_agent.ini] *************************
2026-02-08 04:07:40.548798 | orchestrator | Sunday 08 February 2026 04:07:21 +0000 (0:00:02.438) 0:01:47.619 *******
2026-02-08 04:07:40.548808 | orchestrator | skipping: [testbed-node-0]
2026-02-08 04:07:40.548819 | orchestrator | skipping: [testbed-node-1]
2026-02-08 04:07:40.548829 | orchestrator | skipping: [testbed-node-2]
2026-02-08 04:07:40.548839 | orchestrator | skipping: [testbed-node-4]
2026-02-08 04:07:40.548866 | orchestrator | skipping: [testbed-node-3]
2026-02-08 04:07:40.548877 | orchestrator | skipping: [testbed-node-5]
2026-02-08 04:07:40.548887 | orchestrator |
2026-02-08 04:07:40.548898 | orchestrator | TASK [neutron : Copying over bgp_dragent.ini] **********************************
2026-02-08 04:07:40.548907 | orchestrator | Sunday 08 February 2026 04:07:23 +0000 (0:00:02.262) 0:01:49.882 *******
2026-02-08 04:07:40.548916 | orchestrator | skipping: [testbed-node-0]
2026-02-08 04:07:40.548925 | orchestrator | skipping: [testbed-node-1]
2026-02-08 04:07:40.548933 | orchestrator | skipping: [testbed-node-2]
2026-02-08 04:07:40.548940 | orchestrator | skipping: [testbed-node-3]
2026-02-08 04:07:40.548948 | orchestrator | skipping: [testbed-node-4]
2026-02-08 04:07:40.548957 | orchestrator | skipping: [testbed-node-5]
2026-02-08 04:07:40.548964 | orchestrator |
2026-02-08 04:07:40.548971 | orchestrator | TASK [neutron : Copying over ovn_agent.ini] ************************************
2026-02-08 04:07:40.548979 | orchestrator | Sunday 08 February 2026 04:07:25 +0000 (0:00:02.360) 0:01:52.243 *******
2026-02-08 04:07:40.548987 | orchestrator | skipping: [testbed-node-0]
2026-02-08 04:07:40.548995 | orchestrator | skipping: [testbed-node-3]
2026-02-08 04:07:40.549002 | orchestrator | skipping: [testbed-node-1]
2026-02-08 04:07:40.549010 | orchestrator | skipping: [testbed-node-2]
2026-02-08 04:07:40.549018 | orchestrator | skipping: [testbed-node-4]
2026-02-08 04:07:40.549026 | orchestrator | skipping: [testbed-node-5]
2026-02-08 04:07:40.549033 | orchestrator |
2026-02-08 04:07:40.549041 | orchestrator | TASK [neutron : Copying over nsx.ini] ******************************************
2026-02-08 04:07:40.549049 | orchestrator | Sunday 08 February 2026 04:07:28 +0000 (0:00:02.279) 0:01:54.522 *******
2026-02-08 04:07:40.549057 | orchestrator | skipping: [testbed-node-2]
2026-02-08 04:07:40.549065 | orchestrator | skipping: [testbed-node-0]
2026-02-08 04:07:40.549073 | orchestrator | skipping: [testbed-node-1]
2026-02-08 04:07:40.549080 | orchestrator | skipping: [testbed-node-5]
2026-02-08 04:07:40.549088 | orchestrator | skipping: [testbed-node-4]
2026-02-08 04:07:40.549096 | orchestrator | skipping: [testbed-node-3]
2026-02-08 04:07:40.549105 | orchestrator |
2026-02-08 04:07:40.549113 | orchestrator | TASK [neutron : Copy neutron-l3-agent-wrapper script] **************************
2026-02-08 04:07:40.549122 | orchestrator | Sunday 08 February 2026 04:07:30 +0000 (0:00:02.347) 0:01:56.826 *******
2026-02-08 04:07:40.549131 | orchestrator | skipping: [testbed-node-0]
2026-02-08 04:07:40.549140 | orchestrator | skipping: [testbed-node-1]
2026-02-08 04:07:40.549148 | orchestrator | skipping: [testbed-node-2]
2026-02-08 04:07:40.549156 | orchestrator | skipping: [testbed-node-4]
2026-02-08 04:07:40.549163 | orchestrator | skipping: [testbed-node-3]
2026-02-08 04:07:40.549200 | orchestrator | skipping: [testbed-node-5]
2026-02-08 04:07:40.549208 | orchestrator |
2026-02-08 04:07:40.549217 | orchestrator | TASK [neutron : Copying over extra ml2 plugins] ********************************
2026-02-08 04:07:40.549225 | orchestrator | Sunday 08 February 2026 04:07:32 +0000 (0:00:02.695) 0:01:59.173 *******
2026-02-08 04:07:40.549233 | orchestrator | skipping: [testbed-node-1]
2026-02-08 04:07:40.549241 | orchestrator | skipping: [testbed-node-0]
2026-02-08 04:07:40.549249 | orchestrator | skipping: [testbed-node-2]
2026-02-08 04:07:40.549258 | orchestrator | skipping: [testbed-node-3]
2026-02-08 04:07:40.549267 | orchestrator | skipping: [testbed-node-4]
2026-02-08 04:07:40.549276 | orchestrator | skipping: [testbed-node-5]
2026-02-08 04:07:40.549285 | orchestrator |
2026-02-08 04:07:40.549291 | orchestrator | TASK [neutron : Copying over neutron-tls-proxy.cfg] ****************************
2026-02-08 04:07:40.549296 | orchestrator | Sunday 08 February 2026 04:07:35 +0000 (0:00:02.290) 0:02:01.869 *******
2026-02-08 04:07:40.549301 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2026-02-08 04:07:40.549316 | orchestrator | skipping: [testbed-node-1]
2026-02-08 04:07:40.549322 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2026-02-08 04:07:40.549327 | orchestrator | skipping: [testbed-node-0]
2026-02-08 04:07:40.549332 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2026-02-08 04:07:40.549337 | orchestrator | skipping: [testbed-node-2]
2026-02-08 04:07:40.549342 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2026-02-08 04:07:40.549347 | orchestrator | skipping: [testbed-node-3]
2026-02-08 04:07:40.549352 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2026-02-08 04:07:40.549357 | orchestrator | skipping: [testbed-node-4]
2026-02-08 04:07:40.549362 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2026-02-08 04:07:40.549367 | orchestrator | skipping: [testbed-node-5]
2026-02-08 04:07:40.549372 | orchestrator |
2026-02-08 04:07:40.549378 | orchestrator | TASK [neutron : Copying over neutron_taas.conf] ********************************
2026-02-08 04:07:40.549383 | orchestrator | Sunday 08 February 2026 04:07:37 +0000 (0:00:02.290) 0:02:04.159 *******
2026-02-08 04:07:40.549408 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-02-08 04:07:43.223078 | orchestrator | skipping: [testbed-node-0]
2026-02-08 04:07:43.223236 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-02-08 04:07:43.223258 | orchestrator | skipping: [testbed-node-2]
2026-02-08 04:07:43.223270 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-02-08 04:07:43.223311 | orchestrator | skipping: [testbed-node-1]
2026-02-08 04:07:43.223324 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-02-08 04:07:43.223336 | orchestrator | skipping: [testbed-node-3]
2026-02-08 04:07:43.223347 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-02-08 04:07:43.223358 | orchestrator | skipping: [testbed-node-4]
2026-02-08 04:07:43.223402 | orchestrator | skipping: [testbed-node-5] =>
(item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-08 04:07:43.223415 | orchestrator | skipping: [testbed-node-5] 2026-02-08 04:07:43.223426 | orchestrator | 2026-02-08 04:07:43.223438 | orchestrator | TASK [neutron : Check neutron containers] ************************************** 2026-02-08 04:07:43.223450 | orchestrator | Sunday 08 February 2026 04:07:40 +0000 (0:00:02.672) 0:02:06.831 ******* 2026-02-08 04:07:43.223462 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 
'listen_port': '9696'}}}}) 2026-02-08 04:07:43.223483 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-08 04:07:43.223495 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-08 04:07:43.223512 | orchestrator | changed: [testbed-node-3] => (item={'key': 
'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-02-08 04:07:43.223532 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-02-08 04:09:52.856738 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 
'/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-02-08 04:09:52.856928 | orchestrator | 2026-02-08 04:09:52.856960 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2026-02-08 04:09:52.856974 | orchestrator | Sunday 08 February 2026 04:07:43 +0000 (0:00:02.672) 0:02:09.504 ******* 2026-02-08 04:09:52.856986 | orchestrator | skipping: [testbed-node-0] 2026-02-08 04:09:52.856998 | orchestrator | skipping: [testbed-node-1] 2026-02-08 04:09:52.857009 | orchestrator | skipping: [testbed-node-2] 2026-02-08 04:09:52.857020 | orchestrator | skipping: [testbed-node-3] 2026-02-08 04:09:52.857031 | orchestrator | skipping: [testbed-node-4] 2026-02-08 04:09:52.857045 | orchestrator | skipping: [testbed-node-5] 2026-02-08 04:09:52.857063 | orchestrator | 2026-02-08 04:09:52.857078 | orchestrator | TASK [neutron : Creating Neutron database] ************************************* 2026-02-08 04:09:52.857089 | orchestrator | Sunday 08 February 2026 04:07:44 +0000 (0:00:00.832) 0:02:10.336 ******* 2026-02-08 04:09:52.857135 | orchestrator | changed: [testbed-node-0] 2026-02-08 04:09:52.857148 | orchestrator | 2026-02-08 04:09:52.857159 | orchestrator | TASK [neutron : Creating Neutron database user and setting permissions] ******** 2026-02-08 04:09:52.857170 | orchestrator | Sunday 08 February 2026 04:07:46 +0000 (0:00:02.012) 0:02:12.348 ******* 2026-02-08 04:09:52.857181 | orchestrator | changed: [testbed-node-0] 2026-02-08 04:09:52.857191 | orchestrator | 2026-02-08 04:09:52.857202 | orchestrator | TASK [neutron : Running Neutron bootstrap container] *************************** 2026-02-08 04:09:52.857213 | orchestrator | Sunday 08 February 2026 04:07:48 +0000 (0:00:02.175) 
0:02:14.524 ******* 2026-02-08 04:09:52.857224 | orchestrator | changed: [testbed-node-0] 2026-02-08 04:09:52.857237 | orchestrator | 2026-02-08 04:09:52.857251 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2026-02-08 04:09:52.857265 | orchestrator | Sunday 08 February 2026 04:08:27 +0000 (0:00:38.984) 0:02:53.508 ******* 2026-02-08 04:09:52.857278 | orchestrator | 2026-02-08 04:09:52.857291 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2026-02-08 04:09:52.857304 | orchestrator | Sunday 08 February 2026 04:08:27 +0000 (0:00:00.105) 0:02:53.614 ******* 2026-02-08 04:09:52.857318 | orchestrator | 2026-02-08 04:09:52.857331 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2026-02-08 04:09:52.857344 | orchestrator | Sunday 08 February 2026 04:08:27 +0000 (0:00:00.086) 0:02:53.701 ******* 2026-02-08 04:09:52.857358 | orchestrator | 2026-02-08 04:09:52.857371 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2026-02-08 04:09:52.857384 | orchestrator | Sunday 08 February 2026 04:08:27 +0000 (0:00:00.084) 0:02:53.785 ******* 2026-02-08 04:09:52.857395 | orchestrator | 2026-02-08 04:09:52.857406 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2026-02-08 04:09:52.857417 | orchestrator | Sunday 08 February 2026 04:08:27 +0000 (0:00:00.090) 0:02:53.875 ******* 2026-02-08 04:09:52.857427 | orchestrator | 2026-02-08 04:09:52.857438 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2026-02-08 04:09:52.857449 | orchestrator | Sunday 08 February 2026 04:08:27 +0000 (0:00:00.075) 0:02:53.951 ******* 2026-02-08 04:09:52.857460 | orchestrator | 2026-02-08 04:09:52.857470 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-server container] ******************* 2026-02-08 
04:09:52.857487 | orchestrator | Sunday 08 February 2026 04:08:27 +0000 (0:00:00.073) 0:02:54.024 ******* 2026-02-08 04:09:52.857506 | orchestrator | changed: [testbed-node-0] 2026-02-08 04:09:52.857524 | orchestrator | changed: [testbed-node-1] 2026-02-08 04:09:52.857560 | orchestrator | changed: [testbed-node-2] 2026-02-08 04:09:52.857580 | orchestrator | 2026-02-08 04:09:52.857598 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-ovn-metadata-agent container] ******* 2026-02-08 04:09:52.857618 | orchestrator | Sunday 08 February 2026 04:08:51 +0000 (0:00:23.936) 0:03:17.961 ******* 2026-02-08 04:09:52.857637 | orchestrator | changed: [testbed-node-4] 2026-02-08 04:09:52.857655 | orchestrator | changed: [testbed-node-5] 2026-02-08 04:09:52.857693 | orchestrator | changed: [testbed-node-3] 2026-02-08 04:09:52.857712 | orchestrator | 2026-02-08 04:09:52.857726 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-08 04:09:52.857739 | orchestrator | testbed-node-0 : ok=26  changed=15  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2026-02-08 04:09:52.857751 | orchestrator | testbed-node-1 : ok=16  changed=8  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0 2026-02-08 04:09:52.857762 | orchestrator | testbed-node-2 : ok=16  changed=8  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0 2026-02-08 04:09:52.857773 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2026-02-08 04:09:52.857804 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2026-02-08 04:09:52.857816 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2026-02-08 04:09:52.857828 | orchestrator | 2026-02-08 04:09:52.857839 | orchestrator | 2026-02-08 04:09:52.857850 | orchestrator | TASKS RECAP ******************************************************************** 
2026-02-08 04:09:52.857861 | orchestrator | Sunday 08 February 2026 04:09:52 +0000 (0:01:00.620) 0:04:18.582 ******* 2026-02-08 04:09:52.857872 | orchestrator | =============================================================================== 2026-02-08 04:09:52.857882 | orchestrator | neutron : Restart neutron-ovn-metadata-agent container ----------------- 60.62s 2026-02-08 04:09:52.857893 | orchestrator | neutron : Running Neutron bootstrap container -------------------------- 38.98s 2026-02-08 04:09:52.857904 | orchestrator | neutron : Restart neutron-server container ----------------------------- 23.94s 2026-02-08 04:09:52.857914 | orchestrator | service-ks-register : neutron | Granting user roles --------------------- 8.02s 2026-02-08 04:09:52.857925 | orchestrator | service-ks-register : neutron | Creating endpoints ---------------------- 6.53s 2026-02-08 04:09:52.857936 | orchestrator | neutron : Copying over neutron.conf ------------------------------------- 5.60s 2026-02-08 04:09:52.857946 | orchestrator | neutron : Copying over neutron_ovn_metadata_agent.ini ------------------- 3.81s 2026-02-08 04:09:52.857957 | orchestrator | service-ks-register : neutron | Creating services ----------------------- 3.80s 2026-02-08 04:09:52.857968 | orchestrator | service-ks-register : neutron | Creating users -------------------------- 3.69s 2026-02-08 04:09:52.857978 | orchestrator | neutron : Copying over ml2_conf.ini ------------------------------------- 3.53s 2026-02-08 04:09:52.857989 | orchestrator | service-cert-copy : neutron | Copying over extra CA certificates -------- 3.22s 2026-02-08 04:09:52.858000 | orchestrator | neutron : Copying over config.json files for services ------------------- 3.19s 2026-02-08 04:09:52.858010 | orchestrator | service-ks-register : neutron | Creating projects ----------------------- 3.17s 2026-02-08 04:09:52.858196 | orchestrator | service-cert-copy : neutron | Copying over backend internal TLS key ----- 2.92s 2026-02-08 
04:09:52.858213 | orchestrator | neutron : Copying over ssh key ------------------------------------------ 2.85s 2026-02-08 04:09:52.858224 | orchestrator | service-ks-register : neutron | Creating roles -------------------------- 2.81s 2026-02-08 04:09:52.858235 | orchestrator | neutron : Ensuring config directories exist ----------------------------- 2.79s 2026-02-08 04:09:52.858245 | orchestrator | neutron : Copying over extra ml2 plugins -------------------------------- 2.70s 2026-02-08 04:09:52.858256 | orchestrator | neutron : Check neutron containers -------------------------------------- 2.67s 2026-02-08 04:09:52.858267 | orchestrator | neutron : Copying over neutron_taas.conf -------------------------------- 2.67s 2026-02-08 04:09:55.454255 | orchestrator | 2026-02-08 04:09:55 | INFO  | Task 779b51dc-1ec8-4832-be4c-11c55d4575da (nova) was prepared for execution. 2026-02-08 04:09:55.454360 | orchestrator | 2026-02-08 04:09:55 | INFO  | It takes a moment until task 779b51dc-1ec8-4832-be4c-11c55d4575da (nova) has been started and output is visible here. 
2026-02-08 04:11:49.975682 | orchestrator | 2026-02-08 04:11:49.975792 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-08 04:11:49.975804 | orchestrator | 2026-02-08 04:11:49.975810 | orchestrator | TASK [Group hosts based on OpenStack release] ********************************** 2026-02-08 04:11:49.975815 | orchestrator | Sunday 08 February 2026 04:09:59 +0000 (0:00:00.266) 0:00:00.266 ******* 2026-02-08 04:11:49.975820 | orchestrator | changed: [testbed-manager] 2026-02-08 04:11:49.975826 | orchestrator | changed: [testbed-node-0] 2026-02-08 04:11:49.975831 | orchestrator | changed: [testbed-node-1] 2026-02-08 04:11:49.975836 | orchestrator | changed: [testbed-node-2] 2026-02-08 04:11:49.975841 | orchestrator | changed: [testbed-node-3] 2026-02-08 04:11:49.975846 | orchestrator | changed: [testbed-node-4] 2026-02-08 04:11:49.975850 | orchestrator | changed: [testbed-node-5] 2026-02-08 04:11:49.975855 | orchestrator | 2026-02-08 04:11:49.975860 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-08 04:11:49.975877 | orchestrator | Sunday 08 February 2026 04:10:00 +0000 (0:00:00.820) 0:00:01.086 ******* 2026-02-08 04:11:49.975882 | orchestrator | changed: [testbed-manager] 2026-02-08 04:11:49.975887 | orchestrator | changed: [testbed-node-0] 2026-02-08 04:11:49.975891 | orchestrator | changed: [testbed-node-1] 2026-02-08 04:11:49.975896 | orchestrator | changed: [testbed-node-2] 2026-02-08 04:11:49.975900 | orchestrator | changed: [testbed-node-3] 2026-02-08 04:11:49.975905 | orchestrator | changed: [testbed-node-4] 2026-02-08 04:11:49.975909 | orchestrator | changed: [testbed-node-5] 2026-02-08 04:11:49.975914 | orchestrator | 2026-02-08 04:11:49.975918 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-08 04:11:49.975923 | orchestrator | Sunday 08 February 2026 04:10:01 +0000 (0:00:01.127) 
0:00:02.214 ******* 2026-02-08 04:11:49.975928 | orchestrator | changed: [testbed-manager] => (item=enable_nova_True) 2026-02-08 04:11:49.975933 | orchestrator | changed: [testbed-node-0] => (item=enable_nova_True) 2026-02-08 04:11:49.975937 | orchestrator | changed: [testbed-node-1] => (item=enable_nova_True) 2026-02-08 04:11:49.975942 | orchestrator | changed: [testbed-node-2] => (item=enable_nova_True) 2026-02-08 04:11:49.975947 | orchestrator | changed: [testbed-node-3] => (item=enable_nova_True) 2026-02-08 04:11:49.975951 | orchestrator | changed: [testbed-node-4] => (item=enable_nova_True) 2026-02-08 04:11:49.975956 | orchestrator | changed: [testbed-node-5] => (item=enable_nova_True) 2026-02-08 04:11:49.975960 | orchestrator | 2026-02-08 04:11:49.975965 | orchestrator | PLAY [Bootstrap nova API databases] ******************************************** 2026-02-08 04:11:49.975969 | orchestrator | 2026-02-08 04:11:49.975974 | orchestrator | TASK [Bootstrap deploy] ******************************************************** 2026-02-08 04:11:49.975979 | orchestrator | Sunday 08 February 2026 04:10:02 +0000 (0:00:00.768) 0:00:02.983 ******* 2026-02-08 04:11:49.975986 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-08 04:11:49.975994 | orchestrator | 2026-02-08 04:11:49.976001 | orchestrator | TASK [nova : Creating Nova databases] ****************************************** 2026-02-08 04:11:49.976007 | orchestrator | Sunday 08 February 2026 04:10:03 +0000 (0:00:00.787) 0:00:03.770 ******* 2026-02-08 04:11:49.976015 | orchestrator | changed: [testbed-node-0] => (item=nova_cell0) 2026-02-08 04:11:49.976022 | orchestrator | changed: [testbed-node-0] => (item=nova_api) 2026-02-08 04:11:49.976028 | orchestrator | 2026-02-08 04:11:49.976034 | orchestrator | TASK [nova : Creating Nova databases user and setting permissions] ************* 2026-02-08 04:11:49.976041 | orchestrator | Sunday 08 February 2026 04:10:07 +0000 (0:00:03.918) 
0:00:07.689 ******* 2026-02-08 04:11:49.976053 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-02-08 04:11:49.976090 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-02-08 04:11:49.976097 | orchestrator | changed: [testbed-node-0] 2026-02-08 04:11:49.976108 | orchestrator | 2026-02-08 04:11:49.976115 | orchestrator | TASK [nova : Ensuring config directories exist] ******************************** 2026-02-08 04:11:49.976146 | orchestrator | Sunday 08 February 2026 04:10:11 +0000 (0:00:04.099) 0:00:11.788 ******* 2026-02-08 04:11:49.976154 | orchestrator | changed: [testbed-node-0] 2026-02-08 04:11:49.976161 | orchestrator | 2026-02-08 04:11:49.976167 | orchestrator | TASK [nova : Copying over config.json files for nova-api-bootstrap] ************ 2026-02-08 04:11:49.976173 | orchestrator | Sunday 08 February 2026 04:10:12 +0000 (0:00:00.709) 0:00:12.498 ******* 2026-02-08 04:11:49.976180 | orchestrator | changed: [testbed-node-0] 2026-02-08 04:11:49.976187 | orchestrator | 2026-02-08 04:11:49.976194 | orchestrator | TASK [nova : Copying over nova.conf for nova-api-bootstrap] ******************** 2026-02-08 04:11:49.976201 | orchestrator | Sunday 08 February 2026 04:10:13 +0000 (0:00:01.332) 0:00:13.831 ******* 2026-02-08 04:11:49.976208 | orchestrator | changed: [testbed-node-0] 2026-02-08 04:11:49.976216 | orchestrator | 2026-02-08 04:11:49.976224 | orchestrator | TASK [nova : include_tasks] **************************************************** 2026-02-08 04:11:49.976232 | orchestrator | Sunday 08 February 2026 04:10:16 +0000 (0:00:02.749) 0:00:16.580 ******* 2026-02-08 04:11:49.976240 | orchestrator | skipping: [testbed-node-0] 2026-02-08 04:11:49.976248 | orchestrator | skipping: [testbed-node-1] 2026-02-08 04:11:49.976256 | orchestrator | skipping: [testbed-node-2] 2026-02-08 04:11:49.976263 | orchestrator | 2026-02-08 04:11:49.976272 | orchestrator | TASK [nova : Running Nova API bootstrap container] ***************************** 
2026-02-08 04:11:49.976280 | orchestrator | Sunday 08 February 2026 04:10:16 +0000 (0:00:00.296) 0:00:16.877 ******* 2026-02-08 04:11:49.976287 | orchestrator | ok: [testbed-node-0] 2026-02-08 04:11:49.976294 | orchestrator | 2026-02-08 04:11:49.976302 | orchestrator | TASK [nova : Create cell0 mappings] ******************************************** 2026-02-08 04:11:49.976309 | orchestrator | Sunday 08 February 2026 04:10:46 +0000 (0:00:30.328) 0:00:47.206 ******* 2026-02-08 04:11:49.976316 | orchestrator | changed: [testbed-node-0] 2026-02-08 04:11:49.976322 | orchestrator | 2026-02-08 04:11:49.976329 | orchestrator | TASK [nova-cell : Get a list of existing cells] ******************************** 2026-02-08 04:11:49.976337 | orchestrator | Sunday 08 February 2026 04:11:00 +0000 (0:00:13.337) 0:01:00.544 ******* 2026-02-08 04:11:49.976344 | orchestrator | ok: [testbed-node-0] 2026-02-08 04:11:49.976351 | orchestrator | 2026-02-08 04:11:49.976358 | orchestrator | TASK [nova-cell : Extract current cell settings from list] ********************* 2026-02-08 04:11:49.976366 | orchestrator | Sunday 08 February 2026 04:11:12 +0000 (0:00:12.116) 0:01:12.660 ******* 2026-02-08 04:11:49.976390 | orchestrator | ok: [testbed-node-0] 2026-02-08 04:11:49.976398 | orchestrator | 2026-02-08 04:11:49.976405 | orchestrator | TASK [nova : Update cell0 mappings] ******************************************** 2026-02-08 04:11:49.976413 | orchestrator | Sunday 08 February 2026 04:11:12 +0000 (0:00:00.683) 0:01:13.344 ******* 2026-02-08 04:11:49.976421 | orchestrator | skipping: [testbed-node-0] 2026-02-08 04:11:49.976429 | orchestrator | 2026-02-08 04:11:49.976436 | orchestrator | TASK [nova : include_tasks] **************************************************** 2026-02-08 04:11:49.976443 | orchestrator | Sunday 08 February 2026 04:11:13 +0000 (0:00:00.482) 0:01:13.826 ******* 2026-02-08 04:11:49.976452 | orchestrator | included: /ansible/roles/nova/tasks/bootstrap_service.yml for 
testbed-node-0, testbed-node-1, testbed-node-2 2026-02-08 04:11:49.976459 | orchestrator | 2026-02-08 04:11:49.976466 | orchestrator | TASK [nova : Running Nova API bootstrap container] ***************************** 2026-02-08 04:11:49.976480 | orchestrator | Sunday 08 February 2026 04:11:14 +0000 (0:00:00.742) 0:01:14.568 ******* 2026-02-08 04:11:49.976488 | orchestrator | ok: [testbed-node-0] 2026-02-08 04:11:49.976496 | orchestrator | 2026-02-08 04:11:49.976504 | orchestrator | TASK [Bootstrap upgrade] ******************************************************* 2026-02-08 04:11:49.976512 | orchestrator | Sunday 08 February 2026 04:11:31 +0000 (0:00:16.949) 0:01:31.518 ******* 2026-02-08 04:11:49.976519 | orchestrator | skipping: [testbed-node-0] 2026-02-08 04:11:49.976527 | orchestrator | skipping: [testbed-node-1] 2026-02-08 04:11:49.976536 | orchestrator | skipping: [testbed-node-2] 2026-02-08 04:11:49.976544 | orchestrator | 2026-02-08 04:11:49.976560 | orchestrator | PLAY [Bootstrap nova cell databases] ******************************************* 2026-02-08 04:11:49.976568 | orchestrator | 2026-02-08 04:11:49.976575 | orchestrator | TASK [Bootstrap deploy] ******************************************************** 2026-02-08 04:11:49.976582 | orchestrator | Sunday 08 February 2026 04:11:31 +0000 (0:00:00.345) 0:01:31.864 ******* 2026-02-08 04:11:49.976590 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-08 04:11:49.976598 | orchestrator | 2026-02-08 04:11:49.976605 | orchestrator | TASK [nova-cell : Creating Nova cell database] ********************************* 2026-02-08 04:11:49.976612 | orchestrator | Sunday 08 February 2026 04:11:32 +0000 (0:00:00.803) 0:01:32.668 ******* 2026-02-08 04:11:49.976618 | orchestrator | skipping: [testbed-node-1] 2026-02-08 04:11:49.976623 | orchestrator | skipping: [testbed-node-2] 2026-02-08 04:11:49.976628 | orchestrator | changed: [testbed-node-0] 2026-02-08 04:11:49.976632 | 
orchestrator |
2026-02-08 04:11:49.976637 | orchestrator | TASK [nova-cell : Creating Nova cell database user and setting permissions] ****
2026-02-08 04:11:49.976641 | orchestrator | Sunday 08 February 2026 04:11:34 +0000 (0:00:01.939) 0:01:34.607 *******
2026-02-08 04:11:49.976646 | orchestrator | skipping: [testbed-node-1]
2026-02-08 04:11:49.976650 | orchestrator | skipping: [testbed-node-2]
2026-02-08 04:11:49.976655 | orchestrator | changed: [testbed-node-0]
2026-02-08 04:11:49.976659 | orchestrator |
2026-02-08 04:11:49.976663 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ******************
2026-02-08 04:11:49.976668 | orchestrator | Sunday 08 February 2026 04:11:36 +0000 (0:00:02.124) 0:01:36.732 *******
2026-02-08 04:11:49.976672 | orchestrator | skipping: [testbed-node-0]
2026-02-08 04:11:49.976677 | orchestrator | skipping: [testbed-node-1]
2026-02-08 04:11:49.976681 | orchestrator | skipping: [testbed-node-2]
2026-02-08 04:11:49.976686 | orchestrator |
2026-02-08 04:11:49.976690 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] *******************
2026-02-08 04:11:49.976695 | orchestrator | Sunday 08 February 2026 04:11:36 +0000 (0:00:00.625) 0:01:37.357 *******
2026-02-08 04:11:49.976699 | orchestrator | skipping: [testbed-node-1] => (item=None)
2026-02-08 04:11:49.976704 | orchestrator | skipping: [testbed-node-1]
2026-02-08 04:11:49.976708 | orchestrator | skipping: [testbed-node-2] => (item=None)
2026-02-08 04:11:49.976713 | orchestrator | skipping: [testbed-node-2]
2026-02-08 04:11:49.976717 | orchestrator | ok: [testbed-node-0] => (item=None)
2026-02-08 04:11:49.976722 | orchestrator | ok: [testbed-node-0 -> {{ service_rabbitmq_delegate_host }}]
2026-02-08 04:11:49.976727 | orchestrator |
2026-02-08 04:11:49.976731 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ******************
2026-02-08 04:11:49.976736 | orchestrator | Sunday 08 February 2026 04:11:44 +0000 (0:00:07.526) 0:01:44.884 *******
2026-02-08 04:11:49.976740 | orchestrator | skipping: [testbed-node-0]
2026-02-08 04:11:49.976744 | orchestrator | skipping: [testbed-node-1]
2026-02-08 04:11:49.976749 | orchestrator | skipping: [testbed-node-2]
2026-02-08 04:11:49.976753 | orchestrator |
2026-02-08 04:11:49.976758 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] *******************
2026-02-08 04:11:49.976762 | orchestrator | Sunday 08 February 2026 04:11:44 +0000 (0:00:00.348) 0:01:45.233 *******
2026-02-08 04:11:49.976767 | orchestrator | skipping: [testbed-node-0] => (item=None)
2026-02-08 04:11:49.976771 | orchestrator | skipping: [testbed-node-0]
2026-02-08 04:11:49.976776 | orchestrator | skipping: [testbed-node-1] => (item=None)
2026-02-08 04:11:49.976780 | orchestrator | skipping: [testbed-node-1]
2026-02-08 04:11:49.976785 | orchestrator | skipping: [testbed-node-2] => (item=None)
2026-02-08 04:11:49.976789 | orchestrator | skipping: [testbed-node-2]
2026-02-08 04:11:49.976794 | orchestrator |
2026-02-08 04:11:49.976798 | orchestrator | TASK [nova-cell : Ensuring config directories exist] ***************************
2026-02-08 04:11:49.976803 | orchestrator | Sunday 08 February 2026 04:11:45 +0000 (0:00:01.115) 0:01:46.348 *******
2026-02-08 04:11:49.976807 | orchestrator | skipping: [testbed-node-1]
2026-02-08 04:11:49.976811 | orchestrator | skipping: [testbed-node-2]
2026-02-08 04:11:49.976820 | orchestrator | changed: [testbed-node-0]
2026-02-08 04:11:49.976824 | orchestrator |
2026-02-08 04:11:49.976829 | orchestrator | TASK [nova-cell : Copying over config.json files for nova-cell-bootstrap] ******
2026-02-08 04:11:49.976833 | orchestrator | Sunday 08 February 2026 04:11:46 +0000 (0:00:00.487) 0:01:46.836 *******
2026-02-08 04:11:49.976838 | orchestrator | skipping: [testbed-node-1]
2026-02-08 04:11:49.976842 | orchestrator | skipping: [testbed-node-2]
2026-02-08 04:11:49.976847 | orchestrator | changed: [testbed-node-0]
2026-02-08 04:11:49.976851 | orchestrator |
2026-02-08 04:11:49.976856 | orchestrator | TASK [nova-cell : Copying over nova.conf for nova-cell-bootstrap] **************
2026-02-08 04:11:49.976860 | orchestrator | Sunday 08 February 2026 04:11:47 +0000 (0:00:01.060) 0:01:47.896 *******
2026-02-08 04:11:49.976865 | orchestrator | skipping: [testbed-node-1]
2026-02-08 04:11:49.976869 | orchestrator | skipping: [testbed-node-2]
2026-02-08 04:11:49.976879 | orchestrator | changed: [testbed-node-0]
2026-02-08 04:13:07.830318 | orchestrator |
2026-02-08 04:13:07.830401 | orchestrator | TASK [nova-cell : Running Nova cell bootstrap container] ***********************
2026-02-08 04:13:07.830408 | orchestrator | Sunday 08 February 2026 04:11:49 +0000 (0:00:02.525) 0:01:50.421 *******
2026-02-08 04:13:07.830413 | orchestrator | skipping: [testbed-node-1]
2026-02-08 04:13:07.830418 | orchestrator | skipping: [testbed-node-2]
2026-02-08 04:13:07.830422 | orchestrator | ok: [testbed-node-0]
2026-02-08 04:13:07.830427 | orchestrator |
2026-02-08 04:13:07.830431 | orchestrator | TASK [nova-cell : Get a list of existing cells] ********************************
2026-02-08 04:13:07.830436 | orchestrator | Sunday 08 February 2026 04:12:10 +0000 (0:00:20.816) 0:02:11.238 *******
2026-02-08 04:13:07.830440 | orchestrator | skipping: [testbed-node-1]
2026-02-08 04:13:07.830444 | orchestrator | skipping: [testbed-node-2]
2026-02-08 04:13:07.830447 | orchestrator | ok: [testbed-node-0]
2026-02-08 04:13:07.830451 | orchestrator |
2026-02-08 04:13:07.830466 | orchestrator | TASK [nova-cell : Extract current cell settings from list] *********************
2026-02-08 04:13:07.830471 | orchestrator | Sunday 08 February 2026 04:12:23 +0000 (0:00:12.348) 0:02:23.587 *******
2026-02-08 04:13:07.830474 | orchestrator | ok: [testbed-node-0]
2026-02-08 04:13:07.830478 | orchestrator | skipping: [testbed-node-1]
2026-02-08 04:13:07.830482 | orchestrator | skipping: [testbed-node-2]
2026-02-08 04:13:07.830486 | orchestrator |
2026-02-08 04:13:07.830490 | orchestrator | TASK [nova-cell : Create cell] *************************************************
2026-02-08 04:13:07.830494 | orchestrator | Sunday 08 February 2026 04:12:24 +0000 (0:00:01.113) 0:02:24.700 *******
2026-02-08 04:13:07.830498 | orchestrator | skipping: [testbed-node-1]
2026-02-08 04:13:07.830501 | orchestrator | skipping: [testbed-node-2]
2026-02-08 04:13:07.830505 | orchestrator | changed: [testbed-node-0]
2026-02-08 04:13:07.830509 | orchestrator |
2026-02-08 04:13:07.830513 | orchestrator | TASK [nova-cell : Update cell] *************************************************
2026-02-08 04:13:07.830517 | orchestrator | Sunday 08 February 2026 04:12:36 +0000 (0:00:12.473) 0:02:37.174 *******
2026-02-08 04:13:07.830521 | orchestrator | skipping: [testbed-node-0]
2026-02-08 04:13:07.830525 | orchestrator | skipping: [testbed-node-1]
2026-02-08 04:13:07.830529 | orchestrator | skipping: [testbed-node-2]
2026-02-08 04:13:07.830533 | orchestrator |
2026-02-08 04:13:07.830537 | orchestrator | TASK [Bootstrap upgrade] *******************************************************
2026-02-08 04:13:07.830541 | orchestrator | Sunday 08 February 2026 04:12:38 +0000 (0:00:01.407) 0:02:38.581 *******
2026-02-08 04:13:07.830544 | orchestrator | skipping: [testbed-node-0]
2026-02-08 04:13:07.830548 | orchestrator | skipping: [testbed-node-1]
2026-02-08 04:13:07.830552 | orchestrator | skipping: [testbed-node-2]
2026-02-08 04:13:07.830556 | orchestrator |
2026-02-08 04:13:07.830560 | orchestrator | PLAY [Apply role nova] *********************************************************
2026-02-08 04:13:07.830564 | orchestrator |
2026-02-08 04:13:07.830567 | orchestrator | TASK [nova : include_tasks] ****************************************************
2026-02-08 04:13:07.830571 | orchestrator | Sunday 08 February 2026 04:12:38 +0000 (0:00:00.346) 0:02:38.927 *******
2026-02-08 04:13:07.830575 |
orchestrator | included: /ansible/roles/nova/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-08 04:13:07.830597 | orchestrator |
2026-02-08 04:13:07.830601 | orchestrator | TASK [service-ks-register : nova | Creating services] **************************
2026-02-08 04:13:07.830605 | orchestrator | Sunday 08 February 2026 04:12:39 +0000 (0:00:00.867) 0:02:39.795 *******
2026-02-08 04:13:07.830609 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy (compute_legacy))
2026-02-08 04:13:07.830613 | orchestrator | changed: [testbed-node-0] => (item=nova (compute))
2026-02-08 04:13:07.830617 | orchestrator |
2026-02-08 04:13:07.830621 | orchestrator | TASK [service-ks-register : nova | Creating endpoints] *************************
2026-02-08 04:13:07.830625 | orchestrator | Sunday 08 February 2026 04:12:42 +0000 (0:00:03.245) 0:02:43.041 *******
2026-02-08 04:13:07.830629 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api-int.testbed.osism.xyz:8774/v2/%(tenant_id)s -> internal)
2026-02-08 04:13:07.830635 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api.testbed.osism.xyz:8774/v2/%(tenant_id)s -> public)
2026-02-08 04:13:07.830639 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api-int.testbed.osism.xyz:8774/v2.1 -> internal)
2026-02-08 04:13:07.830643 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api.testbed.osism.xyz:8774/v2.1 -> public)
2026-02-08 04:13:07.830647 | orchestrator |
2026-02-08 04:13:07.830651 | orchestrator | TASK [service-ks-register : nova | Creating projects] **************************
2026-02-08 04:13:07.830667 | orchestrator | Sunday 08 February 2026 04:12:48 +0000 (0:00:06.326) 0:02:49.367 *******
2026-02-08 04:13:07.830671 | orchestrator | ok: [testbed-node-0] => (item=service)
2026-02-08 04:13:07.830676 | orchestrator |
2026-02-08 04:13:07.830686 | orchestrator | TASK [service-ks-register : nova | Creating users] *****************************
2026-02-08 04:13:07.830690 | orchestrator | Sunday 08 February 2026 04:12:52 +0000 (0:00:03.124) 0:02:52.492 *******
2026-02-08 04:13:07.830694 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-02-08 04:13:07.830698 | orchestrator | changed: [testbed-node-0] => (item=nova -> service)
2026-02-08 04:13:07.830702 | orchestrator |
2026-02-08 04:13:07.830705 | orchestrator | TASK [service-ks-register : nova | Creating roles] *****************************
2026-02-08 04:13:07.830709 | orchestrator | Sunday 08 February 2026 04:12:55 +0000 (0:00:03.925) 0:02:56.418 *******
2026-02-08 04:13:07.830713 | orchestrator | ok: [testbed-node-0] => (item=admin)
2026-02-08 04:13:07.830717 | orchestrator |
2026-02-08 04:13:07.830721 | orchestrator | TASK [service-ks-register : nova | Granting user roles] ************************
2026-02-08 04:13:07.830725 | orchestrator | Sunday 08 February 2026 04:12:59 +0000 (0:00:03.236) 0:02:59.655 *******
2026-02-08 04:13:07.830729 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> admin)
2026-02-08 04:13:07.830733 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> service)
2026-02-08 04:13:07.830737 | orchestrator |
2026-02-08 04:13:07.830740 | orchestrator | TASK [nova : Ensuring config directories exist] ********************************
2026-02-08 04:13:07.830754 | orchestrator | Sunday 08 February 2026 04:13:06 +0000 (0:00:07.234) 0:03:06.889 *******
2026-02-08 04:13:07.830761 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {},
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-02-08 04:13:07.830829 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-02-08 04:13:07.830839 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-02-08 04:13:07.830849 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 
2026-02-08 04:13:12.515857 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-02-08 04:13:12.516022 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-02-08 04:13:12.516125 | orchestrator |
2026-02-08 04:13:12.516152 | orchestrator | TASK [nova : Check if policies shall be overwritten] ***************************
2026-02-08 04:13:12.516173 | orchestrator | Sunday 08 February 2026 04:13:07 +0000 (0:00:01.390) 0:03:08.279 *******
2026-02-08 04:13:12.516191 | orchestrator | skipping: [testbed-node-0]
2026-02-08 04:13:12.516207 | orchestrator |
2026-02-08 04:13:12.516222 | orchestrator | TASK [nova : Set nova policy file] *********************************************
2026-02-08 04:13:12.516239 | orchestrator | Sunday 08 February 2026 04:13:07 +0000 (0:00:00.133) 0:03:08.412 *******
2026-02-08 04:13:12.516255 | orchestrator | skipping: [testbed-node-0]
2026-02-08 04:13:12.516271 | orchestrator | skipping: [testbed-node-1]
2026-02-08 04:13:12.516286 | orchestrator | skipping: [testbed-node-2]
2026-02-08 04:13:12.516301 | orchestrator |
2026-02-08 04:13:12.516316 | orchestrator | TASK [nova : Check for vendordata file] ****************************************
2026-02-08 04:13:12.516334 | orchestrator | Sunday 08 February 2026 04:13:08 +0000 (0:00:00.316) 0:03:08.729 *******
2026-02-08 04:13:12.516349 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-02-08 04:13:12.516366 | orchestrator |
2026-02-08 04:13:12.516382 | orchestrator | TASK [nova : Set vendordata file path] *****************************************
2026-02-08 04:13:12.516399 | orchestrator | Sunday 08 February 2026 04:13:08 +0000 (0:00:00.714) 0:03:09.444 *******
2026-02-08 04:13:12.516416 | orchestrator | skipping: [testbed-node-0]
2026-02-08 04:13:12.516435 | orchestrator | skipping: [testbed-node-1]
2026-02-08 04:13:12.516452 | orchestrator | skipping: [testbed-node-2]
2026-02-08 04:13:12.516466 | orchestrator |
2026-02-08 04:13:12.516482 | orchestrator | TASK [nova : include_tasks] ****************************************************
2026-02-08 04:13:12.516499 | orchestrator | Sunday 08 February 2026 04:13:09 +0000 (0:00:00.538) 0:03:09.983 *******
2026-02-08 04:13:12.516520 | orchestrator | included: /ansible/roles/nova/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-08 04:13:12.516536 | orchestrator |
2026-02-08 04:13:12.516552 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] ***********
2026-02-08 04:13:12.516572 | orchestrator | Sunday 08 February 2026 04:13:10 +0000 (0:00:00.594) 0:03:10.577 *******
2026-02-08 04:13:12.516593 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes':
['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-02-08 04:13:12.516669 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-02-08 04:13:12.516692 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-02-08 04:13:12.516712 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-02-08 04:13:12.516729 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-02-08 04:13:12.516744 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-02-08 04:13:12.516769 | orchestrator | 2026-02-08 04:13:12.516795 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] *** 2026-02-08 04:13:14.284210 | orchestrator | Sunday 08 February 2026 04:13:12 +0000 (0:00:02.384) 0:03:12.962 ******* 2026-02-08 04:13:14.284344 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': 
True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-02-08 04:13:14.284368 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-08 04:13:14.284382 | orchestrator | skipping: [testbed-node-0] 2026-02-08 04:13:14.284396 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 
'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-02-08 04:13:14.284408 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-08 04:13:14.284443 | orchestrator | skipping: [testbed-node-1] 2026-02-08 04:13:14.284484 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': 
True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-02-08 04:13:14.284498 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-08 04:13:14.284510 | orchestrator | skipping: [testbed-node-2] 2026-02-08 04:13:14.284521 | orchestrator | 2026-02-08 04:13:14.284533 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ******** 2026-02-08 04:13:14.284545 | orchestrator | Sunday 08 February 2026 04:13:13 +0000 (0:00:00.939) 0:03:13.902 
******* 2026-02-08 04:13:14.284559 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-02-08 04:13:14.284571 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-08 04:13:14.284591 | orchestrator | skipping: [testbed-node-0] 
2026-02-08 04:13:14.284617 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-02-08 04:13:16.591149 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-08 04:13:16.591253 | orchestrator | skipping: [testbed-node-1] 2026-02-08 
04:13:16.591274 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-02-08 04:13:16.591289 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-08 04:13:16.591326 | orchestrator | skipping: [testbed-node-2] 2026-02-08 
04:13:16.591338 | orchestrator | 2026-02-08 04:13:16.591351 | orchestrator | TASK [nova : Copying over config.json files for services] ********************** 2026-02-08 04:13:16.591363 | orchestrator | Sunday 08 February 2026 04:13:14 +0000 (0:00:00.833) 0:03:14.736 ******* 2026-02-08 04:13:16.591389 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-02-08 04:13:16.591424 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-02-08 04:13:16.591439 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-02-08 04:13:16.591459 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-02-08 04:13:16.591476 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-02-08 04:13:16.591497 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 
'timeout': '30'}}}) 2026-02-08 04:13:23.376888 | orchestrator | 2026-02-08 04:13:23.376992 | orchestrator | TASK [nova : Copying over nova.conf] ******************************************* 2026-02-08 04:13:23.377007 | orchestrator | Sunday 08 February 2026 04:13:16 +0000 (0:00:02.299) 0:03:17.036 ******* 2026-02-08 04:13:23.377023 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-02-08 04:13:23.377077 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-02-08 04:13:23.377126 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 
'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-02-08 04:13:23.377157 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-02-08 04:13:23.377170 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-02-08 04:13:23.377180 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-02-08 04:13:23.377198 | orchestrator | 2026-02-08 04:13:23.377208 | orchestrator | TASK [nova : Copying over existing policy file] ******************************** 2026-02-08 04:13:23.377218 | orchestrator | Sunday 08 February 2026 04:13:22 +0000 (0:00:06.021) 0:03:23.057 ******* 2026-02-08 04:13:23.377229 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-02-08 04:13:23.377245 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-08 04:13:23.377256 | orchestrator | skipping: [testbed-node-0] 2026-02-08 04:13:23.377276 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-02-08 04:13:27.752665 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-08 04:13:27.752816 | orchestrator | skipping: [testbed-node-1] 2026-02-08 04:13:27.752851 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-02-08 04:13:27.752878 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-02-08 04:13:27.752898 | orchestrator | skipping: [testbed-node-2]
2026-02-08 04:13:27.752918 | orchestrator |
2026-02-08 04:13:27.752939 | orchestrator | TASK [nova : Copying over nova-api-wsgi.conf] **********************************
2026-02-08 04:13:27.752960 | orchestrator | Sunday 08 February 2026 04:13:23 +0000 (0:00:00.770) 0:03:23.827 *******
2026-02-08 04:13:27.752989 | orchestrator | changed: [testbed-node-1]
2026-02-08 04:13:27.753001 | orchestrator | changed: [testbed-node-0]
2026-02-08 04:13:27.753012 | orchestrator | changed: [testbed-node-2]
2026-02-08 04:13:27.753023 | orchestrator |
2026-02-08 04:13:27.753066 | orchestrator | TASK [nova : Copying over vendordata file] *************************************
2026-02-08 04:13:27.753079 | orchestrator | Sunday 08 February 2026 04:13:24 +0000 (0:00:01.551) 0:03:25.378 *******
2026-02-08 04:13:27.753090 | orchestrator | skipping: [testbed-node-0]
2026-02-08 04:13:27.753101 | orchestrator | skipping: [testbed-node-1]
2026-02-08 04:13:27.753118 | orchestrator | skipping: [testbed-node-2]
2026-02-08 04:13:27.753136 | orchestrator |
2026-02-08 04:13:27.753154 | orchestrator | TASK [nova : Check nova containers] ********************************************
2026-02-08 04:13:27.753173 | orchestrator | Sunday 08 February 2026 04:13:25 +0000 (0:00:00.363) 0:03:25.741 *******
2026-02-08 04:13:27.753223 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro',
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-02-08 04:13:27.753262 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': 
'8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-02-08 04:13:27.753286 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-02-08 04:13:27.753302 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-02-08 04:13:27.753316 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-02-08 04:13:27.753345 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-02-08 04:14:06.775829 | orchestrator |
2026-02-08 04:14:06.775907 | orchestrator | TASK [nova : Flush handlers] ***************************************************
2026-02-08 04:14:06.775915 | orchestrator | Sunday 08 February 2026 04:13:27 +0000 (0:00:02.008) 0:03:27.750 *******
2026-02-08 04:14:06.775919 | orchestrator |
2026-02-08 04:14:06.775924 | orchestrator | TASK [nova : Flush handlers] ***************************************************
2026-02-08 04:14:06.775929 | orchestrator | Sunday 08 February 2026 04:13:27 +0000 (0:00:00.143) 0:03:27.893 *******
2026-02-08 04:14:06.775934 | orchestrator |
2026-02-08 04:14:06.775938 | orchestrator | TASK [nova : Flush handlers] ***************************************************
2026-02-08 04:14:06.775942 | orchestrator | Sunday 08 February 2026 04:13:27 +0000 (0:00:00.143) 0:03:28.037 *******
2026-02-08 04:14:06.775947 | orchestrator |
2026-02-08 04:14:06.775951 | orchestrator | RUNNING HANDLER [nova : Restart nova-scheduler container] **********************
2026-02-08 04:14:06.775955 | orchestrator | Sunday 08 February 2026 04:13:27 +0000 (0:00:00.162) 0:03:28.199 *******
2026-02-08 04:14:06.775960 | orchestrator | changed: [testbed-node-0]
2026-02-08 04:14:06.775965 | orchestrator | changed: [testbed-node-2]
2026-02-08 04:14:06.775969 | orchestrator | changed: [testbed-node-1]
2026-02-08 04:14:06.775973 | orchestrator |
2026-02-08 04:14:06.775977 | orchestrator | RUNNING HANDLER [nova : Restart nova-api container] ****************************
2026-02-08 04:14:06.775982 | orchestrator | Sunday 08 February 2026 04:13:49 +0000 (0:00:22.023) 0:03:50.223 *******
2026-02-08 04:14:06.775986 | orchestrator | changed: [testbed-node-0]
2026-02-08 04:14:06.775990 | orchestrator | changed: [testbed-node-1]
2026-02-08 04:14:06.775994 | orchestrator | changed: [testbed-node-2]
2026-02-08 04:14:06.775998 | orchestrator |
2026-02-08 04:14:06.776002 | orchestrator | PLAY [Apply role nova-cell] ****************************************************
2026-02-08 04:14:06.776007 | orchestrator |
2026-02-08 04:14:06.776011 | orchestrator | TASK [nova-cell : include_tasks] ***********************************************
2026-02-08 04:14:06.776015 | orchestrator | Sunday 08 February 2026 04:13:54 +0000 (0:00:05.026) 0:03:55.250 *******
2026-02-08 04:14:06.776053 | orchestrator | included: /ansible/roles/nova-cell/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-08 04:14:06.776060 | orchestrator |
2026-02-08 04:14:06.776064 | orchestrator | TASK [nova-cell : include_tasks] ***********************************************
2026-02-08 04:14:06.776069 | orchestrator | Sunday 08 February 2026 04:13:56 +0000 (0:00:01.458) 0:03:56.708 *******
2026-02-08 04:14:06.776073 | orchestrator | skipping: [testbed-node-3]
2026-02-08 04:14:06.776077 | orchestrator | skipping: [testbed-node-4]
2026-02-08 04:14:06.776082 | orchestrator | skipping: [testbed-node-5]
2026-02-08 04:14:06.776086 | orchestrator | skipping: [testbed-node-0]
2026-02-08 04:14:06.776090 | orchestrator | skipping: [testbed-node-1]
2026-02-08 04:14:06.776094 | orchestrator | skipping: [testbed-node-2]
2026-02-08 04:14:06.776098 | orchestrator |
2026-02-08 04:14:06.776102 | orchestrator | TASK [Load and persist br_netfilter module] ************************************
2026-02-08 04:14:06.776107 | orchestrator | Sunday 08 February 2026 04:13:57 +0000 (0:00:00.817) 0:03:57.526 *******
2026-02-08 04:14:06.776111 | orchestrator | skipping: [testbed-node-0]
2026-02-08 04:14:06.776115 | orchestrator | skipping: [testbed-node-1]
2026-02-08 04:14:06.776119 | orchestrator | skipping: [testbed-node-2]
2026-02-08 04:14:06.776148 | orchestrator | included: module-load for testbed-node-3, testbed-node-4, testbed-node-5
2026-02-08 04:14:06.776154 | orchestrator |
2026-02-08 04:14:06.776158 | orchestrator | TASK [module-load : Load modules] **********************************************
2026-02-08 04:14:06.776162 | orchestrator | Sunday 08 February 2026 04:13:57 +0000 (0:00:00.918) 0:03:58.407 *******
2026-02-08 04:14:06.776167 | orchestrator | ok: [testbed-node-3] => (item=br_netfilter)
2026-02-08 04:14:06.776171 | orchestrator | ok: [testbed-node-4] => (item=br_netfilter)
2026-02-08 04:14:06.776175 | orchestrator | ok: [testbed-node-5] => (item=br_netfilter)
2026-02-08 04:14:06.776179 | orchestrator |
2026-02-08 04:14:06.776183 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************
2026-02-08 04:14:06.776188 | orchestrator | Sunday 08 February 2026 04:13:58 +0000 (0:00:00.918) 0:03:59.325 *******
2026-02-08 04:14:06.776192 | orchestrator | changed: [testbed-node-3] => (item=br_netfilter)
2026-02-08 04:14:06.776196 | orchestrator | changed: [testbed-node-4] => (item=br_netfilter)
2026-02-08 04:14:06.776200 | orchestrator | changed: [testbed-node-5] => (item=br_netfilter)
2026-02-08 04:14:06.776204 | orchestrator |
2026-02-08 04:14:06.776208 | orchestrator | TASK [module-load : Drop module persistence] ***********************************
2026-02-08 04:14:06.776212 | orchestrator | Sunday 08 February 2026 04:14:00 +0000 (0:00:01.183) 0:04:00.509 *******
2026-02-08 04:14:06.776216 | orchestrator | skipping: [testbed-node-3] => (item=br_netfilter)
2026-02-08 04:14:06.776221 | orchestrator | skipping: [testbed-node-3]
2026-02-08 04:14:06.776225 | orchestrator | skipping: [testbed-node-4] => (item=br_netfilter)
2026-02-08 04:14:06.776229 | orchestrator | skipping: [testbed-node-4]
2026-02-08 04:14:06.776233 | orchestrator | skipping: [testbed-node-5] => (item=br_netfilter)
2026-02-08 04:14:06.776237 | orchestrator | skipping: [testbed-node-5]
2026-02-08 04:14:06.776241 | orchestrator |
2026-02-08 04:14:06.776245 | orchestrator | TASK [nova-cell : Enable bridge-nf-call sysctl variables] **********************
2026-02-08 04:14:06.776249 | orchestrator | Sunday 08 February 2026 04:14:00 +0000 (0:00:00.551) 0:04:01.060 *******
2026-02-08 04:14:06.776253 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables)
2026-02-08 04:14:06.776257 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables)
2026-02-08 04:14:06.776261 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)
2026-02-08 04:14:06.776265 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-02-08 04:14:06.776269 | orchestrator | skipping: [testbed-node-0]
2026-02-08 04:14:06.776273 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)
2026-02-08 04:14:06.776277 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-02-08 04:14:06.776282 | orchestrator | skipping: [testbed-node-1]
2026-02-08 04:14:06.776296 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables)
2026-02-08 04:14:06.776300 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)
2026-02-08 04:14:06.776305 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-02-08 04:14:06.776309 | orchestrator | skipping: [testbed-node-2]
2026-02-08 04:14:06.776313 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-02-08 04:14:06.776317 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-02-08 04:14:06.776321 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-02-08 04:14:06.776325 | orchestrator |
2026-02-08 04:14:06.776329 | orchestrator | TASK [nova-cell : Install udev kolla kvm rules] ********************************
2026-02-08 04:14:06.776333 | orchestrator | Sunday 08 February 2026 04:14:01 +0000 (0:00:01.266) 0:04:02.327 *******
2026-02-08 04:14:06.776337 | orchestrator | skipping: [testbed-node-0]
2026-02-08 04:14:06.776341 | orchestrator | skipping: [testbed-node-1]
2026-02-08 04:14:06.776345 | orchestrator | skipping: [testbed-node-2]
2026-02-08 04:14:06.776354 | orchestrator | changed: [testbed-node-3]
2026-02-08 04:14:06.776358 | orchestrator | changed: [testbed-node-4]
2026-02-08 04:14:06.776362 | orchestrator | changed: [testbed-node-5]
2026-02-08 04:14:06.776366 | orchestrator |
2026-02-08 04:14:06.776371 | orchestrator | TASK [nova-cell : Mask qemu-kvm service] ***************************************
2026-02-08 04:14:06.776375 | orchestrator |
Sunday 08 February 2026 04:14:03 +0000 (0:00:01.178) 0:04:03.505 ******* 2026-02-08 04:14:06.776379 | orchestrator | skipping: [testbed-node-0] 2026-02-08 04:14:06.776383 | orchestrator | skipping: [testbed-node-1] 2026-02-08 04:14:06.776387 | orchestrator | skipping: [testbed-node-2] 2026-02-08 04:14:06.776391 | orchestrator | changed: [testbed-node-3] 2026-02-08 04:14:06.776396 | orchestrator | changed: [testbed-node-4] 2026-02-08 04:14:06.776401 | orchestrator | changed: [testbed-node-5] 2026-02-08 04:14:06.776406 | orchestrator | 2026-02-08 04:14:06.776411 | orchestrator | TASK [nova-cell : Ensuring config directories exist] *************************** 2026-02-08 04:14:06.776416 | orchestrator | Sunday 08 February 2026 04:14:04 +0000 (0:00:01.774) 0:04:05.280 ******* 2026-02-08 04:14:06.776425 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-02-08 04:14:06.776433 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 
'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-02-08 04:14:06.776438 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-02-08 04:14:06.776448 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': 
{'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-02-08 04:14:12.087688 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-02-08 04:14:12.087805 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-02-08 04:14:12.087855 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-02-08 04:14:12.087875 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-02-08 04:14:12.087893 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-02-08 04:14:12.087911 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-02-08 04:14:12.087981 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-02-08 04:14:12.088003 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-02-08 04:14:12.088149 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': 
['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-02-08 04:14:12.088170 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-02-08 04:14:12.088181 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-02-08 04:14:12.088191 | orchestrator | 2026-02-08 04:14:12.088204 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-02-08 04:14:12.088215 | 
orchestrator | Sunday 08 February 2026 04:14:07 +0000 (0:00:02.351) 0:04:07.631 ******* 2026-02-08 04:14:12.088242 | orchestrator | included: /ansible/roles/nova-cell/tasks/copy-certs.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-08 04:14:12.088262 | orchestrator | 2026-02-08 04:14:12.088279 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] *********** 2026-02-08 04:14:12.088297 | orchestrator | Sunday 08 February 2026 04:14:08 +0000 (0:00:01.572) 0:04:09.203 ******* 2026-02-08 04:14:12.088329 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-02-08 04:14:12.570715 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-02-08 04:14:12.570801 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-02-08 04:14:12.570810 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-02-08 04:14:12.570816 | 
orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-02-08 04:14:12.570835 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-02-08 04:14:12.570851 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-02-08 04:14:12.570855 | orchestrator | changed: [testbed-node-4] => 
(item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-02-08 04:14:12.570862 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-02-08 04:14:12.570867 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-02-08 
04:14:12.570872 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-02-08 04:14:12.570879 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-02-08 04:14:12.570883 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-02-08 04:14:12.570891 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': 
{'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-02-08 04:14:14.472559 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-02-08 04:14:14.472665 | orchestrator | 2026-02-08 04:14:14.472682 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] *** 2026-02-08 04:14:14.472695 | orchestrator | Sunday 08 February 2026 04:14:12 +0000 (0:00:03.819) 0:04:13.022 ******* 2026-02-08 04:14:14.472709 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 
'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-02-08 04:14:14.472744 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-02-08 04:14:14.472757 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 
'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-02-08 04:14:14.472769 | orchestrator | skipping: [testbed-node-3] 2026-02-08 04:14:14.472801 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-02-08 04:14:14.472820 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 
67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-02-08 04:14:14.472833 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-02-08 04:14:14.472852 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-02-08 04:14:14.472864 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 
'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-02-08 04:14:14.472875 | orchestrator | skipping: [testbed-node-5] 2026-02-08 04:14:14.472887 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-02-08 04:14:14.472898 | orchestrator | skipping: [testbed-node-4] 2026-02-08 04:14:14.472918 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-02-08 04:14:16.285481 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 
'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-02-08 04:14:16.285587 | orchestrator | skipping: [testbed-node-0] 2026-02-08 04:14:16.285609 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-02-08 04:14:16.285664 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-02-08 04:14:16.285689 | orchestrator | skipping: [testbed-node-1] 2026-02-08 04:14:16.285708 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-02-08 04:14:16.285726 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-02-08 04:14:16.285744 | orchestrator | skipping: [testbed-node-2] 2026-02-08 04:14:16.285760 | orchestrator | 2026-02-08 04:14:16.285778 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ******** 2026-02-08 04:14:16.285799 | orchestrator | Sunday 08 February 2026 04:14:14 +0000 (0:00:01.991) 0:04:15.014 ******* 2026-02-08 04:14:16.285844 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-02-08 04:14:16.285877 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-02-08 04:14:16.285910 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-02-08 04:14:16.285922 | orchestrator | skipping: 
[testbed-node-3] 2026-02-08 04:14:16.285934 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-02-08 04:14:16.285946 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-02-08 04:14:16.285958 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-02-08 04:14:16.285972 | orchestrator | skipping: [testbed-node-4] 2026-02-08 04:14:16.286001 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-02-08 04:14:29.000703 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 
8022'], 'timeout': '30'}}})  2026-02-08 04:14:29.000822 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-02-08 04:14:29.000841 | orchestrator | skipping: [testbed-node-5] 2026-02-08 04:14:29.000856 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-02-08 04:14:29.000869 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-02-08 04:14:29.000880 | orchestrator | skipping: [testbed-node-0] 2026-02-08 04:14:29.000892 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-02-08 04:14:29.000921 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-02-08 04:14:29.000958 | orchestrator | skipping: [testbed-node-1] 2026-02-08 04:14:29.000989 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': 
['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-02-08 04:14:29.001002 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-02-08 04:14:29.001014 | orchestrator | skipping: [testbed-node-2] 2026-02-08 04:14:29.001066 | orchestrator | 2026-02-08 04:14:29.001090 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-02-08 04:14:29.001110 | orchestrator | Sunday 08 February 2026 04:14:17 +0000 (0:00:02.611) 0:04:17.626 ******* 2026-02-08 04:14:29.001125 | orchestrator | skipping: [testbed-node-0] 2026-02-08 04:14:29.001136 | orchestrator | skipping: [testbed-node-1] 2026-02-08 04:14:29.001147 | orchestrator | skipping: [testbed-node-2] 2026-02-08 04:14:29.001159 | orchestrator | included: /ansible/roles/nova-cell/tasks/external_ceph.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-08 04:14:29.001170 | orchestrator | 2026-02-08 04:14:29.001182 | orchestrator | TASK [nova-cell : Check nova keyring file] ************************************* 2026-02-08 04:14:29.001193 | orchestrator | Sunday 08 February 2026 04:14:18 +0000 (0:00:01.027) 0:04:18.653 
******* 2026-02-08 04:14:29.001204 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-02-08 04:14:29.001216 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-02-08 04:14:29.001230 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-02-08 04:14:29.001242 | orchestrator | 2026-02-08 04:14:29.001256 | orchestrator | TASK [nova-cell : Check cinder keyring file] *********************************** 2026-02-08 04:14:29.001270 | orchestrator | Sunday 08 February 2026 04:14:19 +0000 (0:00:01.385) 0:04:20.038 ******* 2026-02-08 04:14:29.001283 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-02-08 04:14:29.001297 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-02-08 04:14:29.001310 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-02-08 04:14:29.001328 | orchestrator | 2026-02-08 04:14:29.001346 | orchestrator | TASK [nova-cell : Extract nova key from file] ********************************** 2026-02-08 04:14:29.001365 | orchestrator | Sunday 08 February 2026 04:14:20 +0000 (0:00:01.164) 0:04:21.203 ******* 2026-02-08 04:14:29.001381 | orchestrator | ok: [testbed-node-3] 2026-02-08 04:14:29.001400 | orchestrator | ok: [testbed-node-4] 2026-02-08 04:14:29.001417 | orchestrator | ok: [testbed-node-5] 2026-02-08 04:14:29.001435 | orchestrator | 2026-02-08 04:14:29.001454 | orchestrator | TASK [nova-cell : Extract cinder key from file] ******************************** 2026-02-08 04:14:29.001474 | orchestrator | Sunday 08 February 2026 04:14:21 +0000 (0:00:00.596) 0:04:21.800 ******* 2026-02-08 04:14:29.001488 | orchestrator | ok: [testbed-node-3] 2026-02-08 04:14:29.001498 | orchestrator | ok: [testbed-node-4] 2026-02-08 04:14:29.001509 | orchestrator | ok: [testbed-node-5] 2026-02-08 04:14:29.001532 | orchestrator | 2026-02-08 04:14:29.001543 | orchestrator | TASK [nova-cell : Copy over ceph nova keyring file] **************************** 2026-02-08 04:14:29.001554 | orchestrator | Sunday 08 February 2026 04:14:21 +0000 (0:00:00.591) 
0:04:22.391 ******* 2026-02-08 04:14:29.001566 | orchestrator | changed: [testbed-node-3] => (item=nova-compute) 2026-02-08 04:14:29.001577 | orchestrator | changed: [testbed-node-4] => (item=nova-compute) 2026-02-08 04:14:29.001587 | orchestrator | changed: [testbed-node-5] => (item=nova-compute) 2026-02-08 04:14:29.001598 | orchestrator | 2026-02-08 04:14:29.001609 | orchestrator | TASK [nova-cell : Copy over ceph cinder keyring file] ************************** 2026-02-08 04:14:29.001620 | orchestrator | Sunday 08 February 2026 04:14:23 +0000 (0:00:01.554) 0:04:23.946 ******* 2026-02-08 04:14:29.001631 | orchestrator | changed: [testbed-node-3] => (item=nova-compute) 2026-02-08 04:14:29.001642 | orchestrator | changed: [testbed-node-4] => (item=nova-compute) 2026-02-08 04:14:29.001653 | orchestrator | changed: [testbed-node-5] => (item=nova-compute) 2026-02-08 04:14:29.001663 | orchestrator | 2026-02-08 04:14:29.001674 | orchestrator | TASK [nova-cell : Copy over ceph.conf] ***************************************** 2026-02-08 04:14:29.001685 | orchestrator | Sunday 08 February 2026 04:14:24 +0000 (0:00:01.253) 0:04:25.199 ******* 2026-02-08 04:14:29.001696 | orchestrator | changed: [testbed-node-3] => (item=nova-compute) 2026-02-08 04:14:29.001706 | orchestrator | changed: [testbed-node-4] => (item=nova-compute) 2026-02-08 04:14:29.001721 | orchestrator | changed: [testbed-node-5] => (item=nova-compute) 2026-02-08 04:14:29.001747 | orchestrator | changed: [testbed-node-4] => (item=nova-libvirt) 2026-02-08 04:14:29.001766 | orchestrator | changed: [testbed-node-5] => (item=nova-libvirt) 2026-02-08 04:14:29.001783 | orchestrator | changed: [testbed-node-3] => (item=nova-libvirt) 2026-02-08 04:14:29.001802 | orchestrator | 2026-02-08 04:14:29.001819 | orchestrator | TASK [nova-cell : Ensure /etc/ceph directory exists (host libvirt)] ************ 2026-02-08 04:14:29.001848 | orchestrator | Sunday 08 February 2026 04:14:28 +0000 (0:00:03.933) 0:04:29.132 ******* 
2026-02-08 04:14:29.001866 | orchestrator | skipping: [testbed-node-3] 2026-02-08 04:14:29.001884 | orchestrator | skipping: [testbed-node-4] 2026-02-08 04:14:29.001901 | orchestrator | skipping: [testbed-node-5] 2026-02-08 04:14:29.001919 | orchestrator | 2026-02-08 04:14:29.001950 | orchestrator | TASK [nova-cell : Copy over ceph.conf (host libvirt)] ************************** 2026-02-08 04:14:44.476697 | orchestrator | Sunday 08 February 2026 04:14:28 +0000 (0:00:00.315) 0:04:29.448 ******* 2026-02-08 04:14:44.476818 | orchestrator | skipping: [testbed-node-3] 2026-02-08 04:14:44.476831 | orchestrator | skipping: [testbed-node-4] 2026-02-08 04:14:44.476839 | orchestrator | skipping: [testbed-node-5] 2026-02-08 04:14:44.476846 | orchestrator | 2026-02-08 04:14:44.476853 | orchestrator | TASK [nova-cell : Ensuring libvirt secrets directory exists] ******************* 2026-02-08 04:14:44.476861 | orchestrator | Sunday 08 February 2026 04:14:29 +0000 (0:00:00.599) 0:04:30.048 ******* 2026-02-08 04:14:44.476872 | orchestrator | changed: [testbed-node-3] 2026-02-08 04:14:44.476910 | orchestrator | changed: [testbed-node-4] 2026-02-08 04:14:44.476928 | orchestrator | changed: [testbed-node-5] 2026-02-08 04:14:44.476936 | orchestrator | 2026-02-08 04:14:44.476944 | orchestrator | TASK [nova-cell : Pushing nova secret xml for libvirt] ************************* 2026-02-08 04:14:44.476957 | orchestrator | Sunday 08 February 2026 04:14:30 +0000 (0:00:01.402) 0:04:31.450 ******* 2026-02-08 04:14:44.476970 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True}) 2026-02-08 04:14:44.476986 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True}) 2026-02-08 04:14:44.476999 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova 
secret', 'enabled': True}) 2026-02-08 04:14:44.477011 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'}) 2026-02-08 04:14:44.477191 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'}) 2026-02-08 04:14:44.477205 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'}) 2026-02-08 04:14:44.477215 | orchestrator | 2026-02-08 04:14:44.477225 | orchestrator | TASK [nova-cell : Pushing secrets key for libvirt] ***************************** 2026-02-08 04:14:44.477237 | orchestrator | Sunday 08 February 2026 04:14:34 +0000 (0:00:03.428) 0:04:34.878 ******* 2026-02-08 04:14:44.477248 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-02-08 04:14:44.477261 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-02-08 04:14:44.477273 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-02-08 04:14:44.477285 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-02-08 04:14:44.477298 | orchestrator | changed: [testbed-node-4] 2026-02-08 04:14:44.477310 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-02-08 04:14:44.477321 | orchestrator | changed: [testbed-node-3] 2026-02-08 04:14:44.477332 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-02-08 04:14:44.477344 | orchestrator | changed: [testbed-node-5] 2026-02-08 04:14:44.477355 | orchestrator | 2026-02-08 04:14:44.477367 | orchestrator | TASK [nova-cell : Check if policies shall be overwritten] ********************** 2026-02-08 04:14:44.477380 | orchestrator | Sunday 08 February 2026 04:14:38 +0000 (0:00:03.888) 0:04:38.767 ******* 2026-02-08 04:14:44.477392 | orchestrator | skipping: [testbed-node-3] 2026-02-08 04:14:44.477404 | orchestrator | 2026-02-08 04:14:44.477417 | 
orchestrator | TASK [nova-cell : Set nova policy file] **************************************** 2026-02-08 04:14:44.477429 | orchestrator | Sunday 08 February 2026 04:14:38 +0000 (0:00:00.136) 0:04:38.904 ******* 2026-02-08 04:14:44.477440 | orchestrator | skipping: [testbed-node-3] 2026-02-08 04:14:44.477451 | orchestrator | skipping: [testbed-node-4] 2026-02-08 04:14:44.477462 | orchestrator | skipping: [testbed-node-5] 2026-02-08 04:14:44.477473 | orchestrator | skipping: [testbed-node-0] 2026-02-08 04:14:44.477484 | orchestrator | skipping: [testbed-node-1] 2026-02-08 04:14:44.477495 | orchestrator | skipping: [testbed-node-2] 2026-02-08 04:14:44.477506 | orchestrator | 2026-02-08 04:14:44.477517 | orchestrator | TASK [nova-cell : Check for vendordata file] *********************************** 2026-02-08 04:14:44.477528 | orchestrator | Sunday 08 February 2026 04:14:39 +0000 (0:00:00.921) 0:04:39.826 ******* 2026-02-08 04:14:44.477541 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-02-08 04:14:44.477553 | orchestrator | 2026-02-08 04:14:44.477564 | orchestrator | TASK [nova-cell : Set vendordata file path] ************************************ 2026-02-08 04:14:44.477576 | orchestrator | Sunday 08 February 2026 04:14:40 +0000 (0:00:00.887) 0:04:40.713 ******* 2026-02-08 04:14:44.477589 | orchestrator | skipping: [testbed-node-3] 2026-02-08 04:14:44.477600 | orchestrator | skipping: [testbed-node-4] 2026-02-08 04:14:44.477612 | orchestrator | skipping: [testbed-node-5] 2026-02-08 04:14:44.477623 | orchestrator | skipping: [testbed-node-0] 2026-02-08 04:14:44.477635 | orchestrator | skipping: [testbed-node-1] 2026-02-08 04:14:44.477647 | orchestrator | skipping: [testbed-node-2] 2026-02-08 04:14:44.477658 | orchestrator | 2026-02-08 04:14:44.477669 | orchestrator | TASK [nova-cell : Copying over config.json files for services] ***************** 2026-02-08 04:14:44.477681 | orchestrator | Sunday 08 February 2026 04:14:41 +0000 (0:00:00.952) 0:04:41.666 
******* 2026-02-08 04:14:44.477737 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-02-08 04:14:44.477770 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-02-08 04:14:44.477784 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 
'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-02-08 04:14:44.477795 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-02-08 04:14:44.477809 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-02-08 04:14:44.477828 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-02-08 04:14:44.477861 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-02-08 04:14:49.211638 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-02-08 
04:14:49.211719 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-02-08 04:14:49.211729 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-02-08 04:14:49.211737 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-02-08 04:14:49.211746 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 
'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-02-08 04:14:49.211769 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-02-08 04:14:49.211810 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-02-08 04:14:49.211820 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-02-08 04:14:49.211827 | orchestrator |
2026-02-08 04:14:49.211836 | orchestrator | TASK [nova-cell : Copying over nova.conf] **************************************
2026-02-08 04:14:49.211844 | orchestrator | Sunday 08 February 2026 04:14:44 +0000 (0:00:03.574) 0:04:45.241 *******
2026-02-08 04:14:49.211852 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-02-08 04:14:49.211861 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-02-08 04:14:49.211871 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-02-08 04:14:49.211897 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-02-08 04:14:56.675198 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-02-08 04:14:56.675316 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-02-08 04:14:56.675336 | orchestrator | changed: [testbed-node-4] => (item={'key':
'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-02-08 04:14:56.675366 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-02-08 04:14:56.675399 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-02-08 04:14:56.675432 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-02-08 04:14:56.675445 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-02-08 04:14:56.675456 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-02-08 04:14:56.675468 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-02-08 04:14:56.675480 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-02-08 04:14:56.675504 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-02-08 04:14:56.675517 | orchestrator |
2026-02-08 04:14:56.675530 | orchestrator | TASK [nova-cell : Copying over Nova compute provider config] *******************
2026-02-08 04:14:56.675543 | orchestrator | Sunday 08 February 2026 04:14:51 +0000 (0:00:06.714) 0:04:51.955 *******
2026-02-08 04:14:56.675555 | orchestrator | skipping: [testbed-node-4]
2026-02-08 04:14:56.675567 | orchestrator | skipping: [testbed-node-3]
2026-02-08 04:14:56.675578 | orchestrator | skipping: [testbed-node-5]
2026-02-08 04:14:56.675588 | orchestrator | skipping: [testbed-node-0]
2026-02-08 04:14:56.675598 | orchestrator | skipping: [testbed-node-1]
2026-02-08 04:14:56.675609 | orchestrator | skipping: [testbed-node-2]
2026-02-08 04:14:56.675620 | orchestrator |
2026-02-08 04:14:56.675631 | orchestrator | TASK [nova-cell : Copying over libvirt configuration] **************************
2026-02-08 04:14:56.675642 | orchestrator | Sunday 08 February 2026 04:14:52 +0000 (0:00:01.349) 0:04:53.305 *******
2026-02-08 04:14:56.675653 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})
2026-02-08 04:14:56.675672 | orchestrator | changed: [testbed-node-3] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})
2026-02-08 04:15:14.134761 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})
2026-02-08 04:15:14.134893 | orchestrator | changed: [testbed-node-4] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})
2026-02-08 04:15:14.134918 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})
2026-02-08 04:15:14.134940 | orchestrator | changed: [testbed-node-5] => (item={'src':
'qemu.conf.j2', 'dest': 'qemu.conf'})
2026-02-08 04:15:14.134961 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})
2026-02-08 04:15:14.134975 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})
2026-02-08 04:15:14.134987 | orchestrator | skipping: [testbed-node-1]
2026-02-08 04:15:14.134999 | orchestrator | skipping: [testbed-node-0]
2026-02-08 04:15:14.135055 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})
2026-02-08 04:15:14.135077 | orchestrator | skipping: [testbed-node-2]
2026-02-08 04:15:14.135097 | orchestrator | changed: [testbed-node-3] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})
2026-02-08 04:15:14.135117 | orchestrator | changed: [testbed-node-5] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})
2026-02-08 04:15:14.135136 | orchestrator | changed: [testbed-node-4] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})
2026-02-08 04:15:14.135151 | orchestrator |
2026-02-08 04:15:14.135163 | orchestrator | TASK [nova-cell : Copying over libvirt TLS keys] *******************************
2026-02-08 04:15:14.135174 | orchestrator | Sunday 08 February 2026 04:14:56 +0000 (0:00:03.819) 0:04:57.124 *******
2026-02-08 04:15:14.135185 | orchestrator | skipping: [testbed-node-3]
2026-02-08 04:15:14.135196 | orchestrator | skipping: [testbed-node-4]
2026-02-08 04:15:14.135208 | orchestrator | skipping: [testbed-node-5]
2026-02-08 04:15:14.135219 | orchestrator | skipping: [testbed-node-0]
2026-02-08 04:15:14.135230 | orchestrator | skipping: [testbed-node-1]
2026-02-08 04:15:14.135265 | orchestrator | skipping: [testbed-node-2]
2026-02-08 04:15:14.135278 | orchestrator |
2026-02-08 04:15:14.135291 | orchestrator | TASK [nova-cell : Copying over libvirt SASL configuration] *********************
2026-02-08 04:15:14.135304 | orchestrator | Sunday 08 February 2026 04:14:57 +0000 (0:00:00.644) 0:04:57.768 *******
2026-02-08 04:15:14.135317 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})
2026-02-08 04:15:14.135330 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})
2026-02-08 04:15:14.135343 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})
2026-02-08 04:15:14.135356 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})
2026-02-08 04:15:14.135369 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})
2026-02-08 04:15:14.135381 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})
2026-02-08 04:15:14.135400 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})
2026-02-08 04:15:14.135419 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})
2026-02-08 04:15:14.135437 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})
2026-02-08 04:15:14.135455 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})
2026-02-08 04:15:14.135474 | orchestrator | skipping: [testbed-node-1]
2026-02-08 04:15:14.135511 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})
2026-02-08 04:15:14.135531 | orchestrator | skipping: [testbed-node-0]
2026-02-08 04:15:14.135550 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})
2026-02-08 04:15:14.135570 | orchestrator | skipping: [testbed-node-2]
2026-02-08 04:15:14.135588 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})
2026-02-08 04:15:14.135606 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})
2026-02-08 04:15:14.135625 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})
2026-02-08 04:15:14.135642 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})
2026-02-08 04:15:14.135659 | orchestrator | changed: [testbed-node-4] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})
2026-02-08 04:15:14.135675 | orchestrator | changed: [testbed-node-5] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})
2026-02-08 04:15:14.135692 | orchestrator |
2026-02-08 04:15:14.135709 | orchestrator | TASK [nova-cell : Copying files for nova-ssh] **********************************
2026-02-08 04:15:14.135726 | orchestrator | Sunday 08 February 2026 04:15:02 +0000 (0:00:05.376) 0:05:03.145 *******
2026-02-08 04:15:14.135770 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2026-02-08 04:15:14.135790 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2026-02-08 04:15:14.135809 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2026-02-08 04:15:14.135829 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2026-02-08 04:15:14.135848 | orchestrator | changed: [testbed-node-4] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2026-02-08 04:15:14.135882 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2026-02-08 04:15:14.135894 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2026-02-08 04:15:14.135904 | orchestrator | changed: [testbed-node-5] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2026-02-08 04:15:14.135915 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2026-02-08 04:15:14.135926 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-02-08 04:15:14.135936 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-02-08 04:15:14.135947 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-02-08 04:15:14.135957 | orchestrator | changed: [testbed-node-3] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2026-02-08 04:15:14.135968 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2026-02-08 04:15:14.135978 | orchestrator | skipping: [testbed-node-1]
2026-02-08 04:15:14.135989 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2026-02-08 04:15:14.136000 | orchestrator | skipping: [testbed-node-0]
2026-02-08 04:15:14.136039 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2026-02-08 04:15:14.136053 | orchestrator | skipping: [testbed-node-2]
2026-02-08 04:15:14.136064 | orchestrator | changed: [testbed-node-4] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2026-02-08 04:15:14.136075 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2026-02-08 04:15:14.136086 | orchestrator | changed: [testbed-node-3] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-02-08 04:15:14.136096 | orchestrator | changed: [testbed-node-5] => (item={'src':
'id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-02-08 04:15:14.136107 | orchestrator | changed: [testbed-node-4] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-02-08 04:15:14.136118 | orchestrator | changed: [testbed-node-3] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2026-02-08 04:15:14.136129 | orchestrator | changed: [testbed-node-4] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2026-02-08 04:15:14.136140 | orchestrator | changed: [testbed-node-5] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2026-02-08 04:15:14.136159 | orchestrator |
2026-02-08 04:15:14.136177 | orchestrator | TASK [nova-cell : Copying VMware vCenter CA file] ******************************
2026-02-08 04:15:14.136195 | orchestrator | Sunday 08 February 2026 04:15:09 +0000 (0:00:06.916) 0:05:10.061 *******
2026-02-08 04:15:14.136213 | orchestrator | skipping: [testbed-node-3]
2026-02-08 04:15:14.136230 | orchestrator | skipping: [testbed-node-4]
2026-02-08 04:15:14.136250 | orchestrator | skipping: [testbed-node-5]
2026-02-08 04:15:14.136268 | orchestrator | skipping: [testbed-node-0]
2026-02-08 04:15:14.136288 | orchestrator | skipping: [testbed-node-1]
2026-02-08 04:15:14.136307 | orchestrator | skipping: [testbed-node-2]
2026-02-08 04:15:14.136325 | orchestrator |
2026-02-08 04:15:14.136343 | orchestrator | TASK [nova-cell : Copying 'release' file for nova_compute] *********************
2026-02-08 04:15:14.136355 | orchestrator | Sunday 08 February 2026 04:15:10 +0000 (0:00:00.869) 0:05:10.931 *******
2026-02-08 04:15:14.136366 | orchestrator | skipping: [testbed-node-3]
2026-02-08 04:15:14.136385 | orchestrator | skipping: [testbed-node-4]
2026-02-08 04:15:14.136397 | orchestrator | skipping: [testbed-node-5]
2026-02-08 04:15:14.136407 | orchestrator | skipping: [testbed-node-0]
2026-02-08 04:15:14.136418 | orchestrator | skipping: [testbed-node-1]
2026-02-08 04:15:14.136429 | orchestrator | skipping: [testbed-node-2]
2026-02-08 04:15:14.136439 | orchestrator |
2026-02-08 04:15:14.136450 | orchestrator | TASK [nova-cell : Generating 'hostnqn' file for nova_compute] ******************
2026-02-08 04:15:14.136461 | orchestrator | Sunday 08 February 2026 04:15:11 +0000 (0:00:00.700) 0:05:11.632 *******
2026-02-08 04:15:14.136472 | orchestrator | skipping: [testbed-node-0]
2026-02-08 04:15:14.136492 | orchestrator | changed: [testbed-node-3]
2026-02-08 04:15:14.136502 | orchestrator | skipping: [testbed-node-1]
2026-02-08 04:15:14.136513 | orchestrator | skipping: [testbed-node-2]
2026-02-08 04:15:14.136523 | orchestrator | changed: [testbed-node-4]
2026-02-08 04:15:14.136540 | orchestrator | changed: [testbed-node-5]
2026-02-08 04:15:14.136558 | orchestrator |
2026-02-08 04:15:14.136577 | orchestrator | TASK [nova-cell : Copying over existing policy file] ***************************
2026-02-08 04:15:14.136595 | orchestrator | Sunday 08 February 2026 04:15:13 +0000 (0:00:02.209) 0:05:13.841 *******
2026-02-08 04:15:14.136632 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-02-08 04:15:14.840170 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-02-08 04:15:14.840274 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-02-08 04:15:14.840292 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-02-08 04:15:14.840324 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-02-08 04:15:14.840358 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-02-08 04:15:14.840371 | orchestrator | skipping: [testbed-node-3]
2026-02-08 04:15:14.840384 | orchestrator | skipping: [testbed-node-4]
2026-02-08 04:15:14.840417 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt',
'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-02-08 04:15:14.840430 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-02-08 04:15:14.840442 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-02-08 04:15:14.840454 | orchestrator | skipping: [testbed-node-5]
2026-02-08 04:15:14.840471 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-02-08 04:15:14.840491 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-02-08 04:15:14.840502 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-02-08 04:15:14.840514 | orchestrator | skipping: [testbed-node-1]
2026-02-08 04:15:14.840534 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-02-08 04:15:18.115997 | orchestrator | skipping: [testbed-node-0]
2026-02-08 04:15:18.116129 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-02-08 04:15:18.116138 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-02-08 04:15:18.116142 | orchestrator | skipping: [testbed-node-2]
2026-02-08 04:15:18.116146 | orchestrator |
2026-02-08 04:15:18.116151 | orchestrator | TASK [nova-cell : Copying over vendordata file to containers] ******************
2026-02-08 04:15:18.116156 | orchestrator | Sunday 08 February 2026 04:15:14 +0000 (0:00:01.450) 0:05:15.292 *******
2026-02-08 04:15:18.116161 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)
2026-02-08 04:15:18.116165 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)
2026-02-08 04:15:18.116183 | orchestrator | skipping: [testbed-node-3]
2026-02-08 04:15:18.116187 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)
2026-02-08 04:15:18.116191 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)
2026-02-08 04:15:18.116195 | orchestrator | skipping: [testbed-node-4]
2026-02-08 04:15:18.116199 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)
2026-02-08 04:15:18.116202 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)
2026-02-08 04:15:18.116206 | orchestrator | skipping: [testbed-node-5]
2026-02-08 04:15:18.116210 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)
2026-02-08 04:15:18.116214 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)
2026-02-08 04:15:18.116217 | orchestrator | skipping: [testbed-node-0]
2026-02-08 04:15:18.116232 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)
2026-02-08 04:15:18.116236 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)
2026-02-08 04:15:18.116240 |
orchestrator | skipping: [testbed-node-1] 2026-02-08 04:15:18.116244 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)  2026-02-08 04:15:18.116247 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)  2026-02-08 04:15:18.116251 | orchestrator | skipping: [testbed-node-2] 2026-02-08 04:15:18.116255 | orchestrator | 2026-02-08 04:15:18.116259 | orchestrator | TASK [nova-cell : Check nova-cell containers] ********************************** 2026-02-08 04:15:18.116263 | orchestrator | Sunday 08 February 2026 04:15:15 +0000 (0:00:00.976) 0:05:16.268 ******* 2026-02-08 04:15:18.116268 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-02-08 04:15:18.116285 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-02-08 04:15:18.116289 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-02-08 04:15:18.116297 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 
2026-02-08 04:15:18.116305 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-02-08 04:15:18.116309 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-02-08 04:15:18.116313 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-02-08 04:15:18.116321 | orchestrator | changed: [testbed-node-2] => 
(item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-02-08 04:17:26.134161 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-02-08 04:17:26.134257 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 
5672'], 'timeout': '30'}}}) 2026-02-08 04:17:26.134276 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-02-08 04:17:26.134281 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-02-08 04:17:26.134286 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-02-08 04:17:26.134301 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-02-08 04:17:26.134305 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-02-08 04:17:26.134313 | orchestrator | 2026-02-08 04:17:26.134319 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-02-08 04:17:26.134324 | orchestrator | Sunday 08 February 2026 04:15:18 +0000 (0:00:02.653) 0:05:18.922 ******* 2026-02-08 04:17:26.134328 | orchestrator | skipping: [testbed-node-3] 2026-02-08 04:17:26.134333 | orchestrator | skipping: 
[testbed-node-4] 2026-02-08 04:17:26.134337 | orchestrator | skipping: [testbed-node-5] 2026-02-08 04:17:26.134341 | orchestrator | skipping: [testbed-node-0] 2026-02-08 04:17:26.134345 | orchestrator | skipping: [testbed-node-1] 2026-02-08 04:17:26.134349 | orchestrator | skipping: [testbed-node-2] 2026-02-08 04:17:26.134353 | orchestrator | 2026-02-08 04:17:26.134357 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-02-08 04:17:26.134361 | orchestrator | Sunday 08 February 2026 04:15:19 +0000 (0:00:00.858) 0:05:19.781 ******* 2026-02-08 04:17:26.134365 | orchestrator | 2026-02-08 04:17:26.134369 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-02-08 04:17:26.134373 | orchestrator | Sunday 08 February 2026 04:15:19 +0000 (0:00:00.150) 0:05:19.931 ******* 2026-02-08 04:17:26.134377 | orchestrator | 2026-02-08 04:17:26.134381 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-02-08 04:17:26.134385 | orchestrator | Sunday 08 February 2026 04:15:19 +0000 (0:00:00.163) 0:05:20.095 ******* 2026-02-08 04:17:26.134389 | orchestrator | 2026-02-08 04:17:26.134393 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-02-08 04:17:26.134397 | orchestrator | Sunday 08 February 2026 04:15:19 +0000 (0:00:00.145) 0:05:20.240 ******* 2026-02-08 04:17:26.134401 | orchestrator | 2026-02-08 04:17:26.134405 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-02-08 04:17:26.134409 | orchestrator | Sunday 08 February 2026 04:15:19 +0000 (0:00:00.141) 0:05:20.382 ******* 2026-02-08 04:17:26.134413 | orchestrator | 2026-02-08 04:17:26.134417 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-02-08 04:17:26.134423 | orchestrator | Sunday 08 February 2026 04:15:20 +0000 (0:00:00.329) 
0:05:20.711 ******* 2026-02-08 04:17:26.134427 | orchestrator | 2026-02-08 04:17:26.134431 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-conductor container] ***************** 2026-02-08 04:17:26.134435 | orchestrator | Sunday 08 February 2026 04:15:20 +0000 (0:00:00.166) 0:05:20.877 ******* 2026-02-08 04:17:26.134439 | orchestrator | changed: [testbed-node-0] 2026-02-08 04:17:26.134443 | orchestrator | changed: [testbed-node-2] 2026-02-08 04:17:26.134447 | orchestrator | changed: [testbed-node-1] 2026-02-08 04:17:26.134451 | orchestrator | 2026-02-08 04:17:26.134455 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-novncproxy container] **************** 2026-02-08 04:17:26.134459 | orchestrator | Sunday 08 February 2026 04:15:32 +0000 (0:00:11.932) 0:05:32.810 ******* 2026-02-08 04:17:26.134463 | orchestrator | changed: [testbed-node-0] 2026-02-08 04:17:26.134467 | orchestrator | changed: [testbed-node-1] 2026-02-08 04:17:26.134471 | orchestrator | changed: [testbed-node-2] 2026-02-08 04:17:26.134475 | orchestrator | 2026-02-08 04:17:26.134479 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-ssh container] *********************** 2026-02-08 04:17:26.134483 | orchestrator | Sunday 08 February 2026 04:15:52 +0000 (0:00:19.869) 0:05:52.679 ******* 2026-02-08 04:17:26.134487 | orchestrator | changed: [testbed-node-3] 2026-02-08 04:17:26.134491 | orchestrator | changed: [testbed-node-5] 2026-02-08 04:17:26.134494 | orchestrator | changed: [testbed-node-4] 2026-02-08 04:17:26.134498 | orchestrator | 2026-02-08 04:17:26.134502 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-libvirt container] ******************* 2026-02-08 04:17:26.134506 | orchestrator | Sunday 08 February 2026 04:16:16 +0000 (0:00:24.336) 0:06:17.015 ******* 2026-02-08 04:17:26.134510 | orchestrator | changed: [testbed-node-5] 2026-02-08 04:17:26.134517 | orchestrator | changed: [testbed-node-3] 2026-02-08 04:17:26.134521 | orchestrator | changed: 
[testbed-node-4] 2026-02-08 04:17:26.134525 | orchestrator | 2026-02-08 04:17:26.134529 | orchestrator | RUNNING HANDLER [nova-cell : Checking libvirt container is ready] ************** 2026-02-08 04:17:26.134533 | orchestrator | Sunday 08 February 2026 04:16:59 +0000 (0:00:42.830) 0:06:59.846 ******* 2026-02-08 04:17:26.134537 | orchestrator | changed: [testbed-node-4] 2026-02-08 04:17:26.134541 | orchestrator | changed: [testbed-node-3] 2026-02-08 04:17:26.134545 | orchestrator | changed: [testbed-node-5] 2026-02-08 04:17:26.134549 | orchestrator | 2026-02-08 04:17:26.134553 | orchestrator | RUNNING HANDLER [nova-cell : Create libvirt SASL user] ************************* 2026-02-08 04:17:26.134557 | orchestrator | Sunday 08 February 2026 04:17:00 +0000 (0:00:00.812) 0:07:00.659 ******* 2026-02-08 04:17:26.134561 | orchestrator | changed: [testbed-node-3] 2026-02-08 04:17:26.134565 | orchestrator | changed: [testbed-node-4] 2026-02-08 04:17:26.134568 | orchestrator | changed: [testbed-node-5] 2026-02-08 04:17:26.134572 | orchestrator | 2026-02-08 04:17:26.134576 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-compute container] ******************* 2026-02-08 04:17:26.134580 | orchestrator | Sunday 08 February 2026 04:17:01 +0000 (0:00:00.812) 0:07:01.472 ******* 2026-02-08 04:17:26.134584 | orchestrator | changed: [testbed-node-3] 2026-02-08 04:17:26.134588 | orchestrator | changed: [testbed-node-4] 2026-02-08 04:17:26.134592 | orchestrator | changed: [testbed-node-5] 2026-02-08 04:17:26.134596 | orchestrator | 2026-02-08 04:17:26.134600 | orchestrator | RUNNING HANDLER [nova-cell : Wait for nova-compute services to update service versions] *** 2026-02-08 04:17:26.134608 | orchestrator | Sunday 08 February 2026 04:17:26 +0000 (0:00:25.110) 0:07:26.582 ******* 2026-02-08 04:18:37.212514 | orchestrator | skipping: [testbed-node-3] 2026-02-08 04:18:37.212645 | orchestrator | 2026-02-08 04:18:37.212662 | orchestrator | TASK [nova-cell : Waiting for 
nova-compute services to register themselves] **** 2026-02-08 04:18:37.212675 | orchestrator | Sunday 08 February 2026 04:17:26 +0000 (0:00:00.143) 0:07:26.726 ******* 2026-02-08 04:18:37.212685 | orchestrator | skipping: [testbed-node-3] 2026-02-08 04:18:37.212712 | orchestrator | skipping: [testbed-node-5] 2026-02-08 04:18:37.212723 | orchestrator | skipping: [testbed-node-1] 2026-02-08 04:18:37.212733 | orchestrator | skipping: [testbed-node-2] 2026-02-08 04:18:37.212752 | orchestrator | skipping: [testbed-node-0] 2026-02-08 04:18:37.212763 | orchestrator | FAILED - RETRYING: [testbed-node-4 -> testbed-node-0]: Waiting for nova-compute services to register themselves (20 retries left). 2026-02-08 04:18:37.212775 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] 2026-02-08 04:18:37.212785 | orchestrator | 2026-02-08 04:18:37.212795 | orchestrator | TASK [nova-cell : Fail if nova-compute service failed to register] ************* 2026-02-08 04:18:37.212805 | orchestrator | Sunday 08 February 2026 04:17:47 +0000 (0:00:21.717) 0:07:48.443 ******* 2026-02-08 04:18:37.212815 | orchestrator | skipping: [testbed-node-4] 2026-02-08 04:18:37.212825 | orchestrator | skipping: [testbed-node-3] 2026-02-08 04:18:37.212835 | orchestrator | skipping: [testbed-node-5] 2026-02-08 04:18:37.212845 | orchestrator | skipping: [testbed-node-2] 2026-02-08 04:18:37.212854 | orchestrator | skipping: [testbed-node-0] 2026-02-08 04:18:37.212864 | orchestrator | skipping: [testbed-node-1] 2026-02-08 04:18:37.212874 | orchestrator | 2026-02-08 04:18:37.212884 | orchestrator | TASK [nova-cell : Include discover_computes.yml] ******************************* 2026-02-08 04:18:37.212894 | orchestrator | Sunday 08 February 2026 04:17:57 +0000 (0:00:09.462) 0:07:57.906 ******* 2026-02-08 04:18:37.212903 | orchestrator | skipping: [testbed-node-3] 2026-02-08 04:18:37.212913 | orchestrator | skipping: [testbed-node-5] 2026-02-08 04:18:37.212923 | orchestrator | skipping: 
[testbed-node-0] 2026-02-08 04:18:37.212932 | orchestrator | skipping: [testbed-node-1] 2026-02-08 04:18:37.212942 | orchestrator | skipping: [testbed-node-2] 2026-02-08 04:18:37.212952 | orchestrator | included: /ansible/roles/nova-cell/tasks/discover_computes.yml for testbed-node-4 2026-02-08 04:18:37.212963 | orchestrator | 2026-02-08 04:18:37.212973 | orchestrator | TASK [nova-cell : Get a list of existing cells] ******************************** 2026-02-08 04:18:37.213040 | orchestrator | Sunday 08 February 2026 04:18:01 +0000 (0:00:04.297) 0:08:02.204 ******* 2026-02-08 04:18:37.213054 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] 2026-02-08 04:18:37.213065 | orchestrator | 2026-02-08 04:18:37.213077 | orchestrator | TASK [nova-cell : Extract current cell settings from list] ********************* 2026-02-08 04:18:37.213088 | orchestrator | Sunday 08 February 2026 04:18:15 +0000 (0:00:13.312) 0:08:15.517 ******* 2026-02-08 04:18:37.213099 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] 2026-02-08 04:18:37.213110 | orchestrator | 2026-02-08 04:18:37.213122 | orchestrator | TASK [nova-cell : Fail if cell settings not found] ***************************** 2026-02-08 04:18:37.213147 | orchestrator | Sunday 08 February 2026 04:18:16 +0000 (0:00:01.544) 0:08:17.061 ******* 2026-02-08 04:18:37.213159 | orchestrator | skipping: [testbed-node-4] 2026-02-08 04:18:37.213170 | orchestrator | 2026-02-08 04:18:37.213181 | orchestrator | TASK [nova-cell : Discover nova hosts] ***************************************** 2026-02-08 04:18:37.213191 | orchestrator | Sunday 08 February 2026 04:18:18 +0000 (0:00:01.696) 0:08:18.757 ******* 2026-02-08 04:18:37.213203 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] 2026-02-08 04:18:37.213214 | orchestrator | 2026-02-08 04:18:37.213225 | orchestrator | TASK [nova-cell : Remove old nova_libvirt_secrets container volume] ************ 2026-02-08 04:18:37.213236 | 
orchestrator | Sunday 08 February 2026 04:18:29 +0000 (0:00:11.532) 0:08:30.290 ******* 2026-02-08 04:18:37.213248 | orchestrator | ok: [testbed-node-3] 2026-02-08 04:18:37.213260 | orchestrator | ok: [testbed-node-4] 2026-02-08 04:18:37.213271 | orchestrator | ok: [testbed-node-5] 2026-02-08 04:18:37.213282 | orchestrator | ok: [testbed-node-0] 2026-02-08 04:18:37.213293 | orchestrator | ok: [testbed-node-1] 2026-02-08 04:18:37.213304 | orchestrator | ok: [testbed-node-2] 2026-02-08 04:18:37.213315 | orchestrator | 2026-02-08 04:18:37.213327 | orchestrator | PLAY [Refresh nova scheduler cell cache] *************************************** 2026-02-08 04:18:37.213338 | orchestrator | 2026-02-08 04:18:37.213349 | orchestrator | TASK [nova : Refresh cell cache in nova scheduler] ***************************** 2026-02-08 04:18:37.213361 | orchestrator | Sunday 08 February 2026 04:18:31 +0000 (0:00:01.834) 0:08:32.124 ******* 2026-02-08 04:18:37.213372 | orchestrator | changed: [testbed-node-0] 2026-02-08 04:18:37.213388 | orchestrator | changed: [testbed-node-1] 2026-02-08 04:18:37.213411 | orchestrator | changed: [testbed-node-2] 2026-02-08 04:18:37.213429 | orchestrator | 2026-02-08 04:18:37.213445 | orchestrator | PLAY [Reload global Nova super conductor services] ***************************** 2026-02-08 04:18:37.213459 | orchestrator | 2026-02-08 04:18:37.213474 | orchestrator | TASK [nova : Reload nova super conductor services to remove RPC version pin] *** 2026-02-08 04:18:37.213488 | orchestrator | Sunday 08 February 2026 04:18:32 +0000 (0:00:00.948) 0:08:33.073 ******* 2026-02-08 04:18:37.213504 | orchestrator | skipping: [testbed-node-0] 2026-02-08 04:18:37.213519 | orchestrator | skipping: [testbed-node-1] 2026-02-08 04:18:37.213535 | orchestrator | skipping: [testbed-node-2] 2026-02-08 04:18:37.213550 | orchestrator | 2026-02-08 04:18:37.213567 | orchestrator | PLAY [Reload Nova cell services] *********************************************** 2026-02-08 
04:18:37.213584 | orchestrator | 2026-02-08 04:18:37.213600 | orchestrator | TASK [nova-cell : Reload nova cell services to remove RPC version cap] ********* 2026-02-08 04:18:37.213616 | orchestrator | Sunday 08 February 2026 04:18:33 +0000 (0:00:00.978) 0:08:34.052 ******* 2026-02-08 04:18:37.213628 | orchestrator | skipping: [testbed-node-3] => (item=nova-conductor)  2026-02-08 04:18:37.213645 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)  2026-02-08 04:18:37.213671 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)  2026-02-08 04:18:37.213687 | orchestrator | skipping: [testbed-node-3] => (item=nova-novncproxy)  2026-02-08 04:18:37.213703 | orchestrator | skipping: [testbed-node-3] => (item=nova-serialproxy)  2026-02-08 04:18:37.213719 | orchestrator | skipping: [testbed-node-3] => (item=nova-spicehtml5proxy)  2026-02-08 04:18:37.213733 | orchestrator | skipping: [testbed-node-3] 2026-02-08 04:18:37.213770 | orchestrator | skipping: [testbed-node-4] => (item=nova-conductor)  2026-02-08 04:18:37.213800 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)  2026-02-08 04:18:37.213816 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)  2026-02-08 04:18:37.213831 | orchestrator | skipping: [testbed-node-4] => (item=nova-novncproxy)  2026-02-08 04:18:37.213846 | orchestrator | skipping: [testbed-node-4] => (item=nova-serialproxy)  2026-02-08 04:18:37.213861 | orchestrator | skipping: [testbed-node-4] => (item=nova-spicehtml5proxy)  2026-02-08 04:18:37.213876 | orchestrator | skipping: [testbed-node-4] 2026-02-08 04:18:37.213891 | orchestrator | skipping: [testbed-node-5] => (item=nova-conductor)  2026-02-08 04:18:37.213907 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)  2026-02-08 04:18:37.213923 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)  2026-02-08 04:18:37.213938 | orchestrator | skipping: [testbed-node-5] => (item=nova-novncproxy)  
2026-02-08 04:18:37.213954 | orchestrator | skipping: [testbed-node-5] => (item=nova-serialproxy)
2026-02-08 04:18:37.213969 | orchestrator | skipping: [testbed-node-5] => (item=nova-spicehtml5proxy)
2026-02-08 04:18:37.213986 | orchestrator | skipping: [testbed-node-5]
2026-02-08 04:18:37.214214 | orchestrator | skipping: [testbed-node-0] => (item=nova-conductor)
2026-02-08 04:18:37.214234 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)
2026-02-08 04:18:37.214250 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)
2026-02-08 04:18:37.214266 | orchestrator | skipping: [testbed-node-0] => (item=nova-novncproxy)
2026-02-08 04:18:37.214283 | orchestrator | skipping: [testbed-node-0] => (item=nova-serialproxy)
2026-02-08 04:18:37.214299 | orchestrator | skipping: [testbed-node-0] => (item=nova-spicehtml5proxy)
2026-02-08 04:18:37.214315 | orchestrator | skipping: [testbed-node-0]
2026-02-08 04:18:37.214331 | orchestrator | skipping: [testbed-node-1] => (item=nova-conductor)
2026-02-08 04:18:37.214342 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)
2026-02-08 04:18:37.214351 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)
2026-02-08 04:18:37.214361 | orchestrator | skipping: [testbed-node-1] => (item=nova-novncproxy)
2026-02-08 04:18:37.214370 | orchestrator | skipping: [testbed-node-1] => (item=nova-serialproxy)
2026-02-08 04:18:37.214380 | orchestrator | skipping: [testbed-node-1] => (item=nova-spicehtml5proxy)
2026-02-08 04:18:37.214389 | orchestrator | skipping: [testbed-node-1]
2026-02-08 04:18:37.214399 | orchestrator | skipping: [testbed-node-2] => (item=nova-conductor)
2026-02-08 04:18:37.214408 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)
2026-02-08 04:18:37.214418 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)
2026-02-08 04:18:37.214428 | orchestrator | skipping: [testbed-node-2] => (item=nova-novncproxy)
2026-02-08 04:18:37.214447 | orchestrator | skipping: [testbed-node-2] => (item=nova-serialproxy)
2026-02-08 04:18:37.214457 | orchestrator | skipping: [testbed-node-2] => (item=nova-spicehtml5proxy)
2026-02-08 04:18:37.214467 | orchestrator | skipping: [testbed-node-2]
2026-02-08 04:18:37.214477 | orchestrator |
2026-02-08 04:18:37.214486 | orchestrator | PLAY [Reload global Nova API services] *****************************************
2026-02-08 04:18:37.214496 | orchestrator |
2026-02-08 04:18:37.214506 | orchestrator | TASK [nova : Reload nova API services to remove RPC version pin] ***************
2026-02-08 04:18:37.214515 | orchestrator | Sunday 08 February 2026 04:18:35 +0000 (0:00:01.547) 0:08:35.600 *******
2026-02-08 04:18:37.214525 | orchestrator | skipping: [testbed-node-0] => (item=nova-scheduler)
2026-02-08 04:18:37.214534 | orchestrator | skipping: [testbed-node-0] => (item=nova-api)
2026-02-08 04:18:37.214544 | orchestrator | skipping: [testbed-node-0]
2026-02-08 04:18:37.214554 | orchestrator | skipping: [testbed-node-1] => (item=nova-scheduler)
2026-02-08 04:18:37.214563 | orchestrator | skipping: [testbed-node-1] => (item=nova-api)
2026-02-08 04:18:37.214573 | orchestrator | skipping: [testbed-node-1]
2026-02-08 04:18:37.214588 | orchestrator | skipping: [testbed-node-2] => (item=nova-scheduler)
2026-02-08 04:18:37.214616 | orchestrator | skipping: [testbed-node-2] => (item=nova-api)
2026-02-08 04:18:37.214632 | orchestrator | skipping: [testbed-node-2]
2026-02-08 04:18:37.214648 | orchestrator |
2026-02-08 04:18:37.214664 | orchestrator | PLAY [Run Nova API online data migrations] *************************************
2026-02-08 04:18:37.214676 | orchestrator |
2026-02-08 04:18:37.214685 | orchestrator | TASK [nova : Run Nova API online database migrations] **************************
2026-02-08 04:18:37.214695 | orchestrator | Sunday 08 February 2026 04:18:35 +0000 (0:00:00.601) 0:08:36.201 *******
2026-02-08 04:18:37.214705 | orchestrator | skipping: [testbed-node-0]
2026-02-08 04:18:37.214714 | orchestrator |
2026-02-08 04:18:37.214724 | orchestrator | PLAY [Run Nova cell online data migrations] ************************************
2026-02-08 04:18:37.214734 | orchestrator |
2026-02-08 04:18:37.214743 | orchestrator | TASK [nova-cell : Run Nova cell online database migrations] ********************
2026-02-08 04:18:37.214753 | orchestrator | Sunday 08 February 2026 04:18:36 +0000 (0:00:00.981) 0:08:37.183 *******
2026-02-08 04:18:37.214762 | orchestrator | skipping: [testbed-node-0]
2026-02-08 04:18:37.214772 | orchestrator | skipping: [testbed-node-1]
2026-02-08 04:18:37.214781 | orchestrator | skipping: [testbed-node-2]
2026-02-08 04:18:37.214791 | orchestrator |
2026-02-08 04:18:37.214801 | orchestrator | PLAY RECAP *********************************************************************
2026-02-08 04:18:37.214811 | orchestrator | testbed-manager : ok=3  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-08 04:18:37.214823 | orchestrator | testbed-node-0 : ok=54  changed=35  unreachable=0 failed=0 skipped=44  rescued=0 ignored=0
2026-02-08 04:18:37.214833 | orchestrator | testbed-node-1 : ok=27  changed=19  unreachable=0 failed=0 skipped=51  rescued=0 ignored=0
2026-02-08 04:18:37.214856 | orchestrator | testbed-node-2 : ok=27  changed=19  unreachable=0 failed=0 skipped=51  rescued=0 ignored=0
2026-02-08 04:18:37.665189 | orchestrator | testbed-node-3 : ok=38  changed=27  unreachable=0 failed=0 skipped=21  rescued=0 ignored=0
2026-02-08 04:18:37.665267 | orchestrator | testbed-node-4 : ok=42  changed=27  unreachable=0 failed=0 skipped=18  rescued=0 ignored=0
2026-02-08 04:18:37.665275 | orchestrator | testbed-node-5 : ok=37  changed=27  unreachable=0 failed=0 skipped=19  rescued=0 ignored=0
2026-02-08 04:18:37.665282 | orchestrator |
2026-02-08 04:18:37.665288 | orchestrator |
2026-02-08 04:18:37.665294 | orchestrator | TASKS RECAP ********************************************************************
2026-02-08 04:18:37.665301 | orchestrator | Sunday 08 February 2026 04:18:37 +0000 (0:00:00.475) 0:08:37.658 *******
2026-02-08 04:18:37.665308 | orchestrator | ===============================================================================
2026-02-08 04:18:37.665313 | orchestrator | nova-cell : Restart nova-libvirt container ----------------------------- 42.83s
2026-02-08 04:18:37.665319 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 30.33s
2026-02-08 04:18:37.665325 | orchestrator | nova-cell : Restart nova-compute container ----------------------------- 25.11s
2026-02-08 04:18:37.665330 | orchestrator | nova-cell : Restart nova-ssh container --------------------------------- 24.34s
2026-02-08 04:18:37.665336 | orchestrator | nova : Restart nova-scheduler container -------------------------------- 22.02s
2026-02-08 04:18:37.665342 | orchestrator | nova-cell : Waiting for nova-compute services to register themselves --- 21.72s
2026-02-08 04:18:37.665348 | orchestrator | nova-cell : Running Nova cell bootstrap container ---------------------- 20.82s
2026-02-08 04:18:37.665353 | orchestrator | nova-cell : Restart nova-novncproxy container -------------------------- 19.87s
2026-02-08 04:18:37.665359 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 16.95s
2026-02-08 04:18:37.665385 | orchestrator | nova : Create cell0 mappings ------------------------------------------- 13.34s
2026-02-08 04:18:37.665391 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 13.31s
2026-02-08 04:18:37.665397 | orchestrator | nova-cell : Create cell ------------------------------------------------ 12.47s
2026-02-08 04:18:37.665403 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 12.35s
2026-02-08 04:18:37.665408 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 12.12s
2026-02-08 04:18:37.665425 | orchestrator | nova-cell : Restart nova-conductor container --------------------------- 11.93s
2026-02-08 04:18:37.665431 | orchestrator | nova-cell : Discover nova hosts ---------------------------------------- 11.53s
2026-02-08 04:18:37.665437 | orchestrator | nova-cell : Fail if nova-compute service failed to register ------------- 9.46s
2026-02-08 04:18:37.665443 | orchestrator | service-rabbitmq : nova | Ensure RabbitMQ users exist ------------------- 7.53s
2026-02-08 04:18:37.665448 | orchestrator | service-ks-register : nova | Granting user roles ------------------------ 7.23s
2026-02-08 04:18:37.665454 | orchestrator | nova-cell : Copying files for nova-ssh ---------------------------------- 6.92s
2026-02-08 04:18:40.309091 | orchestrator | 2026-02-08 04:18:40 | INFO  | Task 24580afd-e4b2-400d-8913-5341cfd10e3c (horizon) was prepared for execution.
2026-02-08 04:18:40.309180 | orchestrator | 2026-02-08 04:18:40 | INFO  | It takes a moment until task 24580afd-e4b2-400d-8913-5341cfd10e3c (horizon) has been started and output is visible here.
2026-02-08 04:18:48.091882 | orchestrator |
2026-02-08 04:18:48.092079 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-02-08 04:18:48.092114 | orchestrator |
2026-02-08 04:18:48.092140 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-02-08 04:18:48.092157 | orchestrator | Sunday 08 February 2026 04:18:44 +0000 (0:00:00.301) 0:00:00.301 *******
2026-02-08 04:18:48.092174 | orchestrator | ok: [testbed-node-0]
2026-02-08 04:18:48.092191 | orchestrator | ok: [testbed-node-1]
2026-02-08 04:18:48.092208 | orchestrator | ok: [testbed-node-2]
2026-02-08 04:18:48.092225 | orchestrator |
2026-02-08 04:18:48.092241 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-02-08 04:18:48.092258 | orchestrator | Sunday 08 February 2026 04:18:45 +0000 (0:00:00.323) 0:00:00.625 *******
2026-02-08 04:18:48.092274 | orchestrator | ok: [testbed-node-0] => (item=enable_horizon_True)
2026-02-08 04:18:48.092290 | orchestrator | ok: [testbed-node-1] => (item=enable_horizon_True)
2026-02-08 04:18:48.092309 | orchestrator | ok: [testbed-node-2] => (item=enable_horizon_True)
2026-02-08 04:18:48.092325 | orchestrator |
2026-02-08 04:18:48.092342 | orchestrator | PLAY [Apply role horizon] ******************************************************
2026-02-08 04:18:48.092359 | orchestrator |
2026-02-08 04:18:48.092376 | orchestrator | TASK [horizon : include_tasks] *************************************************
2026-02-08 04:18:48.092394 | orchestrator | Sunday 08 February 2026 04:18:45 +0000 (0:00:00.479) 0:00:01.105 *******
2026-02-08 04:18:48.092415 | orchestrator | included: /ansible/roles/horizon/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-08 04:18:48.092438 | orchestrator |
2026-02-08 04:18:48.092457 | orchestrator | TASK [horizon : Ensuring config directories exist] *****************************
2026-02-08 04:18:48.092474 | orchestrator | Sunday 08 February 2026 04:18:46 +0000 (0:00:00.537) 0:00:01.642 ******* 2026-02-08 04:18:48.092511 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': 
{'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-02-08 04:18:48.092580 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': 
'80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-02-08 04:18:48.092604 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend 
acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-02-08 04:18:48.092627 | orchestrator | 2026-02-08 04:18:48.092639 | orchestrator | TASK [horizon : Set empty custom policy] *************************************** 2026-02-08 04:18:48.092651 | orchestrator | Sunday 08 February 2026 04:18:47 +0000 (0:00:01.161) 0:00:02.804 ******* 2026-02-08 04:18:48.092663 | orchestrator | ok: [testbed-node-0] 2026-02-08 04:18:48.092674 | orchestrator | ok: [testbed-node-1] 2026-02-08 04:18:48.092684 | orchestrator | ok: [testbed-node-2] 2026-02-08 04:18:48.092694 | orchestrator | 2026-02-08 04:18:48.092704 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-02-08 04:18:48.092713 | orchestrator | Sunday 08 February 2026 04:18:47 +0000 (0:00:00.515) 0:00:03.320 ******* 2026-02-08 04:18:48.092729 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'cloudkitty', 'enabled': False})  2026-02-08 04:18:54.389720 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'heat', 'enabled': 'no'})  2026-02-08 04:18:54.389860 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'ironic', 'enabled': False})  2026-02-08 04:18:54.389879 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'masakari', 'enabled': False})  
2026-02-08 04:18:54.389890 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'mistral', 'enabled': False})  2026-02-08 04:18:54.389901 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'tacker', 'enabled': False})  2026-02-08 04:18:54.389912 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'trove', 'enabled': False})  2026-02-08 04:18:54.389923 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'watcher', 'enabled': False})  2026-02-08 04:18:54.389935 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'cloudkitty', 'enabled': False})  2026-02-08 04:18:54.389946 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'heat', 'enabled': 'no'})  2026-02-08 04:18:54.389957 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'ironic', 'enabled': False})  2026-02-08 04:18:54.389968 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'masakari', 'enabled': False})  2026-02-08 04:18:54.389979 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'mistral', 'enabled': False})  2026-02-08 04:18:54.390127 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'tacker', 'enabled': False})  2026-02-08 04:18:54.390177 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'trove', 'enabled': False})  2026-02-08 04:18:54.390189 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'watcher', 'enabled': False})  2026-02-08 04:18:54.390200 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'cloudkitty', 'enabled': False})  2026-02-08 04:18:54.390225 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'heat', 'enabled': 'no'})  2026-02-08 04:18:54.390249 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'ironic', 'enabled': False})  2026-02-08 04:18:54.390262 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'masakari', 'enabled': False})  2026-02-08 04:18:54.390275 | orchestrator | skipping: [testbed-node-2] => 
(item={'name': 'mistral', 'enabled': False})  2026-02-08 04:18:54.390287 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'tacker', 'enabled': False})  2026-02-08 04:18:54.390300 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'trove', 'enabled': False})  2026-02-08 04:18:54.390312 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'watcher', 'enabled': False})  2026-02-08 04:18:54.390327 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'ceilometer', 'enabled': 'yes'}) 2026-02-08 04:18:54.390342 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'cinder', 'enabled': 'yes'}) 2026-02-08 04:18:54.390355 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'designate', 'enabled': True}) 2026-02-08 04:18:54.390368 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'glance', 'enabled': True}) 2026-02-08 04:18:54.390381 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'keystone', 'enabled': True}) 2026-02-08 04:18:54.390393 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'magnum', 'enabled': True}) 2026-02-08 04:18:54.390406 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'manila', 'enabled': True}) 2026-02-08 04:18:54.390432 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'neutron', 'enabled': True}) 2026-02-08 
04:18:54.390445 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'nova', 'enabled': True}) 2026-02-08 04:18:54.390459 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'octavia', 'enabled': True}) 2026-02-08 04:18:54.390472 | orchestrator | 2026-02-08 04:18:54.390486 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-02-08 04:18:54.390500 | orchestrator | Sunday 08 February 2026 04:18:48 +0000 (0:00:00.842) 0:00:04.162 ******* 2026-02-08 04:18:54.390513 | orchestrator | ok: [testbed-node-0] 2026-02-08 04:18:54.390527 | orchestrator | ok: [testbed-node-1] 2026-02-08 04:18:54.390539 | orchestrator | ok: [testbed-node-2] 2026-02-08 04:18:54.390552 | orchestrator | 2026-02-08 04:18:54.390564 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-02-08 04:18:54.390576 | orchestrator | Sunday 08 February 2026 04:18:49 +0000 (0:00:00.339) 0:00:04.502 ******* 2026-02-08 04:18:54.390590 | orchestrator | skipping: [testbed-node-0] 2026-02-08 04:18:54.390604 | orchestrator | 2026-02-08 04:18:54.390638 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-02-08 04:18:54.390649 | orchestrator | Sunday 08 February 2026 04:18:49 +0000 (0:00:00.366) 0:00:04.868 ******* 2026-02-08 04:18:54.390671 | orchestrator | skipping: [testbed-node-0] 2026-02-08 04:18:54.390682 | orchestrator | skipping: [testbed-node-1] 2026-02-08 04:18:54.390693 | orchestrator | skipping: [testbed-node-2] 2026-02-08 04:18:54.390704 | orchestrator | 2026-02-08 04:18:54.390714 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-02-08 04:18:54.390725 | orchestrator | Sunday 08 February 2026 04:18:49 +0000 (0:00:00.323) 0:00:05.191 
******* 2026-02-08 04:18:54.390736 | orchestrator | ok: [testbed-node-0] 2026-02-08 04:18:54.390747 | orchestrator | ok: [testbed-node-1] 2026-02-08 04:18:54.390757 | orchestrator | ok: [testbed-node-2] 2026-02-08 04:18:54.390768 | orchestrator | 2026-02-08 04:18:54.390779 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-02-08 04:18:54.390790 | orchestrator | Sunday 08 February 2026 04:18:50 +0000 (0:00:00.378) 0:00:05.569 ******* 2026-02-08 04:18:54.390801 | orchestrator | skipping: [testbed-node-0] 2026-02-08 04:18:54.390811 | orchestrator | 2026-02-08 04:18:54.390822 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-02-08 04:18:54.390833 | orchestrator | Sunday 08 February 2026 04:18:50 +0000 (0:00:00.124) 0:00:05.694 ******* 2026-02-08 04:18:54.390843 | orchestrator | skipping: [testbed-node-0] 2026-02-08 04:18:54.390855 | orchestrator | skipping: [testbed-node-1] 2026-02-08 04:18:54.390866 | orchestrator | skipping: [testbed-node-2] 2026-02-08 04:18:54.390876 | orchestrator | 2026-02-08 04:18:54.390887 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-02-08 04:18:54.390898 | orchestrator | Sunday 08 February 2026 04:18:50 +0000 (0:00:00.311) 0:00:06.006 ******* 2026-02-08 04:18:54.390909 | orchestrator | ok: [testbed-node-0] 2026-02-08 04:18:54.390920 | orchestrator | ok: [testbed-node-1] 2026-02-08 04:18:54.390931 | orchestrator | ok: [testbed-node-2] 2026-02-08 04:18:54.390942 | orchestrator | 2026-02-08 04:18:54.390952 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-02-08 04:18:54.390963 | orchestrator | Sunday 08 February 2026 04:18:51 +0000 (0:00:00.521) 0:00:06.528 ******* 2026-02-08 04:18:54.390974 | orchestrator | skipping: [testbed-node-0] 2026-02-08 04:18:54.390984 | orchestrator | 2026-02-08 04:18:54.391021 | orchestrator | TASK [horizon 
: Update custom policy file name] ******************************** 2026-02-08 04:18:54.391033 | orchestrator | Sunday 08 February 2026 04:18:51 +0000 (0:00:00.155) 0:00:06.684 ******* 2026-02-08 04:18:54.391043 | orchestrator | skipping: [testbed-node-0] 2026-02-08 04:18:54.391054 | orchestrator | skipping: [testbed-node-1] 2026-02-08 04:18:54.391065 | orchestrator | skipping: [testbed-node-2] 2026-02-08 04:18:54.391076 | orchestrator | 2026-02-08 04:18:54.391086 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-02-08 04:18:54.391097 | orchestrator | Sunday 08 February 2026 04:18:51 +0000 (0:00:00.343) 0:00:07.027 ******* 2026-02-08 04:18:54.391108 | orchestrator | ok: [testbed-node-0] 2026-02-08 04:18:54.391119 | orchestrator | ok: [testbed-node-1] 2026-02-08 04:18:54.391129 | orchestrator | ok: [testbed-node-2] 2026-02-08 04:18:54.391140 | orchestrator | 2026-02-08 04:18:54.391151 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-02-08 04:18:54.391162 | orchestrator | Sunday 08 February 2026 04:18:51 +0000 (0:00:00.312) 0:00:07.340 ******* 2026-02-08 04:18:54.391173 | orchestrator | skipping: [testbed-node-0] 2026-02-08 04:18:54.391183 | orchestrator | 2026-02-08 04:18:54.391194 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-02-08 04:18:54.391205 | orchestrator | Sunday 08 February 2026 04:18:52 +0000 (0:00:00.139) 0:00:07.479 ******* 2026-02-08 04:18:54.391216 | orchestrator | skipping: [testbed-node-0] 2026-02-08 04:18:54.391226 | orchestrator | skipping: [testbed-node-1] 2026-02-08 04:18:54.391237 | orchestrator | skipping: [testbed-node-2] 2026-02-08 04:18:54.391248 | orchestrator | 2026-02-08 04:18:54.391259 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-02-08 04:18:54.391269 | orchestrator | Sunday 08 February 2026 04:18:52 +0000 (0:00:00.524) 
0:00:08.004 ******* 2026-02-08 04:18:54.391287 | orchestrator | ok: [testbed-node-0] 2026-02-08 04:18:54.391298 | orchestrator | ok: [testbed-node-1] 2026-02-08 04:18:54.391309 | orchestrator | ok: [testbed-node-2] 2026-02-08 04:18:54.391320 | orchestrator | 2026-02-08 04:18:54.391330 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-02-08 04:18:54.391341 | orchestrator | Sunday 08 February 2026 04:18:52 +0000 (0:00:00.335) 0:00:08.339 ******* 2026-02-08 04:18:54.391352 | orchestrator | skipping: [testbed-node-0] 2026-02-08 04:18:54.391363 | orchestrator | 2026-02-08 04:18:54.391373 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-02-08 04:18:54.391384 | orchestrator | Sunday 08 February 2026 04:18:53 +0000 (0:00:00.124) 0:00:08.464 ******* 2026-02-08 04:18:54.391395 | orchestrator | skipping: [testbed-node-0] 2026-02-08 04:18:54.391405 | orchestrator | skipping: [testbed-node-1] 2026-02-08 04:18:54.391416 | orchestrator | skipping: [testbed-node-2] 2026-02-08 04:18:54.391427 | orchestrator | 2026-02-08 04:18:54.391443 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-02-08 04:18:54.391454 | orchestrator | Sunday 08 February 2026 04:18:53 +0000 (0:00:00.303) 0:00:08.768 ******* 2026-02-08 04:18:54.391465 | orchestrator | ok: [testbed-node-0] 2026-02-08 04:18:54.391475 | orchestrator | ok: [testbed-node-1] 2026-02-08 04:18:54.391486 | orchestrator | ok: [testbed-node-2] 2026-02-08 04:18:54.391512 | orchestrator | 2026-02-08 04:18:54.391524 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-02-08 04:18:54.391535 | orchestrator | Sunday 08 February 2026 04:18:53 +0000 (0:00:00.354) 0:00:09.122 ******* 2026-02-08 04:18:54.391546 | orchestrator | skipping: [testbed-node-0] 2026-02-08 04:18:54.391566 | orchestrator | 2026-02-08 04:18:54.391577 | orchestrator | 
TASK [horizon : Update custom policy file name] ******************************** 2026-02-08 04:18:54.391588 | orchestrator | Sunday 08 February 2026 04:18:54 +0000 (0:00:00.341) 0:00:09.464 ******* 2026-02-08 04:18:54.391599 | orchestrator | skipping: [testbed-node-0] 2026-02-08 04:18:54.391610 | orchestrator | skipping: [testbed-node-1] 2026-02-08 04:18:54.391621 | orchestrator | skipping: [testbed-node-2] 2026-02-08 04:18:54.391631 | orchestrator | 2026-02-08 04:18:54.391642 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-02-08 04:18:54.391661 | orchestrator | Sunday 08 February 2026 04:18:54 +0000 (0:00:00.310) 0:00:09.775 ******* 2026-02-08 04:19:08.699310 | orchestrator | ok: [testbed-node-0] 2026-02-08 04:19:08.699438 | orchestrator | ok: [testbed-node-1] 2026-02-08 04:19:08.699460 | orchestrator | ok: [testbed-node-2] 2026-02-08 04:19:08.699476 | orchestrator | 2026-02-08 04:19:08.699493 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-02-08 04:19:08.699509 | orchestrator | Sunday 08 February 2026 04:18:54 +0000 (0:00:00.350) 0:00:10.125 ******* 2026-02-08 04:19:08.699523 | orchestrator | skipping: [testbed-node-0] 2026-02-08 04:19:08.699538 | orchestrator | 2026-02-08 04:19:08.699551 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-02-08 04:19:08.699565 | orchestrator | Sunday 08 February 2026 04:18:54 +0000 (0:00:00.142) 0:00:10.268 ******* 2026-02-08 04:19:08.699578 | orchestrator | skipping: [testbed-node-0] 2026-02-08 04:19:08.699592 | orchestrator | skipping: [testbed-node-1] 2026-02-08 04:19:08.699606 | orchestrator | skipping: [testbed-node-2] 2026-02-08 04:19:08.699620 | orchestrator | 2026-02-08 04:19:08.699635 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-02-08 04:19:08.699649 | orchestrator | Sunday 08 February 2026 04:18:55 +0000 
(0:00:00.313) 0:00:10.581 ******* 2026-02-08 04:19:08.699663 | orchestrator | ok: [testbed-node-0] 2026-02-08 04:19:08.699678 | orchestrator | ok: [testbed-node-1] 2026-02-08 04:19:08.699690 | orchestrator | ok: [testbed-node-2] 2026-02-08 04:19:08.699707 | orchestrator | 2026-02-08 04:19:08.699723 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-02-08 04:19:08.699740 | orchestrator | Sunday 08 February 2026 04:18:55 +0000 (0:00:00.547) 0:00:11.129 ******* 2026-02-08 04:19:08.699756 | orchestrator | skipping: [testbed-node-0] 2026-02-08 04:19:08.699800 | orchestrator | 2026-02-08 04:19:08.699813 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-02-08 04:19:08.699824 | orchestrator | Sunday 08 February 2026 04:18:55 +0000 (0:00:00.132) 0:00:11.261 ******* 2026-02-08 04:19:08.699834 | orchestrator | skipping: [testbed-node-0] 2026-02-08 04:19:08.699844 | orchestrator | skipping: [testbed-node-1] 2026-02-08 04:19:08.699854 | orchestrator | skipping: [testbed-node-2] 2026-02-08 04:19:08.699865 | orchestrator | 2026-02-08 04:19:08.699875 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-02-08 04:19:08.699885 | orchestrator | Sunday 08 February 2026 04:18:56 +0000 (0:00:00.359) 0:00:11.621 ******* 2026-02-08 04:19:08.699895 | orchestrator | ok: [testbed-node-0] 2026-02-08 04:19:08.699906 | orchestrator | ok: [testbed-node-1] 2026-02-08 04:19:08.699916 | orchestrator | ok: [testbed-node-2] 2026-02-08 04:19:08.699927 | orchestrator | 2026-02-08 04:19:08.699937 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-02-08 04:19:08.699948 | orchestrator | Sunday 08 February 2026 04:18:56 +0000 (0:00:00.326) 0:00:11.947 ******* 2026-02-08 04:19:08.699956 | orchestrator | skipping: [testbed-node-0] 2026-02-08 04:19:08.699965 | orchestrator | 2026-02-08 04:19:08.699973 | 
orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-02-08 04:19:08.699982 | orchestrator | Sunday 08 February 2026 04:18:56 +0000 (0:00:00.137) 0:00:12.084 ******* 2026-02-08 04:19:08.700017 | orchestrator | skipping: [testbed-node-0] 2026-02-08 04:19:08.700033 | orchestrator | skipping: [testbed-node-1] 2026-02-08 04:19:08.700047 | orchestrator | skipping: [testbed-node-2] 2026-02-08 04:19:08.700062 | orchestrator | 2026-02-08 04:19:08.700076 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-02-08 04:19:08.700090 | orchestrator | Sunday 08 February 2026 04:18:57 +0000 (0:00:00.533) 0:00:12.618 ******* 2026-02-08 04:19:08.700105 | orchestrator | ok: [testbed-node-0] 2026-02-08 04:19:08.700119 | orchestrator | ok: [testbed-node-1] 2026-02-08 04:19:08.700133 | orchestrator | ok: [testbed-node-2] 2026-02-08 04:19:08.700145 | orchestrator | 2026-02-08 04:19:08.700159 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-02-08 04:19:08.700174 | orchestrator | Sunday 08 February 2026 04:18:57 +0000 (0:00:00.340) 0:00:12.959 ******* 2026-02-08 04:19:08.700189 | orchestrator | skipping: [testbed-node-0] 2026-02-08 04:19:08.700201 | orchestrator | 2026-02-08 04:19:08.700213 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-02-08 04:19:08.700227 | orchestrator | Sunday 08 February 2026 04:18:57 +0000 (0:00:00.138) 0:00:13.097 ******* 2026-02-08 04:19:08.700242 | orchestrator | skipping: [testbed-node-0] 2026-02-08 04:19:08.700257 | orchestrator | skipping: [testbed-node-1] 2026-02-08 04:19:08.700273 | orchestrator | skipping: [testbed-node-2] 2026-02-08 04:19:08.700350 | orchestrator | 2026-02-08 04:19:08.700361 | orchestrator | TASK [horizon : Copying over config.json files for services] ******************* 2026-02-08 04:19:08.700370 | orchestrator | Sunday 08 February 2026 
04:18:58 +0000 (0:00:00.299) 0:00:13.397 *******
2026-02-08 04:19:08.700378 | orchestrator | changed: [testbed-node-2]
2026-02-08 04:19:08.700387 | orchestrator | changed: [testbed-node-1]
2026-02-08 04:19:08.700396 | orchestrator | changed: [testbed-node-0]
2026-02-08 04:19:08.700405 | orchestrator |
2026-02-08 04:19:08.700414 | orchestrator | TASK [horizon : Copying over horizon.conf] *************************************
2026-02-08 04:19:08.700423 | orchestrator | Sunday 08 February 2026 04:18:59 +0000 (0:00:01.855) 0:00:15.252 *******
2026-02-08 04:19:08.700447 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/horizon.conf.j2)
2026-02-08 04:19:08.700457 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/horizon.conf.j2)
2026-02-08 04:19:08.700466 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/horizon.conf.j2)
2026-02-08 04:19:08.700475 | orchestrator |
2026-02-08 04:19:08.700484 | orchestrator | TASK [horizon : Copying over kolla-settings.py] ********************************
2026-02-08 04:19:08.700493 | orchestrator | Sunday 08 February 2026 04:19:01 +0000 (0:00:01.886) 0:00:17.139 *******
2026-02-08 04:19:08.700514 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2)
2026-02-08 04:19:08.700525 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2)
2026-02-08 04:19:08.700533 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2)
2026-02-08 04:19:08.700542 | orchestrator |
2026-02-08 04:19:08.700551 | orchestrator | TASK [horizon : Copying over custom-settings.py] *******************************
2026-02-08 04:19:08.700581 | orchestrator | Sunday 08 February 2026 04:19:03 +0000 (0:00:01.858) 0:00:18.997 *******
2026-02-08 04:19:08.700591 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2)
2026-02-08 04:19:08.700599 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2)
2026-02-08 04:19:08.700608 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2)
2026-02-08 04:19:08.700617 | orchestrator |
2026-02-08 04:19:08.700625 | orchestrator | TASK [horizon : Copying over existing policy file] *****************************
2026-02-08 04:19:08.700634 | orchestrator | Sunday 08 February 2026 04:19:05 +0000 (0:00:01.572) 0:00:20.570 *******
2026-02-08 04:19:08.700642 | orchestrator | skipping: [testbed-node-0]
2026-02-08 04:19:08.700651 | orchestrator | skipping: [testbed-node-1]
2026-02-08 04:19:08.700660 | orchestrator | skipping: [testbed-node-2]
2026-02-08 04:19:08.700668 | orchestrator |
2026-02-08 04:19:08.700677 | orchestrator | TASK [horizon : Copying over custom themes] ************************************
2026-02-08 04:19:08.700685 | orchestrator | Sunday 08 February 2026 04:19:05 +0000 (0:00:00.556) 0:00:21.127 *******
2026-02-08 04:19:08.700694 | orchestrator | skipping: [testbed-node-0]
2026-02-08 04:19:08.700702 | orchestrator | skipping: [testbed-node-1]
2026-02-08 04:19:08.700711 | orchestrator | skipping: [testbed-node-2]
2026-02-08 04:19:08.700720 | orchestrator |
2026-02-08 04:19:08.700729 | orchestrator | TASK [horizon : include_tasks] *************************************************
2026-02-08 04:19:08.700737 | orchestrator | Sunday 08 February 2026 04:19:06 +0000 (0:00:00.319) 0:00:21.446 *******
2026-02-08 04:19:08.700746 | orchestrator | included: /ansible/roles/horizon/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-08 04:19:08.700757 | orchestrator |
2026-02-08 04:19:08.700772 | orchestrator | TASK [service-cert-copy : horizon | Copying over extra CA certificates] ********
2026-02-08 04:19:08.700787 | orchestrator |
Sunday 08 February 2026 04:19:06 +0000 (0:00:00.654) 0:00:22.101 ******* 2026-02-08 04:19:08.700817 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': 
True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-02-08 04:19:08.700860 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend 
acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-02-08 04:19:09.357427 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg 
^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-02-08 04:19:09.357542 | orchestrator | 2026-02-08 04:19:09.357558 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS certificate] *** 2026-02-08 04:19:09.357570 | orchestrator | Sunday 08 February 2026 04:19:08 +0000 (0:00:01.974) 0:00:24.076 ******* 2026-02-08 04:19:09.357598 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-02-08 04:19:09.357612 | orchestrator | skipping: [testbed-node-0] 2026-02-08 04:19:09.357630 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': 
['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-02-08 04:19:09.357648 | orchestrator | skipping: [testbed-node-1] 2026-02-08 04:19:09.357667 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 
'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-02-08 04:19:11.928435 | orchestrator | skipping: [testbed-node-2] 2026-02-08 04:19:11.928566 | orchestrator | 2026-02-08 04:19:11.928584 | orchestrator | TASK [service-cert-copy : horizon | 
Copying over backend internal TLS key] ***** 2026-02-08 04:19:11.928597 | orchestrator | Sunday 08 February 2026 04:19:09 +0000 (0:00:00.662) 0:00:24.738 ******* 2026-02-08 04:19:11.928629 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 
'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-02-08 04:19:11.928645 | orchestrator | skipping: [testbed-node-0] 2026-02-08 04:19:11.928680 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': 
{'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-02-08 04:19:11.928704 | orchestrator | skipping: [testbed-node-1] 2026-02-08 04:19:11.928724 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance 
roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-02-08 04:19:11.928736 | orchestrator | skipping: [testbed-node-2] 2026-02-08 04:19:11.928747 | orchestrator | 2026-02-08 04:19:11.928758 | orchestrator | TASK [horizon : Deploy horizon container] ************************************** 2026-02-08 04:19:11.928769 | orchestrator | Sunday 08 February 2026 04:19:10 +0000 (0:00:00.878) 0:00:25.617 ******* 2026-02-08 04:19:11.928790 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': 
['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-02-08 04:19:57.682257 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 
'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-02-08 04:19:57.682370 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': 
{'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-02-08 04:19:57.682391 | orchestrator | 
2026-02-08 04:19:57.682396 | orchestrator | TASK [horizon : include_tasks] *************************************************
2026-02-08 04:19:57.682402 | orchestrator | Sunday 08 February 2026 04:19:11 +0000 (0:00:01.695) 0:00:27.313 *******
2026-02-08 04:19:57.682407 | orchestrator | skipping: [testbed-node-0]
2026-02-08 04:19:57.682412 | orchestrator | skipping: [testbed-node-1]
2026-02-08 04:19:57.682416 | orchestrator | skipping: [testbed-node-2]
2026-02-08 04:19:57.682421 | orchestrator |
2026-02-08 04:19:57.682425 | orchestrator | TASK [horizon : include_tasks] *************************************************
2026-02-08 04:19:57.682429 | orchestrator | Sunday 08 February 2026 04:19:12 +0000 (0:00:00.403) 0:00:27.717 *******
2026-02-08 04:19:57.682434 | orchestrator | included: /ansible/roles/horizon/tasks/bootstrap.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-08 04:19:57.682438 | orchestrator |
2026-02-08 04:19:57.682442 | orchestrator | TASK [horizon : Creating Horizon database] *************************************
2026-02-08 04:19:57.682447 | orchestrator | Sunday 08 February 2026 04:19:12 +0000 (0:00:00.560) 0:00:28.278 *******
2026-02-08 04:19:57.682451 | orchestrator | changed: [testbed-node-0]
2026-02-08 04:19:57.682455 | orchestrator |
2026-02-08 04:19:57.682459 | orchestrator | TASK [horizon : Creating Horizon database user and setting permissions] ********
2026-02-08 04:19:57.682464 | orchestrator | Sunday 08 February 2026 04:19:15 +0000 (0:00:02.194) 0:00:30.472 *******
2026-02-08 04:19:57.682468 | orchestrator | changed: [testbed-node-0]
2026-02-08 04:19:57.682472 | orchestrator |
2026-02-08 04:19:57.682476 | orchestrator | TASK [horizon : Running Horizon bootstrap container] ***************************
2026-02-08 04:19:57.682480 | orchestrator | Sunday 08 February 2026 04:19:17 +0000 (0:00:02.579) 0:00:33.052 *******
2026-02-08 04:19:57.682484 | orchestrator | changed: [testbed-node-0]
2026-02-08 04:19:57.682488 | orchestrator |
2026-02-08 04:19:57.682492 | orchestrator | TASK [horizon : Flush handlers] ************************************************
2026-02-08 04:19:57.682497 | orchestrator | Sunday 08 February 2026 04:19:33 +0000 (0:00:15.469) 0:00:48.521 *******
2026-02-08 04:19:57.682501 | orchestrator |
2026-02-08 04:19:57.682505 | orchestrator | TASK [horizon : Flush handlers] ************************************************
2026-02-08 04:19:57.682509 | orchestrator | Sunday 08 February 2026 04:19:33 +0000 (0:00:00.070) 0:00:48.592 *******
2026-02-08 04:19:57.682513 | orchestrator |
2026-02-08 04:19:57.682518 | orchestrator | TASK [horizon : Flush handlers] ************************************************
2026-02-08 04:19:57.682522 | orchestrator | Sunday 08 February 2026 04:19:33 +0000 (0:00:00.066) 0:00:48.659 *******
2026-02-08 04:19:57.682530 | orchestrator |
2026-02-08 04:19:57.682535 | orchestrator | RUNNING HANDLER [horizon : Restart horizon container] **************************
2026-02-08 04:19:57.682539 | orchestrator | Sunday 08 February 2026 04:19:33 +0000 (0:00:00.081) 0:00:48.740 *******
2026-02-08 04:19:57.682543 | orchestrator | changed: [testbed-node-0]
2026-02-08 04:19:57.682547 | orchestrator | changed: [testbed-node-2]
2026-02-08 04:19:57.682551 | orchestrator | changed: [testbed-node-1]
2026-02-08 04:19:57.682555 | orchestrator |
2026-02-08 04:19:57.682559 | orchestrator | PLAY RECAP *********************************************************************
2026-02-08 04:19:57.682565 | orchestrator | testbed-node-0 : ok=37  changed=11  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0
2026-02-08 04:19:57.682570 | orchestrator | testbed-node-1 : ok=34  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=0
2026-02-08 04:19:57.682575 | orchestrator | testbed-node-2 : ok=34  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=0
2026-02-08 04:19:57.682579 | orchestrator |
2026-02-08 04:19:57.682583 | orchestrator |
2026-02-08 04:19:57.682587
| orchestrator | TASKS RECAP ******************************************************************** 2026-02-08 04:19:57.682591 | orchestrator | Sunday 08 February 2026 04:19:57 +0000 (0:00:24.300) 0:01:13.041 ******* 2026-02-08 04:19:57.682595 | orchestrator | =============================================================================== 2026-02-08 04:19:57.682599 | orchestrator | horizon : Restart horizon container ------------------------------------ 24.30s 2026-02-08 04:19:57.682603 | orchestrator | horizon : Running Horizon bootstrap container -------------------------- 15.47s 2026-02-08 04:19:57.682607 | orchestrator | horizon : Creating Horizon database user and setting permissions -------- 2.58s 2026-02-08 04:19:57.682611 | orchestrator | horizon : Creating Horizon database ------------------------------------- 2.19s 2026-02-08 04:19:57.682615 | orchestrator | service-cert-copy : horizon | Copying over extra CA certificates -------- 1.97s 2026-02-08 04:19:57.682619 | orchestrator | horizon : Copying over horizon.conf ------------------------------------- 1.89s 2026-02-08 04:19:57.682623 | orchestrator | horizon : Copying over kolla-settings.py -------------------------------- 1.86s 2026-02-08 04:19:57.682627 | orchestrator | horizon : Copying over config.json files for services ------------------- 1.86s 2026-02-08 04:19:57.682631 | orchestrator | horizon : Deploy horizon container -------------------------------------- 1.70s 2026-02-08 04:19:57.682635 | orchestrator | horizon : Copying over custom-settings.py ------------------------------- 1.57s 2026-02-08 04:19:57.682640 | orchestrator | horizon : Ensuring config directories exist ----------------------------- 1.16s 2026-02-08 04:19:57.682644 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS key ----- 0.88s 2026-02-08 04:19:57.682651 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.84s 2026-02-08 04:19:57.682658 | orchestrator | 
service-cert-copy : horizon | Copying over backend internal TLS certificate --- 0.66s 2026-02-08 04:19:58.144424 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.65s 2026-02-08 04:19:58.144508 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.56s 2026-02-08 04:19:58.144518 | orchestrator | horizon : Copying over existing policy file ----------------------------- 0.56s 2026-02-08 04:19:58.144525 | orchestrator | horizon : Update policy file name --------------------------------------- 0.55s 2026-02-08 04:19:58.144532 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.54s 2026-02-08 04:19:58.144538 | orchestrator | horizon : Update custom policy file name -------------------------------- 0.53s 2026-02-08 04:20:00.821451 | orchestrator | 2026-02-08 04:20:00 | INFO  | Task e92c4b51-f1a9-45da-bdd0-88146b6a466f (skyline) was prepared for execution. 2026-02-08 04:20:00.821542 | orchestrator | 2026-02-08 04:20:00 | INFO  | It takes a moment until task e92c4b51-f1a9-45da-bdd0-88146b6a466f (skyline) has been started and output is visible here. 
2026-02-08 04:20:30.470862 | orchestrator |
2026-02-08 04:20:30.470952 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-02-08 04:20:30.470967 | orchestrator |
2026-02-08 04:20:30.470976 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-02-08 04:20:30.470986 | orchestrator | Sunday 08 February 2026 04:20:05 +0000 (0:00:00.283) 0:00:00.283 *******
2026-02-08 04:20:30.471040 | orchestrator | ok: [testbed-node-0]
2026-02-08 04:20:30.471051 | orchestrator | ok: [testbed-node-1]
2026-02-08 04:20:30.471059 | orchestrator | ok: [testbed-node-2]
2026-02-08 04:20:30.471069 | orchestrator |
2026-02-08 04:20:30.471078 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-02-08 04:20:30.471088 | orchestrator | Sunday 08 February 2026 04:20:05 +0000 (0:00:00.375) 0:00:00.659 *******
2026-02-08 04:20:30.471097 | orchestrator | ok: [testbed-node-0] => (item=enable_skyline_True)
2026-02-08 04:20:30.471107 | orchestrator | ok: [testbed-node-1] => (item=enable_skyline_True)
2026-02-08 04:20:30.471116 | orchestrator | ok: [testbed-node-2] => (item=enable_skyline_True)
2026-02-08 04:20:30.471125 | orchestrator |
2026-02-08 04:20:30.471142 | orchestrator | PLAY [Apply role skyline] ******************************************************
2026-02-08 04:20:30.471151 | orchestrator |
2026-02-08 04:20:30.471159 | orchestrator | TASK [skyline : include_tasks] *************************************************
2026-02-08 04:20:30.471167 | orchestrator | Sunday 08 February 2026 04:20:06 +0000 (0:00:00.445) 0:00:01.104 *******
2026-02-08 04:20:30.471176 | orchestrator | included: /ansible/roles/skyline/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-08 04:20:30.471185 | orchestrator |
2026-02-08 04:20:30.471194 | orchestrator | TASK [service-ks-register : skyline | Creating services] ***********************
2026-02-08 04:20:30.471202 | orchestrator | Sunday 08 February 2026 04:20:06 +0000 (0:00:00.514) 0:00:01.618 *******
2026-02-08 04:20:30.471211 | orchestrator | changed: [testbed-node-0] => (item=skyline (panel))
2026-02-08 04:20:30.471219 | orchestrator |
2026-02-08 04:20:30.471228 | orchestrator | TASK [service-ks-register : skyline | Creating endpoints] **********************
2026-02-08 04:20:30.471237 | orchestrator | Sunday 08 February 2026 04:20:09 +0000 (0:00:03.224) 0:00:04.843 *******
2026-02-08 04:20:30.471246 | orchestrator | changed: [testbed-node-0] => (item=skyline -> https://api-int.testbed.osism.xyz:9998 -> internal)
2026-02-08 04:20:30.471255 | orchestrator | changed: [testbed-node-0] => (item=skyline -> https://api.testbed.osism.xyz:9998 -> public)
2026-02-08 04:20:30.471264 | orchestrator |
2026-02-08 04:20:30.471272 | orchestrator | TASK [service-ks-register : skyline | Creating projects] ***********************
2026-02-08 04:20:30.471281 | orchestrator | Sunday 08 February 2026 04:20:15 +0000 (0:00:05.498) 0:00:10.341 *******
2026-02-08 04:20:30.471290 | orchestrator | ok: [testbed-node-0] => (item=service)
2026-02-08 04:20:30.471300 | orchestrator |
2026-02-08 04:20:30.471309 | orchestrator | TASK [service-ks-register : skyline | Creating users] **************************
2026-02-08 04:20:30.471318 | orchestrator | Sunday 08 February 2026 04:20:18 +0000 (0:00:03.079) 0:00:13.420 *******
2026-02-08 04:20:30.471327 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-02-08 04:20:30.471336 | orchestrator | changed: [testbed-node-0] => (item=skyline -> service)
2026-02-08 04:20:30.471345 | orchestrator |
2026-02-08 04:20:30.471354 | orchestrator | TASK [service-ks-register : skyline | Creating roles] **************************
2026-02-08 04:20:30.471363 | orchestrator | Sunday 08 February 2026 04:20:22 +0000 (0:00:03.899) 0:00:17.320 *******
2026-02-08 04:20:30.471374 | orchestrator | ok: [testbed-node-0] => (item=admin)
2026-02-08 04:20:30.471383 | orchestrator | 2026-02-08 04:20:30.471392 | orchestrator | TASK [service-ks-register : skyline | Granting user roles] ********************* 2026-02-08 04:20:30.471398 | orchestrator | Sunday 08 February 2026 04:20:25 +0000 (0:00:03.068) 0:00:20.388 ******* 2026-02-08 04:20:30.471404 | orchestrator | changed: [testbed-node-0] => (item=skyline -> service -> admin) 2026-02-08 04:20:30.471411 | orchestrator | 2026-02-08 04:20:30.471417 | orchestrator | TASK [skyline : Ensuring config directories exist] ***************************** 2026-02-08 04:20:30.471444 | orchestrator | Sunday 08 February 2026 04:20:29 +0000 (0:00:03.720) 0:00:24.109 ******* 2026-02-08 04:20:30.471466 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-02-08 04:20:30.471492 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': 
['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-02-08 04:20:30.471500 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-02-08 04:20:30.471507 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': 
['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-02-08 04:20:30.471518 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-02-08 04:20:30.471543 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-02-08 04:20:34.289714 | orchestrator | 2026-02-08 04:20:34.289813 | orchestrator | TASK [skyline : include_tasks] ************************************************* 2026-02-08 04:20:34.289827 | orchestrator | Sunday 08 February 2026 04:20:30 +0000 (0:00:01.321) 0:00:25.431 ******* 2026-02-08 04:20:34.289837 | orchestrator | included: /ansible/roles/skyline/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-08 04:20:34.289845 | orchestrator | 2026-02-08 04:20:34.289853 | orchestrator | TASK [service-cert-copy : skyline | Copying over extra CA certificates] ******** 2026-02-08 04:20:34.289861 | orchestrator | Sunday 08 February 2026 04:20:31 +0000 (0:00:00.793) 0:00:26.224 ******* 2026-02-08 04:20:34.289872 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': 
{'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-02-08 04:20:34.289883 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-02-08 04:20:34.289924 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 
'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-02-08 04:20:34.289950 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-02-08 04:20:34.289960 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': 
'9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-02-08 04:20:34.289968 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-02-08 04:20:34.289982 | orchestrator | 2026-02-08 04:20:34.290110 | orchestrator | TASK [service-cert-copy : skyline | Copying over backend internal TLS certificate] *** 2026-02-08 04:20:34.290132 | orchestrator | Sunday 08 February 2026 04:20:33 +0000 (0:00:02.406) 0:00:28.630 ******* 2026-02-08 04:20:34.290158 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-02-08 04:20:34.290174 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-02-08 04:20:34.290189 | orchestrator | skipping: [testbed-node-0] 2026-02-08 04:20:34.290217 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-02-08 04:20:35.698703 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-02-08 04:20:35.698841 | orchestrator | skipping: [testbed-node-1] 2026-02-08 04:20:35.698866 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-02-08 04:20:35.698900 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-02-08 04:20:35.698915 | orchestrator | skipping: [testbed-node-2] 2026-02-08 04:20:35.698929 | orchestrator | 2026-02-08 04:20:35.698942 | orchestrator | TASK [service-cert-copy : skyline | Copying over backend internal TLS key] ***** 2026-02-08 04:20:35.698957 | orchestrator | Sunday 08 February 2026 04:20:34 +0000 (0:00:00.628) 0:00:29.259 ******* 2026-02-08 04:20:35.698971 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 
'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-02-08 04:20:35.699062 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-02-08 04:20:35.699094 | orchestrator | skipping: [testbed-node-0] 2026-02-08 04:20:35.699109 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 
'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})
2026-02-08 04:20:35.699130 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})
2026-02-08 04:20:35.699145 | orchestrator | skipping: [testbed-node-1]
2026-02-08 04:20:35.699160 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})
2026-02-08 04:20:35.699184 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})
2026-02-08 04:20:44.226221 | orchestrator | skipping: [testbed-node-2]
2026-02-08 04:20:44.226393 | orchestrator |
2026-02-08 04:20:44.226419 | orchestrator | TASK [skyline : Copying over skyline.yaml files for services] ******************
2026-02-08 04:20:44.226432 | orchestrator | Sunday 08 February 2026 04:20:35 +0000 (0:00:01.401) 0:00:30.661 *******
2026-02-08 04:20:44.226446 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})
2026-02-08 04:20:44.226476 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})
2026-02-08 04:20:44.226487 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})
2026-02-08 04:20:44.226498 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})
2026-02-08 04:20:44.226551 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})
2026-02-08 04:20:44.226568 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999',
'listen_port': '9999', 'tls_backend': 'no'}}}})
2026-02-08 04:20:44.226578 | orchestrator |
2026-02-08 04:20:44.226588 | orchestrator | TASK [skyline : Copying over gunicorn.py files for services] *******************
2026-02-08 04:20:44.226598 | orchestrator | Sunday 08 February 2026 04:20:38 +0000 (0:00:02.546) 0:00:33.207 *******
2026-02-08 04:20:44.226608 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/skyline/templates/gunicorn.py.j2)
2026-02-08 04:20:44.226618 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/skyline/templates/gunicorn.py.j2)
2026-02-08 04:20:44.226627 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/skyline/templates/gunicorn.py.j2)
2026-02-08 04:20:44.226637 | orchestrator |
2026-02-08 04:20:44.226646 | orchestrator | TASK [skyline : Copying over nginx.conf files for services] ********************
2026-02-08 04:20:44.226656 | orchestrator | Sunday 08 February 2026 04:20:39 +0000 (0:00:01.623) 0:00:34.830 *******
2026-02-08 04:20:44.226665 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/skyline/templates/nginx.conf.j2)
2026-02-08 04:20:44.226675 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/skyline/templates/nginx.conf.j2)
2026-02-08 04:20:44.226684 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/skyline/templates/nginx.conf.j2)
2026-02-08 04:20:44.226693 | orchestrator |
2026-02-08 04:20:44.226703 | orchestrator | TASK [skyline : Copying over config.json files for services] *******************
2026-02-08 04:20:44.226712 | orchestrator | Sunday 08 February 2026 04:20:41 +0000 (0:00:02.096) 0:00:36.927 *******
2026-02-08 04:20:44.226722 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes':
['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})
2026-02-08 04:20:44.226749 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})
2026-02-08 04:20:46.261248 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})
2026-02-08 04:20:46.261364 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})
2026-02-08 04:20:46.261381 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})
2026-02-08 04:20:46.261411 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})
2026-02-08 04:20:46.261425 | orchestrator |
2026-02-08 04:20:46.261484 | orchestrator | TASK [skyline : Copying over custom logos] *************************************
2026-02-08 04:20:46.261505 | orchestrator | Sunday 08 February 2026 04:20:44 +0000 (0:00:02.266) 0:00:39.194 *******
2026-02-08 04:20:46.261524 | orchestrator | skipping: [testbed-node-0]
2026-02-08 04:20:46.261542 | orchestrator | skipping: [testbed-node-1]
2026-02-08 04:20:46.261558 | orchestrator | skipping: [testbed-node-2]
2026-02-08 04:20:46.261576 | orchestrator |
2026-02-08 04:20:46.261619 | orchestrator | TASK [skyline : Check skyline container] ***************************************
2026-02-08 04:20:46.261636 | orchestrator | Sunday 08 February 2026 04:20:44 +0000 (0:00:00.302) 0:00:39.497 *******
2026-02-08 04:20:46.261663 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})
2026-02-08 04:20:46.261675 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})
2026-02-08 04:20:46.261695 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})
2026-02-08 04:20:46.261705 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})
2026-02-08 04:20:46.261725 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})
2026-02-08 04:21:18.781697 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port':
'9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})
2026-02-08 04:21:18.781833 | orchestrator |
2026-02-08 04:21:18.781863 | orchestrator | TASK [skyline : Creating Skyline database] *************************************
2026-02-08 04:21:18.781884 | orchestrator | Sunday 08 February 2026 04:20:46 +0000 (0:00:01.732) 0:00:41.229 *******
2026-02-08 04:21:18.781934 | orchestrator | changed: [testbed-node-0]
2026-02-08 04:21:18.781955 | orchestrator |
2026-02-08 04:21:18.781973 | orchestrator | TASK [skyline : Creating Skyline database user and setting permissions] ********
2026-02-08 04:21:18.782103 | orchestrator | Sunday 08 February 2026 04:20:48 +0000 (0:00:02.207) 0:00:43.437 *******
2026-02-08 04:21:18.782130 | orchestrator | changed: [testbed-node-0]
2026-02-08 04:21:18.782149 | orchestrator |
2026-02-08 04:21:18.782168 | orchestrator | TASK [skyline : Running Skyline bootstrap container] ***************************
2026-02-08 04:21:18.782186 | orchestrator | Sunday 08 February 2026 04:20:50 +0000 (0:00:02.202) 0:00:45.640 *******
2026-02-08 04:21:18.782204 | orchestrator | changed: [testbed-node-0]
2026-02-08 04:21:18.782216 | orchestrator |
2026-02-08 04:21:18.782229 | orchestrator | TASK [skyline : Flush handlers] ************************************************
2026-02-08 04:21:18.782248 | orchestrator | Sunday 08 February 2026 04:20:58 +0000 (0:00:07.639) 0:00:53.279 *******
2026-02-08 04:21:18.782266 | orchestrator |
2026-02-08 04:21:18.782283 | orchestrator | TASK [skyline : Flush handlers] ************************************************
2026-02-08 04:21:18.782302 | orchestrator | Sunday 08 February 2026 04:20:58 +0000 (0:00:00.072) 0:00:53.352 *******
2026-02-08 04:21:18.782322 | orchestrator |
2026-02-08 04:21:18.782340 | orchestrator | TASK [skyline : Flush handlers] ************************************************
2026-02-08 04:21:18.782359 | orchestrator | Sunday 08 February 2026 04:20:58 +0000 (0:00:00.070) 0:00:53.422 *******
2026-02-08 04:21:18.782374 | orchestrator |
2026-02-08 04:21:18.782384 | orchestrator | RUNNING HANDLER [skyline : Restart skyline-apiserver container] ****************
2026-02-08 04:21:18.782395 | orchestrator | Sunday 08 February 2026 04:20:58 +0000 (0:00:00.072) 0:00:53.494 *******
2026-02-08 04:21:18.782406 | orchestrator | changed: [testbed-node-0]
2026-02-08 04:21:18.782417 | orchestrator | changed: [testbed-node-1]
2026-02-08 04:21:18.782428 | orchestrator | changed: [testbed-node-2]
2026-02-08 04:21:18.782438 | orchestrator |
2026-02-08 04:21:18.782449 | orchestrator | RUNNING HANDLER [skyline : Restart skyline-console container] ******************
2026-02-08 04:21:18.782460 | orchestrator | Sunday 08 February 2026 04:21:04 +0000 (0:00:06.069) 0:00:59.564 *******
2026-02-08 04:21:18.782471 | orchestrator | changed: [testbed-node-0]
2026-02-08 04:21:18.782490 | orchestrator | changed: [testbed-node-2]
2026-02-08 04:21:18.782519 | orchestrator | changed: [testbed-node-1]
2026-02-08 04:21:18.782537 | orchestrator |
2026-02-08 04:21:18.782556 | orchestrator | PLAY RECAP *********************************************************************
2026-02-08 04:21:18.782576 | orchestrator | testbed-node-0 : ok=22  changed=16  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2026-02-08 04:21:18.782596 | orchestrator | testbed-node-1 : ok=13  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2026-02-08 04:21:18.782612 | orchestrator | testbed-node-2 : ok=13  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2026-02-08 04:21:18.782629 | orchestrator |
2026-02-08 04:21:18.782647 | orchestrator |
2026-02-08 04:21:18.782664 | orchestrator | TASKS RECAP ********************************************************************
2026-02-08 04:21:18.782684 | orchestrator | Sunday 08 February 2026 04:21:18 +0000 (0:00:13.821) 0:01:13.386 *******
2026-02-08 04:21:18.782701 | orchestrator | ===============================================================================
2026-02-08 04:21:18.782717 | orchestrator | skyline : Restart skyline-console container ---------------------------- 13.82s
2026-02-08 04:21:18.782734 | orchestrator | skyline : Running Skyline bootstrap container --------------------------- 7.64s
2026-02-08 04:21:18.782750 | orchestrator | skyline : Restart skyline-apiserver container --------------------------- 6.07s
2026-02-08 04:21:18.782767 | orchestrator | service-ks-register : skyline | Creating endpoints ---------------------- 5.50s
2026-02-08 04:21:18.782783 | orchestrator | service-ks-register : skyline | Creating users -------------------------- 3.90s
2026-02-08 04:21:18.782800 | orchestrator | service-ks-register : skyline | Granting user roles --------------------- 3.72s
2026-02-08 04:21:18.782837 | orchestrator | service-ks-register : skyline | Creating services ----------------------- 3.22s
2026-02-08 04:21:18.782856 | orchestrator | service-ks-register : skyline | Creating projects ----------------------- 3.08s
2026-02-08 04:21:18.782900 | orchestrator | service-ks-register : skyline | Creating roles -------------------------- 3.07s
2026-02-08 04:21:18.782918 | orchestrator | skyline : Copying over skyline.yaml files for services ------------------ 2.55s
2026-02-08 04:21:18.782936 | orchestrator | service-cert-copy : skyline | Copying over extra CA certificates -------- 2.41s
2026-02-08 04:21:18.782964 | orchestrator | skyline : Copying over config.json files for services ------------------- 2.27s
2026-02-08 04:21:18.782984 | orchestrator | skyline : Creating Skyline database ------------------------------------- 2.21s
2026-02-08 04:21:18.783049 | orchestrator | skyline : Creating Skyline database user and setting permissions -------- 2.20s
2026-02-08 04:21:18.783067 | orchestrator | skyline : Copying over nginx.conf files for services -------------------- 2.10s
2026-02-08 04:21:18.783086 | orchestrator | skyline : Check skyline container --------------------------------------- 1.73s
2026-02-08 04:21:18.783103 | orchestrator | skyline : Copying over gunicorn.py files for services ------------------- 1.62s
2026-02-08 04:21:18.783122 | orchestrator | service-cert-copy : skyline | Copying over backend internal TLS key ----- 1.40s
2026-02-08 04:21:18.783141 | orchestrator | skyline : Ensuring config directories exist ----------------------------- 1.32s
2026-02-08 04:21:18.783159 | orchestrator | skyline : include_tasks ------------------------------------------------- 0.79s
2026-02-08 04:21:21.187610 | orchestrator | 2026-02-08 04:21:21 | INFO  | Task 9eefa1d8-79c8-42bc-953c-1a77802e2b56 (glance) was prepared for execution.
2026-02-08 04:21:21.187697 | orchestrator | 2026-02-08 04:21:21 | INFO  | It takes a moment until task 9eefa1d8-79c8-42bc-953c-1a77802e2b56 (glance) has been started and output is visible here.
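Each skyline service entry dumped in the play above carries an `haproxy` sub-dict (`mode`, `port`/`listen_port`, `external_fqdn`, `tls_backend`) that kolla-ansible later renders into HAProxy frontend/backend configuration. As a rough illustration of how one such entry maps onto an HAProxy-style stanza, a minimal sketch — the rendering function below is hypothetical, not kolla-ansible's actual template; only the input dict is copied from the `skyline_apiserver_external` entry in the log:

```python
# Illustrative only: flatten a kolla-style haproxy service dict (as dumped in
# the log above) into a minimal HAProxy-like "listen" stanza. The stanza
# format here is an assumption for demonstration purposes.
def render_haproxy(name: str, svc: dict) -> str:
    lines = [f"listen {name}"]
    lines.append(f"    mode {svc['mode']}")
    # The log shows only the external FQDN; the internal VIP is elided here.
    bind_host = svc.get("external_fqdn", "0.0.0.0")
    lines.append(f"    bind {bind_host}:{svc['listen_port']}")
    return "\n".join(lines)

# Input copied verbatim from the skyline_apiserver_external entry in the log.
svc = {
    "enabled": "yes",
    "mode": "http",
    "external": True,
    "external_fqdn": "api.testbed.osism.xyz",
    "port": "9998",
    "listen_port": "9998",
    "tls_backend": "no",
}
print(render_haproxy("skyline_apiserver_external", svc))
```

The real kolla-ansible template additionally handles TLS termination, backend member lists, and frontend/backend extras (visible in the glance entries further down as `frontend_http_extra` and `custom_member_list`), none of which are modeled in this sketch.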
2026-02-08 04:21:54.621822 | orchestrator |
2026-02-08 04:21:54.621948 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-02-08 04:21:54.621965 | orchestrator |
2026-02-08 04:21:54.621976 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-02-08 04:21:54.621989 | orchestrator | Sunday 08 February 2026 04:21:25 +0000 (0:00:00.275) 0:00:00.275 *******
2026-02-08 04:21:54.622181 | orchestrator | ok: [testbed-node-0]
2026-02-08 04:21:54.622202 | orchestrator | ok: [testbed-node-1]
2026-02-08 04:21:54.622218 | orchestrator | ok: [testbed-node-2]
2026-02-08 04:21:54.622235 | orchestrator |
2026-02-08 04:21:54.622249 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-02-08 04:21:54.622259 | orchestrator | Sunday 08 February 2026 04:21:25 +0000 (0:00:00.312) 0:00:00.587 *******
2026-02-08 04:21:54.622269 | orchestrator | ok: [testbed-node-0] => (item=enable_glance_True)
2026-02-08 04:21:54.622280 | orchestrator | ok: [testbed-node-1] => (item=enable_glance_True)
2026-02-08 04:21:54.622290 | orchestrator | ok: [testbed-node-2] => (item=enable_glance_True)
2026-02-08 04:21:54.622299 | orchestrator |
2026-02-08 04:21:54.622309 | orchestrator | PLAY [Apply role glance] *******************************************************
2026-02-08 04:21:54.622319 | orchestrator |
2026-02-08 04:21:54.622329 | orchestrator | TASK [glance : include_tasks] **************************************************
2026-02-08 04:21:54.622339 | orchestrator | Sunday 08 February 2026 04:21:26 +0000 (0:00:00.457) 0:00:01.044 *******
2026-02-08 04:21:54.622348 | orchestrator | included: /ansible/roles/glance/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-08 04:21:54.622359 | orchestrator |
2026-02-08 04:21:54.622369 | orchestrator | TASK [service-ks-register : glance | Creating services] ************************
2026-02-08 04:21:54.622379 | orchestrator | Sunday 08 February 2026 04:21:27 +0000 (0:00:00.597) 0:00:01.642 *******
2026-02-08 04:21:54.622388 | orchestrator | changed: [testbed-node-0] => (item=glance (image))
2026-02-08 04:21:54.622398 | orchestrator |
2026-02-08 04:21:54.622408 | orchestrator | TASK [service-ks-register : glance | Creating endpoints] ***********************
2026-02-08 04:21:54.622417 | orchestrator | Sunday 08 February 2026 04:21:30 +0000 (0:00:03.270) 0:00:04.912 *******
2026-02-08 04:21:54.622453 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api-int.testbed.osism.xyz:9292 -> internal)
2026-02-08 04:21:54.622464 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api.testbed.osism.xyz:9292 -> public)
2026-02-08 04:21:54.622474 | orchestrator |
2026-02-08 04:21:54.622483 | orchestrator | TASK [service-ks-register : glance | Creating projects] ************************
2026-02-08 04:21:54.622493 | orchestrator | Sunday 08 February 2026 04:21:36 +0000 (0:00:06.213) 0:00:11.126 *******
2026-02-08 04:21:54.622502 | orchestrator | ok: [testbed-node-0] => (item=service)
2026-02-08 04:21:54.622512 | orchestrator |
2026-02-08 04:21:54.622522 | orchestrator | TASK [service-ks-register : glance | Creating users] ***************************
2026-02-08 04:21:54.622531 | orchestrator | Sunday 08 February 2026 04:21:39 +0000 (0:00:03.036) 0:00:14.162 *******
2026-02-08 04:21:54.622541 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-02-08 04:21:54.622551 | orchestrator | changed: [testbed-node-0] => (item=glance -> service)
2026-02-08 04:21:54.622561 | orchestrator |
2026-02-08 04:21:54.622571 | orchestrator | TASK [service-ks-register : glance | Creating roles] ***************************
2026-02-08 04:21:54.622581 | orchestrator | Sunday 08 February 2026 04:21:43 +0000 (0:00:03.856) 0:00:18.018 *******
2026-02-08 04:21:54.622590 | orchestrator | ok: [testbed-node-0] => (item=admin)
2026-02-08 04:21:54.622600 | orchestrator |
2026-02-08 04:21:54.622610 | orchestrator | TASK [service-ks-register : glance | Granting user roles] **********************
2026-02-08 04:21:54.622619 | orchestrator | Sunday 08 February 2026 04:21:46 +0000 (0:00:03.157) 0:00:21.175 *******
2026-02-08 04:21:54.622629 | orchestrator | changed: [testbed-node-0] => (item=glance -> service -> admin)
2026-02-08 04:21:54.622639 | orchestrator |
2026-02-08 04:21:54.622648 | orchestrator | TASK [glance : Ensuring config directories exist] ******************************
2026-02-08 04:21:54.622658 | orchestrator | Sunday 08 February 2026 04:21:50 +0000 (0:00:03.721) 0:00:24.897 *******
2026-02-08 04:21:54.622712 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True,
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-02-08 04:21:54.622728 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server 
testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-02-08 04:21:54.622756 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-02-08 04:21:54.622774 | orchestrator | 2026-02-08 04:21:54.622790 | orchestrator | TASK [glance : include_tasks] 
**************************************************
2026-02-08 04:21:54.622807 | orchestrator | Sunday 08 February 2026 04:21:53 +0000 (0:00:03.534) 0:00:28.432 *******
2026-02-08 04:21:54.622823 | orchestrator | included: /ansible/roles/glance/tasks/external_ceph.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-08 04:21:54.622841 | orchestrator |
2026-02-08 04:21:54.622867 | orchestrator | TASK [glance : Ensuring glance service ceph config subdir exists] **************
2026-02-08 04:22:10.434888 | orchestrator | Sunday 08 February 2026 04:21:54 +0000 (0:00:00.766) 0:00:29.199 *******
2026-02-08 04:22:10.435060 | orchestrator | changed: [testbed-node-0]
2026-02-08 04:22:10.435080 | orchestrator | changed: [testbed-node-1]
2026-02-08 04:22:10.435091 | orchestrator | changed: [testbed-node-2]
2026-02-08 04:22:10.435102 | orchestrator |
2026-02-08 04:22:10.435114 | orchestrator | TASK [glance : Copy over multiple ceph configs for Glance] *********************
2026-02-08 04:22:10.435147 | orchestrator | Sunday 08 February 2026 04:21:58 +0000 (0:00:03.618) 0:00:32.817 *******
2026-02-08 04:22:10.435159 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True})
2026-02-08 04:22:10.435171 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True})
2026-02-08 04:22:10.435182 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True})
2026-02-08 04:22:10.435193 | orchestrator |
2026-02-08 04:22:10.435204 | orchestrator | TASK [glance : Copy over ceph Glance keyrings] *********************************
2026-02-08 04:22:10.435227 | orchestrator | Sunday 08 February 2026 04:21:59 +0000 (0:00:01.578) 0:00:34.395 *******
2026-02-08 04:22:10.435238 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True})
2026-02-08 04:22:10.435249 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True})
2026-02-08 04:22:10.435260 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True})
2026-02-08 04:22:10.435270 | orchestrator |
2026-02-08 04:22:10.435281 | orchestrator | TASK [glance : Ensuring config directory has correct owner and permission] *****
2026-02-08 04:22:10.435292 | orchestrator | Sunday 08 February 2026 04:22:01 +0000 (0:00:01.387) 0:00:35.783 *******
2026-02-08 04:22:10.435303 | orchestrator | ok: [testbed-node-0]
2026-02-08 04:22:10.435314 | orchestrator | ok: [testbed-node-1]
2026-02-08 04:22:10.435325 | orchestrator | ok: [testbed-node-2]
2026-02-08 04:22:10.435335 | orchestrator |
2026-02-08 04:22:10.435346 | orchestrator | TASK [glance : Check if policies shall be overwritten] *************************
2026-02-08 04:22:10.435357 | orchestrator | Sunday 08 February 2026 04:22:01 +0000 (0:00:00.137) 0:00:36.434 *******
2026-02-08 04:22:10.435367 | orchestrator | skipping: [testbed-node-0]
2026-02-08 04:22:10.435378 | orchestrator |
2026-02-08 04:22:10.435389 | orchestrator | TASK [glance : Set glance policy file] *****************************************
2026-02-08 04:22:10.435399 | orchestrator | Sunday 08 February 2026 04:22:01 +0000 (0:00:00.301) 0:00:36.571 *******
2026-02-08 04:22:10.435410 | orchestrator | skipping: [testbed-node-0]
2026-02-08 04:22:10.435423 | orchestrator | skipping: [testbed-node-1]
2026-02-08 04:22:10.435436 | orchestrator | skipping: [testbed-node-2]
2026-02-08 04:22:10.435450 | orchestrator |
2026-02-08 04:22:10.435462 | orchestrator | TASK [glance : include_tasks] **************************************************
2026-02-08 04:22:10.435474 | orchestrator | Sunday 08 February 2026 04:22:02 +0000 (0:00:00.746) 0:00:36.872 *******
2026-02-08 04:22:10.435486 | orchestrator | included:
/ansible/roles/glance/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-08 04:22:10.435499 | orchestrator | 2026-02-08 04:22:10.435511 | orchestrator | TASK [service-cert-copy : glance | Copying over extra CA certificates] ********* 2026-02-08 04:22:10.435523 | orchestrator | Sunday 08 February 2026 04:22:03 +0000 (0:00:00.746) 0:00:37.619 ******* 2026-02-08 04:22:10.435558 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check 
inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-02-08 04:22:10.435605 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-02-08 04:22:10.435626 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 
'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-02-08 04:22:10.435646 | orchestrator | 2026-02-08 04:22:10.435657 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS certificate] *** 2026-02-08 04:22:10.435668 | orchestrator | Sunday 08 February 2026 04:22:07 +0000 (0:00:04.041) 0:00:41.660 ******* 2026-02-08 04:22:10.435689 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 
'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-02-08 04:22:14.136666 | orchestrator | skipping: [testbed-node-1] 2026-02-08 04:22:14.136748 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 
'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-02-08 04:22:14.136759 | orchestrator | skipping: [testbed-node-0] 2026-02-08 04:22:14.136778 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-02-08 04:22:14.136801 | orchestrator | skipping: [testbed-node-2] 2026-02-08 04:22:14.136807 | orchestrator | 2026-02-08 04:22:14.136814 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS key] ****** 2026-02-08 04:22:14.136821 | orchestrator | Sunday 08 February 2026 04:22:10 +0000 (0:00:03.353) 0:00:45.013 ******* 2026-02-08 04:22:14.136843 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 
'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-02-08 04:22:14.136851 | orchestrator | skipping: [testbed-node-0] 2026-02-08 04:22:14.136861 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 
'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-02-08 04:22:14.136872 | orchestrator | skipping: [testbed-node-1] 2026-02-08 04:22:14.136884 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-02-08 04:22:51.109466 | orchestrator | skipping: [testbed-node-2] 2026-02-08 04:22:51.109577 | orchestrator | 2026-02-08 04:22:51.109596 | orchestrator | TASK [glance : Creating TLS backend PEM File] ********************************** 2026-02-08 04:22:51.109610 | orchestrator | Sunday 08 February 2026 04:22:14 +0000 (0:00:03.700) 0:00:48.714 ******* 2026-02-08 04:22:51.109623 | orchestrator | skipping: [testbed-node-0] 2026-02-08 04:22:51.109634 | orchestrator | skipping: [testbed-node-1] 2026-02-08 04:22:51.109646 | orchestrator | skipping: [testbed-node-2] 2026-02-08 04:22:51.109656 | orchestrator | 2026-02-08 04:22:51.109668 | orchestrator | TASK [glance : Copying over config.json files for services] ******************** 2026-02-08 04:22:51.109679 | orchestrator | Sunday 08 February 2026 04:22:17 +0000 (0:00:03.401) 0:00:52.115 ******* 2026-02-08 04:22:51.109711 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 
'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-02-08 04:22:51.109762 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': 
['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-02-08 04:22:51.109818 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2026-02-08 04:22:51.109844 | orchestrator |
2026-02-08 04:22:51.109855 | orchestrator | TASK [glance : Copying over glance-api.conf] ***********************************
2026-02-08 04:22:51.110124 | orchestrator | Sunday 08 February 2026 04:22:21 +0000 (0:00:04.212) 0:00:56.328 *******
2026-02-08 04:22:51.110159 | orchestrator | changed: [testbed-node-1]
2026-02-08 04:22:51.110176 | orchestrator | changed: [testbed-node-2]
2026-02-08 04:22:51.110187 | orchestrator | changed: [testbed-node-0]
2026-02-08 04:22:51.110197 | orchestrator |
2026-02-08 04:22:51.110208 | orchestrator | TASK [glance : Copying over glance-cache.conf for glance_api] ******************
2026-02-08 04:22:51.110219 | orchestrator | Sunday 08 February 2026 04:22:27 +0000 (0:00:05.755) 0:01:02.083 *******
2026-02-08 04:22:51.110230 | orchestrator | skipping: [testbed-node-0]
2026-02-08 04:22:51.110241 | orchestrator | skipping: [testbed-node-1]
2026-02-08 04:22:51.110251 | orchestrator | skipping: [testbed-node-2]
2026-02-08 04:22:51.110262 | orchestrator |
2026-02-08 04:22:51.110279 | orchestrator | TASK [glance : Copying over glance-swift.conf for glance_api] ******************
2026-02-08 04:22:51.110298 | orchestrator | Sunday 08 February 2026 04:22:31 +0000 (0:00:04.087) 0:01:06.171 *******
2026-02-08 04:22:51.110314 | orchestrator | skipping: [testbed-node-1]
2026-02-08 04:22:51.110331 | orchestrator | skipping: [testbed-node-0]
2026-02-08 04:22:51.110350 | orchestrator | skipping: [testbed-node-2]
2026-02-08 04:22:51.110369 | orchestrator |
2026-02-08 04:22:51.110388 | orchestrator | TASK [glance : Copying over glance-image-import.conf] **************************
2026-02-08 04:22:51.110406 | orchestrator | Sunday 08 February 2026 04:22:35 +0000 (0:00:04.017) 0:01:10.188 *******
2026-02-08 04:22:51.110420 | orchestrator | skipping: [testbed-node-1]
2026-02-08 04:22:51.110431 | orchestrator | skipping: [testbed-node-0]
2026-02-08 04:22:51.110441 | orchestrator | skipping: [testbed-node-2]
2026-02-08 04:22:51.110452 | orchestrator |
2026-02-08 04:22:51.110462 | orchestrator | TASK [glance : Copying over property-protections-rules.conf] *******************
2026-02-08 04:22:51.110473 | orchestrator | Sunday 08 February 2026 04:22:39 +0000 (0:00:03.540) 0:01:13.728 *******
2026-02-08 04:22:51.110484 | orchestrator | skipping: [testbed-node-0]
2026-02-08 04:22:51.110494 | orchestrator | skipping: [testbed-node-1]
2026-02-08 04:22:51.110505 | orchestrator | skipping: [testbed-node-2]
2026-02-08 04:22:51.110516 | orchestrator |
2026-02-08 04:22:51.110526 | orchestrator | TASK [glance : Copying over existing policy file] ******************************
2026-02-08 04:22:51.110537 | orchestrator | Sunday 08 February 2026 04:22:42 +0000 (0:00:03.577) 0:01:17.306 *******
2026-02-08 04:22:51.110548 | orchestrator | skipping: [testbed-node-0]
2026-02-08 04:22:51.110558 | orchestrator | skipping: [testbed-node-1]
2026-02-08 04:22:51.110569 | orchestrator | skipping: [testbed-node-2]
2026-02-08 04:22:51.110580 | orchestrator |
2026-02-08 04:22:51.110590 | orchestrator | TASK [glance : Copying over glance-haproxy-tls.cfg] ****************************
2026-02-08 04:22:51.110601 | orchestrator | Sunday 08 February 2026 04:22:43 +0000 (0:00:00.558) 0:01:17.865 *******
2026-02-08 04:22:51.110611 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)
2026-02-08 04:22:51.110623 | orchestrator | skipping: [testbed-node-0]
2026-02-08 04:22:51.110634 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)
2026-02-08 04:22:51.110658 | orchestrator | skipping: [testbed-node-1]
2026-02-08 04:22:51.110669 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)
2026-02-08 04:22:51.110683 | orchestrator | skipping: [testbed-node-2]
2026-02-08 04:22:51.110701 | orchestrator |
2026-02-08 04:22:51.110719 | orchestrator | TASK [glance : Generating 'hostnqn' file for glance_api] ***********************
2026-02-08 04:22:51.110738 | orchestrator | Sunday 08 February 2026 04:22:46 +0000 (0:00:03.392) 0:01:21.258 *******
2026-02-08 04:22:51.110757 | orchestrator | changed: [testbed-node-0]
2026-02-08 04:22:51.110768 | orchestrator | changed: [testbed-node-1]
2026-02-08 04:22:51.110779 | orchestrator | changed: [testbed-node-2]
2026-02-08 04:22:51.110790 | orchestrator |
2026-02-08 04:22:51.110801 | orchestrator | TASK [glance : Check glance containers] ****************************************
2026-02-08 04:22:51.110826 | orchestrator | Sunday 08 February 2026 04:22:51 +0000 (0:00:04.427) 0:01:25.685 *******
2026-02-08 04:24:03.838864 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image':
'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-02-08 04:24:03.838971 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': 
['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-02-08 04:24:03.839100 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-02-08 04:24:03.839115 | orchestrator | 2026-02-08 04:24:03.839125 | orchestrator | TASK [glance : include_tasks] ************************************************** 2026-02-08 04:24:03.839136 | orchestrator | Sunday 08 February 2026 04:22:54 +0000 (0:00:03.832) 0:01:29.517 ******* 2026-02-08 04:24:03.839152 | orchestrator | skipping: [testbed-node-0] 2026-02-08 04:24:03.839162 | orchestrator | skipping: [testbed-node-1] 2026-02-08 04:24:03.839171 | orchestrator | skipping: [testbed-node-2] 2026-02-08 04:24:03.839179 | orchestrator | 2026-02-08 04:24:03.839188 | orchestrator | TASK [glance : Creating Glance database] *************************************** 2026-02-08 04:24:03.839197 | orchestrator | Sunday 08 February 2026 04:22:55 +0000 (0:00:00.617) 0:01:30.135 ******* 2026-02-08 04:24:03.839206 | orchestrator | changed: [testbed-node-0] 2026-02-08 04:24:03.839215 | orchestrator | 2026-02-08 04:24:03.839223 | orchestrator | TASK 
[glance : Creating Glance database user and setting permissions] ********** 2026-02-08 04:24:03.839232 | orchestrator | Sunday 08 February 2026 04:22:57 +0000 (0:00:02.059) 0:01:32.194 ******* 2026-02-08 04:24:03.839241 | orchestrator | changed: [testbed-node-0] 2026-02-08 04:24:03.839250 | orchestrator | 2026-02-08 04:24:03.839258 | orchestrator | TASK [glance : Enable log_bin_trust_function_creators function] **************** 2026-02-08 04:24:03.839267 | orchestrator | Sunday 08 February 2026 04:22:59 +0000 (0:00:02.215) 0:01:34.410 ******* 2026-02-08 04:24:03.839276 | orchestrator | changed: [testbed-node-0] 2026-02-08 04:24:03.839285 | orchestrator | 2026-02-08 04:24:03.839294 | orchestrator | TASK [glance : Running Glance bootstrap container] ***************************** 2026-02-08 04:24:03.839303 | orchestrator | Sunday 08 February 2026 04:23:01 +0000 (0:00:01.998) 0:01:36.409 ******* 2026-02-08 04:24:03.839312 | orchestrator | changed: [testbed-node-0] 2026-02-08 04:24:03.839321 | orchestrator | 2026-02-08 04:24:03.839329 | orchestrator | TASK [glance : Disable log_bin_trust_function_creators function] *************** 2026-02-08 04:24:03.839338 | orchestrator | Sunday 08 February 2026 04:23:29 +0000 (0:00:27.863) 0:02:04.272 ******* 2026-02-08 04:24:03.839346 | orchestrator | changed: [testbed-node-0] 2026-02-08 04:24:03.839364 | orchestrator | 2026-02-08 04:24:03.839372 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2026-02-08 04:24:03.839384 | orchestrator | Sunday 08 February 2026 04:23:31 +0000 (0:00:02.006) 0:02:06.278 ******* 2026-02-08 04:24:03.839394 | orchestrator | 2026-02-08 04:24:03.839405 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2026-02-08 04:24:03.839415 | orchestrator | Sunday 08 February 2026 04:23:31 +0000 (0:00:00.071) 0:02:06.350 ******* 2026-02-08 04:24:03.839426 | orchestrator | 2026-02-08 04:24:03.839436 | orchestrator | TASK 
[glance : Flush handlers] ************************************************* 2026-02-08 04:24:03.839446 | orchestrator | Sunday 08 February 2026 04:23:31 +0000 (0:00:00.097) 0:02:06.447 ******* 2026-02-08 04:24:03.839456 | orchestrator | 2026-02-08 04:24:03.839466 | orchestrator | RUNNING HANDLER [glance : Restart glance-api container] ************************ 2026-02-08 04:24:03.839477 | orchestrator | Sunday 08 February 2026 04:23:31 +0000 (0:00:00.079) 0:02:06.527 ******* 2026-02-08 04:24:03.839487 | orchestrator | changed: [testbed-node-0] 2026-02-08 04:24:03.839497 | orchestrator | changed: [testbed-node-1] 2026-02-08 04:24:03.839507 | orchestrator | changed: [testbed-node-2] 2026-02-08 04:24:03.839517 | orchestrator | 2026-02-08 04:24:03.839527 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-08 04:24:03.839539 | orchestrator | testbed-node-0 : ok=27  changed=19  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-02-08 04:24:03.839552 | orchestrator | testbed-node-1 : ok=16  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2026-02-08 04:24:03.839562 | orchestrator | testbed-node-2 : ok=16  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2026-02-08 04:24:03.839572 | orchestrator | 2026-02-08 04:24:03.839583 | orchestrator | 2026-02-08 04:24:03.839593 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-08 04:24:03.839603 | orchestrator | Sunday 08 February 2026 04:24:03 +0000 (0:00:31.873) 0:02:38.400 ******* 2026-02-08 04:24:03.839613 | orchestrator | =============================================================================== 2026-02-08 04:24:03.839623 | orchestrator | glance : Restart glance-api container ---------------------------------- 31.87s 2026-02-08 04:24:03.839633 | orchestrator | glance : Running Glance bootstrap container ---------------------------- 27.86s 2026-02-08 04:24:03.839643 | 
orchestrator | service-ks-register : glance | Creating endpoints ----------------------- 6.21s 2026-02-08 04:24:03.839661 | orchestrator | glance : Copying over glance-api.conf ----------------------------------- 5.76s 2026-02-08 04:24:04.219786 | orchestrator | glance : Generating 'hostnqn' file for glance_api ----------------------- 4.43s 2026-02-08 04:24:04.219916 | orchestrator | glance : Copying over config.json files for services -------------------- 4.21s 2026-02-08 04:24:04.219944 | orchestrator | glance : Copying over glance-cache.conf for glance_api ------------------ 4.09s 2026-02-08 04:24:04.219964 | orchestrator | service-cert-copy : glance | Copying over extra CA certificates --------- 4.04s 2026-02-08 04:24:04.219982 | orchestrator | glance : Copying over glance-swift.conf for glance_api ------------------ 4.02s 2026-02-08 04:24:04.220046 | orchestrator | service-ks-register : glance | Creating users --------------------------- 3.86s 2026-02-08 04:24:04.220058 | orchestrator | glance : Check glance containers ---------------------------------------- 3.83s 2026-02-08 04:24:04.220069 | orchestrator | service-ks-register : glance | Granting user roles ---------------------- 3.72s 2026-02-08 04:24:04.220080 | orchestrator | service-cert-copy : glance | Copying over backend internal TLS key ------ 3.70s 2026-02-08 04:24:04.220091 | orchestrator | glance : Ensuring glance service ceph config subdir exists -------------- 3.62s 2026-02-08 04:24:04.220102 | orchestrator | glance : Copying over property-protections-rules.conf ------------------- 3.58s 2026-02-08 04:24:04.220112 | orchestrator | glance : Copying over glance-image-import.conf -------------------------- 3.54s 2026-02-08 04:24:04.220123 | orchestrator | glance : Ensuring config directories exist ------------------------------ 3.53s 2026-02-08 04:24:04.220180 | orchestrator | glance : Creating TLS backend PEM File ---------------------------------- 3.40s 2026-02-08 04:24:04.220192 | orchestrator | 
glance : Copying over glance-haproxy-tls.cfg ---------------------------- 3.39s 2026-02-08 04:24:04.220204 | orchestrator | service-cert-copy : glance | Copying over backend internal TLS certificate --- 3.35s 2026-02-08 04:24:06.658643 | orchestrator | 2026-02-08 04:24:06 | INFO  | Task 5fca5098-bd12-4610-9aa6-ec3db7cee2cd (cinder) was prepared for execution. 2026-02-08 04:24:06.658768 | orchestrator | 2026-02-08 04:24:06 | INFO  | It takes a moment until task 5fca5098-bd12-4610-9aa6-ec3db7cee2cd (cinder) has been started and output is visible here. 2026-02-08 04:24:41.150434 | orchestrator | 2026-02-08 04:24:41.150532 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-08 04:24:41.150550 | orchestrator | 2026-02-08 04:24:41.150561 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-08 04:24:41.150571 | orchestrator | Sunday 08 February 2026 04:24:11 +0000 (0:00:00.286) 0:00:00.286 ******* 2026-02-08 04:24:41.150581 | orchestrator | ok: [testbed-node-0] 2026-02-08 04:24:41.150591 | orchestrator | ok: [testbed-node-1] 2026-02-08 04:24:41.150600 | orchestrator | ok: [testbed-node-2] 2026-02-08 04:24:41.150609 | orchestrator | 2026-02-08 04:24:41.150618 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-08 04:24:41.150627 | orchestrator | Sunday 08 February 2026 04:24:11 +0000 (0:00:00.320) 0:00:00.606 ******* 2026-02-08 04:24:41.150636 | orchestrator | ok: [testbed-node-0] => (item=enable_cinder_True) 2026-02-08 04:24:41.150646 | orchestrator | ok: [testbed-node-1] => (item=enable_cinder_True) 2026-02-08 04:24:41.150656 | orchestrator | ok: [testbed-node-2] => (item=enable_cinder_True) 2026-02-08 04:24:41.150666 | orchestrator | 2026-02-08 04:24:41.150675 | orchestrator | PLAY [Apply role cinder] ******************************************************* 2026-02-08 04:24:41.150685 | orchestrator | 2026-02-08 
04:24:41.150695 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-02-08 04:24:41.150706 | orchestrator | Sunday 08 February 2026 04:24:11 +0000 (0:00:00.461) 0:00:01.068 ******* 2026-02-08 04:24:41.150716 | orchestrator | included: /ansible/roles/cinder/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-08 04:24:41.150727 | orchestrator | 2026-02-08 04:24:41.150738 | orchestrator | TASK [service-ks-register : cinder | Creating services] ************************ 2026-02-08 04:24:41.150744 | orchestrator | Sunday 08 February 2026 04:24:12 +0000 (0:00:00.612) 0:00:01.680 ******* 2026-02-08 04:24:41.150751 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 (volumev3)) 2026-02-08 04:24:41.150757 | orchestrator | 2026-02-08 04:24:41.150763 | orchestrator | TASK [service-ks-register : cinder | Creating endpoints] *********************** 2026-02-08 04:24:41.150769 | orchestrator | Sunday 08 February 2026 04:24:15 +0000 (0:00:03.232) 0:00:04.913 ******* 2026-02-08 04:24:41.150776 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api-int.testbed.osism.xyz:8776/v3/%(tenant_id)s -> internal) 2026-02-08 04:24:41.150782 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api.testbed.osism.xyz:8776/v3/%(tenant_id)s -> public) 2026-02-08 04:24:41.150789 | orchestrator | 2026-02-08 04:24:41.150795 | orchestrator | TASK [service-ks-register : cinder | Creating projects] ************************ 2026-02-08 04:24:41.150801 | orchestrator | Sunday 08 February 2026 04:24:21 +0000 (0:00:06.250) 0:00:11.164 ******* 2026-02-08 04:24:41.150807 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-02-08 04:24:41.150813 | orchestrator | 2026-02-08 04:24:41.150819 | orchestrator | TASK [service-ks-register : cinder | Creating users] *************************** 2026-02-08 04:24:41.150824 | orchestrator | Sunday 08 February 2026 04:24:25 +0000 (0:00:03.093) 
0:00:14.257 ******* 2026-02-08 04:24:41.150830 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-02-08 04:24:41.150836 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service) 2026-02-08 04:24:41.150842 | orchestrator | 2026-02-08 04:24:41.150865 | orchestrator | TASK [service-ks-register : cinder | Creating roles] *************************** 2026-02-08 04:24:41.150871 | orchestrator | Sunday 08 February 2026 04:24:28 +0000 (0:00:03.922) 0:00:18.180 ******* 2026-02-08 04:24:41.150877 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-02-08 04:24:41.150883 | orchestrator | 2026-02-08 04:24:41.150888 | orchestrator | TASK [service-ks-register : cinder | Granting user roles] ********************** 2026-02-08 04:24:41.150894 | orchestrator | Sunday 08 February 2026 04:24:32 +0000 (0:00:03.143) 0:00:21.323 ******* 2026-02-08 04:24:41.150900 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> admin) 2026-02-08 04:24:41.150905 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> service) 2026-02-08 04:24:41.150911 | orchestrator | 2026-02-08 04:24:41.150917 | orchestrator | TASK [cinder : Ensuring config directories exist] ****************************** 2026-02-08 04:24:41.150922 | orchestrator | Sunday 08 February 2026 04:24:39 +0000 (0:00:07.055) 0:00:28.379 ******* 2026-02-08 04:24:41.150943 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': 
{'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-02-08 04:24:41.150969 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-02-08 04:24:41.150977 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 
'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-02-08 04:24:41.150985 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-02-08 04:24:41.151018 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-02-08 04:24:41.151027 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-02-08 04:24:41.151039 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-02-08 04:24:41.151052 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-02-08 04:24:47.114925 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': 
['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-02-08 04:24:47.115079 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-02-08 04:24:47.115111 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-02-08 04:24:47.115118 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-02-08 04:24:47.115125 | orchestrator | 2026-02-08 04:24:47.115144 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-02-08 04:24:47.115152 | orchestrator | Sunday 08 February 2026 04:24:41 +0000 (0:00:02.060) 0:00:30.439 ******* 2026-02-08 04:24:47.115158 | orchestrator | skipping: [testbed-node-0] 2026-02-08 04:24:47.115165 | orchestrator | skipping: [testbed-node-1] 2026-02-08 04:24:47.115170 | orchestrator | skipping: [testbed-node-2] 2026-02-08 04:24:47.115176 | orchestrator | 2026-02-08 04:24:47.115182 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-02-08 04:24:47.115188 | orchestrator | Sunday 08 February 2026 04:24:41 +0000 (0:00:00.529) 0:00:30.968 ******* 2026-02-08 04:24:47.115195 | orchestrator | included: /ansible/roles/cinder/tasks/external_ceph.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-08 04:24:47.115201 | orchestrator | 2026-02-08 04:24:47.115207 | orchestrator | TASK [cinder : Ensuring cinder service ceph config subdirs exists] ************* 2026-02-08 04:24:47.115213 | orchestrator | Sunday 08 February 2026 04:24:42 +0000 (0:00:00.572) 0:00:31.541 ******* 2026-02-08 04:24:47.115219 | orchestrator | changed: [testbed-node-0] => (item=cinder-volume) 2026-02-08 04:24:47.115226 | 
orchestrator | changed: [testbed-node-1] => (item=cinder-volume) 2026-02-08 04:24:47.115231 | orchestrator | changed: [testbed-node-2] => (item=cinder-volume) 2026-02-08 04:24:47.115237 | orchestrator | changed: [testbed-node-0] => (item=cinder-backup) 2026-02-08 04:24:47.115243 | orchestrator | changed: [testbed-node-1] => (item=cinder-backup) 2026-02-08 04:24:47.115249 | orchestrator | changed: [testbed-node-2] => (item=cinder-backup) 2026-02-08 04:24:47.115255 | orchestrator | 2026-02-08 04:24:47.115261 | orchestrator | TASK [cinder : Copying over multiple ceph.conf for cinder services] ************ 2026-02-08 04:24:47.115267 | orchestrator | Sunday 08 February 2026 04:24:43 +0000 (0:00:01.633) 0:00:33.174 ******* 2026-02-08 04:24:47.115288 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-02-08 04:24:47.115303 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-02-08 04:24:47.115310 | orchestrator | skipping: [testbed-node-1] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-02-08 04:24:47.115320 | orchestrator | skipping: [testbed-node-1] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-02-08 04:24:47.115331 | orchestrator | skipping: [testbed-node-2] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-02-08 04:24:57.764767 | orchestrator | skipping: [testbed-node-2] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-02-08 04:24:57.764880 | orchestrator | changed: [testbed-node-2] => (item=[{'key': 'cinder-volume', 'value': 
{'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-02-08 04:24:57.764899 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-02-08 04:24:57.764929 | orchestrator | changed: [testbed-node-1] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-02-08 04:24:57.764943 | orchestrator | changed: [testbed-node-2] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-02-08 04:24:57.764974 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-02-08 
04:24:57.765077 | orchestrator | changed: [testbed-node-1] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-02-08 04:24:57.765093 | orchestrator | 2026-02-08 04:24:57.765107 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-volume] ***************** 2026-02-08 04:24:57.765120 | orchestrator | Sunday 08 February 2026 04:24:47 +0000 (0:00:03.442) 0:00:36.617 ******* 2026-02-08 04:24:57.765131 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2026-02-08 04:24:57.765143 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2026-02-08 04:24:57.765154 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2026-02-08 04:24:57.765165 | orchestrator | 2026-02-08 04:24:57.765176 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-backup] ***************** 2026-02-08 04:24:57.765187 | orchestrator | Sunday 08 February 2026 04:24:48 +0000 (0:00:01.580) 0:00:38.197 ******* 2026-02-08 04:24:57.765198 | orchestrator | changed: [testbed-node-0] => (item=ceph.client.cinder.keyring) 2026-02-08 04:24:57.765209 | orchestrator | changed: [testbed-node-1] => (item=ceph.client.cinder.keyring) 2026-02-08 04:24:57.765220 | 
orchestrator | changed: [testbed-node-2] => (item=ceph.client.cinder.keyring) 2026-02-08 04:24:57.765231 | orchestrator | changed: [testbed-node-0] => (item=ceph.client.cinder-backup.keyring) 2026-02-08 04:24:57.765241 | orchestrator | changed: [testbed-node-1] => (item=ceph.client.cinder-backup.keyring) 2026-02-08 04:24:57.765252 | orchestrator | changed: [testbed-node-2] => (item=ceph.client.cinder-backup.keyring) 2026-02-08 04:24:57.765263 | orchestrator | 2026-02-08 04:24:57.765273 | orchestrator | TASK [cinder : Ensuring config directory has correct owner and permission] ***** 2026-02-08 04:24:57.765285 | orchestrator | Sunday 08 February 2026 04:24:51 +0000 (0:00:02.735) 0:00:40.933 ******* 2026-02-08 04:24:57.765296 | orchestrator | ok: [testbed-node-0] => (item=cinder-volume) 2026-02-08 04:24:57.765308 | orchestrator | ok: [testbed-node-1] => (item=cinder-volume) 2026-02-08 04:24:57.765319 | orchestrator | ok: [testbed-node-2] => (item=cinder-volume) 2026-02-08 04:24:57.765330 | orchestrator | ok: [testbed-node-0] => (item=cinder-backup) 2026-02-08 04:24:57.765341 | orchestrator | ok: [testbed-node-1] => (item=cinder-backup) 2026-02-08 04:24:57.765358 | orchestrator | ok: [testbed-node-2] => (item=cinder-backup) 2026-02-08 04:24:57.765369 | orchestrator | 2026-02-08 04:24:57.765380 | orchestrator | TASK [cinder : Check if policies shall be overwritten] ************************* 2026-02-08 04:24:57.765391 | orchestrator | Sunday 08 February 2026 04:24:52 +0000 (0:00:01.001) 0:00:41.934 ******* 2026-02-08 04:24:57.765402 | orchestrator | skipping: [testbed-node-0] 2026-02-08 04:24:57.765421 | orchestrator | 2026-02-08 04:24:57.765432 | orchestrator | TASK [cinder : Set cinder policy file] ***************************************** 2026-02-08 04:24:57.765443 | orchestrator | Sunday 08 February 2026 04:24:52 +0000 (0:00:00.126) 0:00:42.061 ******* 2026-02-08 04:24:57.765453 | orchestrator | skipping: [testbed-node-0] 2026-02-08 04:24:57.765464 | orchestrator | 
skipping: [testbed-node-1] 2026-02-08 04:24:57.765475 | orchestrator | skipping: [testbed-node-2] 2026-02-08 04:24:57.765485 | orchestrator | 2026-02-08 04:24:57.765496 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-02-08 04:24:57.765507 | orchestrator | Sunday 08 February 2026 04:24:53 +0000 (0:00:00.547) 0:00:42.609 ******* 2026-02-08 04:24:57.765518 | orchestrator | included: /ansible/roles/cinder/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-08 04:24:57.765529 | orchestrator | 2026-02-08 04:24:57.765540 | orchestrator | TASK [service-cert-copy : cinder | Copying over extra CA certificates] ********* 2026-02-08 04:24:57.765551 | orchestrator | Sunday 08 February 2026 04:24:54 +0000 (0:00:00.623) 0:00:43.233 ******* 2026-02-08 04:24:57.765572 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-02-08 04:24:58.699529 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-02-08 04:24:58.699638 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-02-08 04:24:58.699673 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': 
['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-02-08 04:24:58.699711 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-02-08 04:24:58.699725 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-02-08 04:24:58.699760 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': 
['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-02-08 04:24:58.699775 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-02-08 04:24:58.699788 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-02-08 
04:24:58.699808 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-02-08 04:24:58.699830 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-02-08 04:24:58.699844 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-02-08 04:24:58.699858 | orchestrator | 2026-02-08 04:24:58.699873 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS certificate] *** 2026-02-08 04:24:58.699888 | orchestrator | Sunday 08 February 2026 04:24:57 +0000 (0:00:03.851) 0:00:47.084 ******* 2026-02-08 04:24:58.699910 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-02-08 04:24:58.813255 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-02-08 04:24:58.813343 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-02-08 04:24:58.813385 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-02-08 04:24:58.813393 | orchestrator | skipping: [testbed-node-0] 2026-02-08 04:24:58.813402 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-02-08 04:24:58.813409 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-02-08 04:24:58.813433 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 
5672'], 'timeout': '30'}}})  2026-02-08 04:24:58.813440 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-02-08 04:24:58.813453 | orchestrator | skipping: [testbed-node-1] 2026-02-08 04:24:58.813464 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-02-08 04:24:58.813469 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-02-08 04:24:58.813473 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-02-08 04:24:58.813477 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-02-08 04:24:58.813481 | orchestrator | skipping: [testbed-node-2]
2026-02-08 04:24:58.813485 | orchestrator |
2026-02-08 04:24:58.813490 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS key] ******
2026-02-08 04:24:58.813499 | orchestrator | Sunday 08 February 2026 04:24:58 +0000 (0:00:00.926) 0:00:48.010 *******
2026-02-08 04:24:59.447617 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-02-08 04:24:59.447738 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-02-08 04:24:59.447750 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-02-08 04:24:59.447760 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-02-08 04:24:59.447768 | orchestrator | skipping: [testbed-node-0]
2026-02-08 04:24:59.447779 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-02-08 04:24:59.447803 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-02-08 04:24:59.447817 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-02-08 04:24:59.447829 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-02-08 04:24:59.447837 | orchestrator | skipping: [testbed-node-1]
2026-02-08 04:24:59.447844 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-02-08 04:24:59.447852 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-02-08 04:24:59.447865 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-02-08 04:25:04.063345 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-02-08 04:25:04.063449 | orchestrator | skipping: [testbed-node-2]
2026-02-08 04:25:04.063461 | orchestrator |
2026-02-08 04:25:04.063469 | orchestrator | TASK [cinder : Copying over config.json files for services] ********************
2026-02-08 04:25:04.063476 | orchestrator | Sunday 08 February 2026 04:24:59 +0000 (0:00:00.953) 0:00:48.964 *******
2026-02-08 04:25:04.063496 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-02-08 04:25:04.063505 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-02-08 04:25:04.063511 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-02-08 04:25:04.063550 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-02-08 04:25:04.063591 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-02-08 04:25:04.063608 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-02-08 04:25:04.063618 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-02-08 04:25:04.063630 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-02-08 04:25:04.063638 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-02-08 04:25:04.063660 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-02-08 04:25:16.977355 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-02-08 04:25:16.977483 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-02-08 04:25:16.977501 | orchestrator |
2026-02-08 04:25:16.977513 | orchestrator | TASK [cinder : Copying over cinder-wsgi.conf] **********************************
2026-02-08 04:25:16.977525 | orchestrator | Sunday 08 February 2026 04:25:04 +0000 (0:00:04.380) 0:00:53.344 *******
2026-02-08 04:25:16.977551 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)
2026-02-08 04:25:16.977573 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)
2026-02-08 04:25:16.977583 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)
2026-02-08 04:25:16.977593 | orchestrator |
2026-02-08 04:25:16.977603 | orchestrator | TASK [cinder : Copying over cinder.conf] ***************************************
2026-02-08 04:25:16.977613 | orchestrator | Sunday 08 February 2026 04:25:06 +0000 (0:00:01.905) 0:00:55.249 *******
2026-02-08 04:25:16.977625 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-02-08 04:25:16.977638 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-02-08 04:25:16.977688 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-02-08 04:25:16.977706 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-02-08 04:25:16.977718 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-02-08 04:25:16.977728 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-02-08 04:25:16.977738 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-02-08 04:25:16.977757 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-02-08 04:25:16.977776 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-02-08 04:25:19.432292 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-02-08 04:25:19.432372 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-02-08 04:25:19.432379 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-02-08 04:25:19.432399 | orchestrator |
2026-02-08 04:25:19.432404 | orchestrator | TASK [cinder : Generating 'hostnqn' file for cinder_volume] ********************
2026-02-08 04:25:19.432409 | orchestrator | Sunday 08 February 2026 04:25:17 +0000 (0:00:11.020) 0:01:06.269 *******
2026-02-08 04:25:19.432413 | orchestrator | changed: [testbed-node-0]
2026-02-08 04:25:19.432418 | orchestrator | changed: [testbed-node-1]
2026-02-08 04:25:19.432422 | orchestrator | changed: [testbed-node-2]
2026-02-08 04:25:19.432425 | orchestrator |
2026-02-08 04:25:19.432429 | orchestrator | TASK [cinder : Copying over existing policy file] ******************************
2026-02-08 04:25:19.432433 | orchestrator | Sunday 08 February 2026 04:25:18 +0000 (0:00:01.494) 0:01:07.764 *******
2026-02-08 04:25:19.432438 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-02-08 04:25:19.432444 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-02-08 04:25:19.432464 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-02-08 04:25:19.432468 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-02-08 04:25:19.432472 | orchestrator | skipping: [testbed-node-0]
2026-02-08 04:25:19.432476 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-02-08 04:25:19.432484 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-02-08 04:25:19.432488 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-02-08 04:25:19.432497 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-02-08 04:25:23.173378 | orchestrator | skipping: [testbed-node-1]
2026-02-08 04:25:23.173513 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-02-08 04:25:23.173544 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-02-08 04:25:23.173598 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-02-08 04:25:23.173619 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-02-08 04:25:23.173641 | orchestrator | skipping: [testbed-node-2]
2026-02-08 04:25:23.173660 | orchestrator |
2026-02-08
04:25:23.173680 | orchestrator | TASK [cinder : Copying over nfs_shares files for cinder_volume] **************** 2026-02-08 04:25:23.173700 | orchestrator | Sunday 08 February 2026 04:25:19 +0000 (0:00:00.954) 0:01:08.718 ******* 2026-02-08 04:25:23.173719 | orchestrator | skipping: [testbed-node-0] 2026-02-08 04:25:23.173738 | orchestrator | skipping: [testbed-node-1] 2026-02-08 04:25:23.173755 | orchestrator | skipping: [testbed-node-2] 2026-02-08 04:25:23.173774 | orchestrator | 2026-02-08 04:25:23.173794 | orchestrator | TASK [cinder : Check cinder containers] **************************************** 2026-02-08 04:25:23.173813 | orchestrator | Sunday 08 February 2026 04:25:20 +0000 (0:00:00.648) 0:01:09.367 ******* 2026-02-08 04:25:23.173856 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-02-08 04:25:23.173878 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-02-08 04:25:23.173904 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-02-08 04:25:23.173917 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-02-08 04:25:23.173931 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-02-08 04:25:23.173944 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-02-08 04:25:23.173972 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 
'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-02-08 04:26:52.591074 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-02-08 04:26:52.591256 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-02-08 04:26:52.591269 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-02-08 04:26:52.591275 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-02-08 04:26:52.591280 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': 
'30'}}}) 2026-02-08 04:26:52.591284 | orchestrator | 2026-02-08 04:26:52.591290 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-02-08 04:26:52.591297 | orchestrator | Sunday 08 February 2026 04:25:23 +0000 (0:00:03.094) 0:01:12.461 ******* 2026-02-08 04:26:52.591301 | orchestrator | skipping: [testbed-node-0] 2026-02-08 04:26:52.591307 | orchestrator | skipping: [testbed-node-1] 2026-02-08 04:26:52.591326 | orchestrator | skipping: [testbed-node-2] 2026-02-08 04:26:52.591338 | orchestrator | 2026-02-08 04:26:52.591343 | orchestrator | TASK [cinder : Creating Cinder database] *************************************** 2026-02-08 04:26:52.591347 | orchestrator | Sunday 08 February 2026 04:25:23 +0000 (0:00:00.342) 0:01:12.803 ******* 2026-02-08 04:26:52.591352 | orchestrator | changed: [testbed-node-0] 2026-02-08 04:26:52.591356 | orchestrator | 2026-02-08 04:26:52.591376 | orchestrator | TASK [cinder : Creating Cinder database user and setting permissions] ********** 2026-02-08 04:26:52.591381 | orchestrator | Sunday 08 February 2026 04:25:25 +0000 (0:00:02.091) 0:01:14.894 ******* 2026-02-08 04:26:52.591385 | orchestrator | changed: [testbed-node-0] 2026-02-08 04:26:52.591389 | orchestrator | 2026-02-08 04:26:52.591395 | orchestrator | TASK [cinder : Running Cinder bootstrap container] ***************************** 2026-02-08 04:26:52.591399 | orchestrator | Sunday 08 February 2026 04:25:27 +0000 (0:00:02.202) 0:01:17.096 ******* 2026-02-08 04:26:52.591403 | orchestrator | changed: [testbed-node-0] 2026-02-08 04:26:52.591408 | orchestrator | 2026-02-08 04:26:52.591412 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2026-02-08 04:26:52.591419 | orchestrator | Sunday 08 February 2026 04:25:46 +0000 (0:00:18.625) 0:01:35.722 ******* 2026-02-08 04:26:52.591426 | orchestrator | 2026-02-08 04:26:52.591433 | orchestrator | TASK [cinder : Flush handlers] 
************************************************* 2026-02-08 04:26:52.591440 | orchestrator | Sunday 08 February 2026 04:25:46 +0000 (0:00:00.078) 0:01:35.800 ******* 2026-02-08 04:26:52.591447 | orchestrator | 2026-02-08 04:26:52.591454 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2026-02-08 04:26:52.591460 | orchestrator | Sunday 08 February 2026 04:25:46 +0000 (0:00:00.072) 0:01:35.873 ******* 2026-02-08 04:26:52.591467 | orchestrator | 2026-02-08 04:26:52.591474 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-api container] ************************ 2026-02-08 04:26:52.591480 | orchestrator | Sunday 08 February 2026 04:25:46 +0000 (0:00:00.074) 0:01:35.948 ******* 2026-02-08 04:26:52.591487 | orchestrator | changed: [testbed-node-0] 2026-02-08 04:26:52.591494 | orchestrator | changed: [testbed-node-1] 2026-02-08 04:26:52.591502 | orchestrator | changed: [testbed-node-2] 2026-02-08 04:26:52.591509 | orchestrator | 2026-02-08 04:26:52.591517 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-scheduler container] ****************** 2026-02-08 04:26:52.591525 | orchestrator | Sunday 08 February 2026 04:26:10 +0000 (0:00:24.185) 0:02:00.133 ******* 2026-02-08 04:26:52.591532 | orchestrator | changed: [testbed-node-0] 2026-02-08 04:26:52.591540 | orchestrator | changed: [testbed-node-1] 2026-02-08 04:26:52.591545 | orchestrator | changed: [testbed-node-2] 2026-02-08 04:26:52.591550 | orchestrator | 2026-02-08 04:26:52.591555 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-volume container] ********************* 2026-02-08 04:26:52.591560 | orchestrator | Sunday 08 February 2026 04:26:21 +0000 (0:00:10.300) 0:02:10.434 ******* 2026-02-08 04:26:52.591565 | orchestrator | changed: [testbed-node-0] 2026-02-08 04:26:52.591569 | orchestrator | changed: [testbed-node-1] 2026-02-08 04:26:52.591574 | orchestrator | changed: [testbed-node-2] 2026-02-08 04:26:52.591579 | orchestrator | 2026-02-08 
04:26:52.591587 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-backup container] ********************* 2026-02-08 04:26:52.591594 | orchestrator | Sunday 08 February 2026 04:26:41 +0000 (0:00:20.404) 0:02:30.839 ******* 2026-02-08 04:26:52.591602 | orchestrator | changed: [testbed-node-2] 2026-02-08 04:26:52.591609 | orchestrator | changed: [testbed-node-0] 2026-02-08 04:26:52.591616 | orchestrator | changed: [testbed-node-1] 2026-02-08 04:26:52.591624 | orchestrator | 2026-02-08 04:26:52.591632 | orchestrator | RUNNING HANDLER [cinder : Wait for cinder services to update service versions] *** 2026-02-08 04:26:52.591641 | orchestrator | Sunday 08 February 2026 04:26:52 +0000 (0:00:10.653) 0:02:41.492 ******* 2026-02-08 04:26:52.591648 | orchestrator | skipping: [testbed-node-0] 2026-02-08 04:26:52.591655 | orchestrator | 2026-02-08 04:26:52.591663 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-08 04:26:52.591671 | orchestrator | testbed-node-0 : ok=30  changed=22  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2026-02-08 04:26:52.591683 | orchestrator | testbed-node-1 : ok=21  changed=15  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-02-08 04:26:52.591688 | orchestrator | testbed-node-2 : ok=21  changed=15  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-02-08 04:26:52.591693 | orchestrator | 2026-02-08 04:26:52.591699 | orchestrator | 2026-02-08 04:26:52.591704 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-08 04:26:52.591709 | orchestrator | Sunday 08 February 2026 04:26:52 +0000 (0:00:00.283) 0:02:41.776 ******* 2026-02-08 04:26:52.591714 | orchestrator | =============================================================================== 2026-02-08 04:26:52.591720 | orchestrator | cinder : Restart cinder-api container ---------------------------------- 24.19s 2026-02-08 04:26:52.591725 | orchestrator | cinder 
: Restart cinder-volume container ------------------------------- 20.40s 2026-02-08 04:26:52.591730 | orchestrator | cinder : Running Cinder bootstrap container ---------------------------- 18.63s 2026-02-08 04:26:52.591735 | orchestrator | cinder : Copying over cinder.conf -------------------------------------- 11.02s 2026-02-08 04:26:52.591740 | orchestrator | cinder : Restart cinder-backup container ------------------------------- 10.65s 2026-02-08 04:26:52.591745 | orchestrator | cinder : Restart cinder-scheduler container ---------------------------- 10.30s 2026-02-08 04:26:52.591750 | orchestrator | service-ks-register : cinder | Granting user roles ---------------------- 7.06s 2026-02-08 04:26:52.591755 | orchestrator | service-ks-register : cinder | Creating endpoints ----------------------- 6.25s 2026-02-08 04:26:52.591760 | orchestrator | cinder : Copying over config.json files for services -------------------- 4.38s 2026-02-08 04:26:52.591765 | orchestrator | service-ks-register : cinder | Creating users --------------------------- 3.92s 2026-02-08 04:26:52.591770 | orchestrator | service-cert-copy : cinder | Copying over extra CA certificates --------- 3.85s 2026-02-08 04:26:52.591781 | orchestrator | cinder : Copying over multiple ceph.conf for cinder services ------------ 3.44s 2026-02-08 04:26:52.591789 | orchestrator | service-ks-register : cinder | Creating services ------------------------ 3.23s 2026-02-08 04:26:52.591796 | orchestrator | service-ks-register : cinder | Creating roles --------------------------- 3.14s 2026-02-08 04:26:52.591809 | orchestrator | cinder : Check cinder containers ---------------------------------------- 3.09s 2026-02-08 04:26:53.044974 | orchestrator | service-ks-register : cinder | Creating projects ------------------------ 3.09s 2026-02-08 04:26:53.045112 | orchestrator | cinder : Copy over Ceph keyring files for cinder-backup ----------------- 2.74s 2026-02-08 04:26:53.045126 | orchestrator | cinder : Creating 
Cinder database user and setting permissions ---------- 2.20s 2026-02-08 04:26:53.045135 | orchestrator | cinder : Creating Cinder database --------------------------------------- 2.09s 2026-02-08 04:26:53.045144 | orchestrator | cinder : Ensuring config directories exist ------------------------------ 2.06s 2026-02-08 04:26:55.632274 | orchestrator | 2026-02-08 04:26:55 | INFO  | Task db7d440f-4f6c-4fde-a142-f055b4c3cb68 (barbican) was prepared for execution. 2026-02-08 04:26:55.632358 | orchestrator | 2026-02-08 04:26:55 | INFO  | It takes a moment until task db7d440f-4f6c-4fde-a142-f055b4c3cb68 (barbican) has been started and output is visible here. 2026-02-08 04:27:38.090000 | orchestrator | 2026-02-08 04:27:38.090209 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-08 04:27:38.090225 | orchestrator | 2026-02-08 04:27:38.090235 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-08 04:27:38.090243 | orchestrator | Sunday 08 February 2026 04:27:00 +0000 (0:00:00.286) 0:00:00.286 ******* 2026-02-08 04:27:38.090252 | orchestrator | ok: [testbed-node-0] 2026-02-08 04:27:38.090261 | orchestrator | ok: [testbed-node-1] 2026-02-08 04:27:38.090269 | orchestrator | ok: [testbed-node-2] 2026-02-08 04:27:38.090277 | orchestrator | 2026-02-08 04:27:38.090285 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-08 04:27:38.090293 | orchestrator | Sunday 08 February 2026 04:27:00 +0000 (0:00:00.336) 0:00:00.623 ******* 2026-02-08 04:27:38.090324 | orchestrator | ok: [testbed-node-0] => (item=enable_barbican_True) 2026-02-08 04:27:38.090334 | orchestrator | ok: [testbed-node-1] => (item=enable_barbican_True) 2026-02-08 04:27:38.090342 | orchestrator | ok: [testbed-node-2] => (item=enable_barbican_True) 2026-02-08 04:27:38.090350 | orchestrator | 2026-02-08 04:27:38.090358 | orchestrator | PLAY [Apply role barbican] 
***************************************************** 2026-02-08 04:27:38.090366 | orchestrator | 2026-02-08 04:27:38.090374 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2026-02-08 04:27:38.090382 | orchestrator | Sunday 08 February 2026 04:27:00 +0000 (0:00:00.502) 0:00:01.126 ******* 2026-02-08 04:27:38.090391 | orchestrator | included: /ansible/roles/barbican/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-08 04:27:38.090400 | orchestrator | 2026-02-08 04:27:38.090408 | orchestrator | TASK [service-ks-register : barbican | Creating services] ********************** 2026-02-08 04:27:38.090416 | orchestrator | Sunday 08 February 2026 04:27:01 +0000 (0:00:00.567) 0:00:01.694 ******* 2026-02-08 04:27:38.090424 | orchestrator | changed: [testbed-node-0] => (item=barbican (key-manager)) 2026-02-08 04:27:38.090432 | orchestrator | 2026-02-08 04:27:38.090440 | orchestrator | TASK [service-ks-register : barbican | Creating endpoints] ********************* 2026-02-08 04:27:38.090448 | orchestrator | Sunday 08 February 2026 04:27:04 +0000 (0:00:03.262) 0:00:04.956 ******* 2026-02-08 04:27:38.090455 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api-int.testbed.osism.xyz:9311 -> internal) 2026-02-08 04:27:38.090464 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api.testbed.osism.xyz:9311 -> public) 2026-02-08 04:27:38.090472 | orchestrator | 2026-02-08 04:27:38.090479 | orchestrator | TASK [service-ks-register : barbican | Creating projects] ********************** 2026-02-08 04:27:38.090487 | orchestrator | Sunday 08 February 2026 04:27:10 +0000 (0:00:06.087) 0:00:11.043 ******* 2026-02-08 04:27:38.090496 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-02-08 04:27:38.090504 | orchestrator | 2026-02-08 04:27:38.090512 | orchestrator | TASK [service-ks-register : barbican | Creating users] ************************* 2026-02-08 
04:27:38.090520 | orchestrator | Sunday 08 February 2026 04:27:14 +0000 (0:00:03.218) 0:00:14.262 ******* 2026-02-08 04:27:38.090528 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-02-08 04:27:38.090536 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service) 2026-02-08 04:27:38.090545 | orchestrator | 2026-02-08 04:27:38.090554 | orchestrator | TASK [service-ks-register : barbican | Creating roles] ************************* 2026-02-08 04:27:38.090564 | orchestrator | Sunday 08 February 2026 04:27:17 +0000 (0:00:03.923) 0:00:18.185 ******* 2026-02-08 04:27:38.090573 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-02-08 04:27:38.090583 | orchestrator | changed: [testbed-node-0] => (item=key-manager:service-admin) 2026-02-08 04:27:38.090592 | orchestrator | changed: [testbed-node-0] => (item=creator) 2026-02-08 04:27:38.090602 | orchestrator | changed: [testbed-node-0] => (item=observer) 2026-02-08 04:27:38.090611 | orchestrator | changed: [testbed-node-0] => (item=audit) 2026-02-08 04:27:38.090621 | orchestrator | 2026-02-08 04:27:38.090630 | orchestrator | TASK [service-ks-register : barbican | Granting user roles] ******************** 2026-02-08 04:27:38.090640 | orchestrator | Sunday 08 February 2026 04:27:32 +0000 (0:00:14.976) 0:00:33.162 ******* 2026-02-08 04:27:38.090650 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service -> admin) 2026-02-08 04:27:38.090659 | orchestrator | 2026-02-08 04:27:38.090667 | orchestrator | TASK [barbican : Ensuring config directories exist] **************************** 2026-02-08 04:27:38.090675 | orchestrator | Sunday 08 February 2026 04:27:36 +0000 (0:00:03.553) 0:00:36.716 ******* 2026-02-08 04:27:38.090698 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': 
['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-02-08 04:27:38.090734 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-02-08 04:27:38.090744 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': 
['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-02-08 04:27:38.090753 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-02-08 04:27:38.090768 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-02-08 04:27:38.090776 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-02-08 04:27:38.090798 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-02-08 04:27:44.102192 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-02-08 04:27:44.102290 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-02-08 04:27:44.102299 | orchestrator | 2026-02-08 04:27:44.102307 | orchestrator | TASK [barbican : Ensuring vassals config directories exist] ******************** 2026-02-08 04:27:44.102313 | orchestrator | Sunday 08 February 2026 04:27:38 +0000 (0:00:01.587) 0:00:38.304 ******* 2026-02-08 04:27:44.102319 | orchestrator | changed: [testbed-node-0] => (item=barbican-api/vassals) 2026-02-08 04:27:44.102324 | orchestrator | changed: [testbed-node-1] => (item=barbican-api/vassals) 2026-02-08 04:27:44.102329 | orchestrator | changed: [testbed-node-2] => (item=barbican-api/vassals) 2026-02-08 04:27:44.102334 | orchestrator | 2026-02-08 04:27:44.102339 | orchestrator | TASK [barbican : Check if policies shall be overwritten] *********************** 2026-02-08 04:27:44.102344 | orchestrator | Sunday 08 February 2026 04:27:39 +0000 (0:00:01.193) 0:00:39.498 ******* 2026-02-08 04:27:44.102349 | orchestrator | skipping: [testbed-node-0] 2026-02-08 04:27:44.102354 | orchestrator | 2026-02-08 04:27:44.102359 | orchestrator | TASK [barbican : Set barbican policy file] ************************************* 2026-02-08 04:27:44.102364 | orchestrator | Sunday 08 February 2026 04:27:39 +0000 (0:00:00.354) 0:00:39.852 ******* 2026-02-08 04:27:44.102369 | orchestrator | 
skipping: [testbed-node-0] 2026-02-08 04:27:44.102374 | orchestrator | skipping: [testbed-node-1] 2026-02-08 04:27:44.102378 | orchestrator | skipping: [testbed-node-2] 2026-02-08 04:27:44.102383 | orchestrator | 2026-02-08 04:27:44.102388 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2026-02-08 04:27:44.102393 | orchestrator | Sunday 08 February 2026 04:27:39 +0000 (0:00:00.316) 0:00:40.168 ******* 2026-02-08 04:27:44.102417 | orchestrator | included: /ansible/roles/barbican/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-08 04:27:44.102423 | orchestrator | 2026-02-08 04:27:44.102427 | orchestrator | TASK [service-cert-copy : barbican | Copying over extra CA certificates] ******* 2026-02-08 04:27:44.102432 | orchestrator | Sunday 08 February 2026 04:27:40 +0000 (0:00:00.642) 0:00:40.810 ******* 2026-02-08 04:27:44.102449 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-02-08 04:27:44.102468 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': 
{'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-02-08 04:27:44.102474 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-02-08 04:27:44.102480 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 
'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-02-08 04:27:44.102486 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-02-08 04:27:44.102499 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-02-08 04:27:44.102505 | orchestrator | 
changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-02-08 04:27:44.102515 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-02-08 04:27:45.508615 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-02-08 04:27:45.508737 | orchestrator | 2026-02-08 04:27:45.508764 | orchestrator | TASK [service-cert-copy : barbican | Copying over 
backend internal TLS certificate] *** 2026-02-08 04:27:45.508782 | orchestrator | Sunday 08 February 2026 04:27:44 +0000 (0:00:03.498) 0:00:44.308 ******* 2026-02-08 04:27:45.508802 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-02-08 04:27:45.508852 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-08 04:27:45.508889 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 
'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-08 04:27:45.508901 | orchestrator | skipping: [testbed-node-0] 2026-02-08 04:27:45.508913 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-02-08 04:27:45.508943 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-08 04:27:45.508954 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-08 04:27:45.508964 | orchestrator | skipping: [testbed-node-1] 2026-02-08 04:27:45.508983 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-02-08 04:27:45.508998 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-08 04:27:45.509009 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-08 04:27:45.509057 | orchestrator | skipping: [testbed-node-2] 2026-02-08 04:27:45.509069 | orchestrator | 2026-02-08 04:27:45.509080 | orchestrator | TASK [service-cert-copy : barbican | Copying over backend internal TLS key] **** 2026-02-08 04:27:45.509090 | orchestrator | Sunday 08 February 2026 04:27:44 +0000 (0:00:00.605) 0:00:44.914 ******* 2026-02-08 04:27:45.509109 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-02-08 04:27:48.949536 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-08 04:27:48.949657 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-08 
04:27:48.949672 | orchestrator | skipping: [testbed-node-0] 2026-02-08 04:27:48.949698 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-02-08 04:27:48.949708 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-08 04:27:48.949717 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-08 04:27:48.949726 | orchestrator | skipping: [testbed-node-1] 2026-02-08 04:27:48.949753 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-02-08 04:27:48.949774 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-08 04:27:48.949783 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-08 04:27:48.949792 | orchestrator | skipping: [testbed-node-2] 2026-02-08 04:27:48.949801 | orchestrator | 2026-02-08 04:27:48.949811 | orchestrator | TASK [barbican : Copying over config.json files for services] ****************** 2026-02-08 04:27:48.949821 | orchestrator | Sunday 08 February 2026 04:27:45 +0000 (0:00:00.811) 0:00:45.725 ******* 2026-02-08 04:27:48.949835 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': 
{'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-02-08 04:27:48.949845 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-02-08 04:27:48.949861 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-02-08 04:27:58.672910 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-02-08 04:27:58.673069 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-02-08 04:27:58.673102 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-02-08 04:27:58.673114 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-02-08 04:27:58.673126 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-02-08 04:27:58.673135 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-02-08 04:27:58.673167 | orchestrator | 2026-02-08 04:27:58.673179 | orchestrator | TASK [barbican : Copying over barbican-api.ini] ******************************** 2026-02-08 04:27:58.673190 | orchestrator | Sunday 08 February 2026 04:27:48 +0000 (0:00:03.435) 0:00:49.161 ******* 2026-02-08 04:27:58.673199 | orchestrator | changed: [testbed-node-0] 2026-02-08 04:27:58.673210 | orchestrator | changed: [testbed-node-1] 2026-02-08 04:27:58.673220 | orchestrator | changed: [testbed-node-2] 2026-02-08 04:27:58.673229 | orchestrator | 2026-02-08 04:27:58.673256 | orchestrator | TASK [barbican : Checking whether barbican-api-paste.ini file exists] ********** 2026-02-08 04:27:58.673266 | orchestrator | Sunday 08 February 2026 04:27:50 +0000 (0:00:01.582) 0:00:50.743 ******* 2026-02-08 04:27:58.673276 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-02-08 04:27:58.673285 | orchestrator | 2026-02-08 04:27:58.673294 | orchestrator | TASK [barbican : Copying over barbican-api-paste.ini] ************************** 2026-02-08 04:27:58.673309 | orchestrator | Sunday 08 February 2026 04:27:51 +0000 (0:00:01.018) 0:00:51.761 ******* 2026-02-08 04:27:58.673335 | orchestrator | skipping: [testbed-node-0] 2026-02-08 04:27:58.673346 | orchestrator | skipping: [testbed-node-1] 2026-02-08 04:27:58.673356 | orchestrator | skipping: [testbed-node-2] 2026-02-08 04:27:58.673365 | orchestrator | 2026-02-08 04:27:58.673375 | orchestrator | TASK [barbican : Copying over barbican.conf] *********************************** 2026-02-08 04:27:58.673385 | orchestrator | Sunday 08 February 2026 04:27:52 +0000 (0:00:00.602) 0:00:52.364 ******* 2026-02-08 04:27:58.673401 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-02-08 04:27:58.673414 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-02-08 04:27:58.673424 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-02-08 04:27:58.673451 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-02-08 04:27:59.561838 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-02-08 04:27:59.561916 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-02-08 04:27:59.561963 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-02-08 04:27:59.561977 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-02-08 04:27:59.562006 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-02-08 04:27:59.562014 | orchestrator | 2026-02-08 04:27:59.562141 | orchestrator | TASK [barbican : Copying over existing policy file] **************************** 2026-02-08 04:27:59.562152 | orchestrator | Sunday 08 February 2026 04:27:58 +0000 (0:00:06.525) 0:00:58.889 ******* 2026-02-08 04:27:59.562180 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-02-08 04:27:59.562190 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-08 04:27:59.562204 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-08 04:27:59.562210 | orchestrator | skipping: [testbed-node-0] 2026-02-08 04:27:59.562216 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-02-08 04:27:59.562230 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-08 04:27:59.562236 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-08 04:27:59.562241 | orchestrator | skipping: [testbed-node-1] 2026-02-08 04:27:59.562251 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 
'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-02-08 04:28:02.043797 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-08 04:28:02.043923 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-08 04:28:02.043941 | orchestrator | skipping: [testbed-node-2] 2026-02-08 04:28:02.043955 | orchestrator | 2026-02-08 04:28:02.043969 | orchestrator | TASK [barbican : Check barbican containers] ************************************ 2026-02-08 04:28:02.044003 | orchestrator | Sunday 08 February 2026 04:27:59 +0000 (0:00:00.887) 0:00:59.777 ******* 2026-02-08 04:28:02.044060 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-02-08 04:28:02.044077 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-02-08 04:28:02.044119 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-02-08 04:28:02.044146 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-02-08 04:28:02.044166 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-02-08 04:28:02.044196 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-02-08 04:28:02.044215 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-02-08 04:28:02.044234 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-02-08 04:28:02.044254 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-02-08 04:28:02.044273 | orchestrator | 2026-02-08 04:28:02.044292 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2026-02-08 04:28:02.044322 | orchestrator | Sunday 08 February 2026 04:28:02 +0000 (0:00:02.478) 0:01:02.255 ******* 2026-02-08 04:28:41.165325 | orchestrator | skipping: [testbed-node-0] 2026-02-08 04:28:41.165426 | orchestrator | skipping: [testbed-node-1] 2026-02-08 
04:28:41.165437 | orchestrator | skipping: [testbed-node-2]
2026-02-08 04:28:41.165444 | orchestrator |
2026-02-08 04:28:41.165453 | orchestrator | TASK [barbican : Creating barbican database] ***********************************
2026-02-08 04:28:41.165461 | orchestrator | Sunday 08 February 2026 04:28:02 +0000 (0:00:00.355) 0:01:02.611 *******
2026-02-08 04:28:41.165468 | orchestrator | changed: [testbed-node-0]
2026-02-08 04:28:41.165475 | orchestrator |
2026-02-08 04:28:41.165482 | orchestrator | TASK [barbican : Creating barbican database user and setting permissions] ******
2026-02-08 04:28:41.165488 | orchestrator | Sunday 08 February 2026 04:28:04 +0000 (0:00:02.082) 0:01:04.693 *******
2026-02-08 04:28:41.165495 | orchestrator | changed: [testbed-node-0]
2026-02-08 04:28:41.165502 | orchestrator |
2026-02-08 04:28:41.165508 | orchestrator | TASK [barbican : Running barbican bootstrap container] *************************
2026-02-08 04:28:41.165534 | orchestrator | Sunday 08 February 2026 04:28:06 +0000 (0:00:02.284) 0:01:06.978 *******
2026-02-08 04:28:41.165554 | orchestrator | changed: [testbed-node-0]
2026-02-08 04:28:41.165561 | orchestrator |
2026-02-08 04:28:41.165568 | orchestrator | TASK [barbican : Flush handlers] ***********************************************
2026-02-08 04:28:41.165575 | orchestrator | Sunday 08 February 2026 04:28:18 +0000 (0:00:11.838) 0:01:18.816 *******
2026-02-08 04:28:41.165581 | orchestrator |
2026-02-08 04:28:41.165587 | orchestrator | TASK [barbican : Flush handlers] ***********************************************
2026-02-08 04:28:41.165594 | orchestrator | Sunday 08 February 2026 04:28:18 +0000 (0:00:00.077) 0:01:18.894 *******
2026-02-08 04:28:41.165599 | orchestrator |
2026-02-08 04:28:41.165606 | orchestrator | TASK [barbican : Flush handlers] ***********************************************
2026-02-08 04:28:41.165613 | orchestrator | Sunday 08 February 2026 04:28:18 +0000 (0:00:00.080) 0:01:18.974 *******
2026-02-08 04:28:41.165619 | orchestrator |
2026-02-08 04:28:41.165625 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-api container] ********************
2026-02-08 04:28:41.165632 | orchestrator | Sunday 08 February 2026 04:28:18 +0000 (0:00:00.071) 0:01:19.046 *******
2026-02-08 04:28:41.165639 | orchestrator | changed: [testbed-node-0]
2026-02-08 04:28:41.165645 | orchestrator | changed: [testbed-node-1]
2026-02-08 04:28:41.165651 | orchestrator | changed: [testbed-node-2]
2026-02-08 04:28:41.165658 | orchestrator |
2026-02-08 04:28:41.165664 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-keystone-listener container] ******
2026-02-08 04:28:41.165671 | orchestrator | Sunday 08 February 2026 04:28:30 +0000 (0:00:11.493) 0:01:30.539 *******
2026-02-08 04:28:41.165677 | orchestrator | changed: [testbed-node-0]
2026-02-08 04:28:41.165684 | orchestrator | changed: [testbed-node-1]
2026-02-08 04:28:41.165690 | orchestrator | changed: [testbed-node-2]
2026-02-08 04:28:41.165697 | orchestrator |
2026-02-08 04:28:41.165703 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-worker container] *****************
2026-02-08 04:28:41.165711 | orchestrator | Sunday 08 February 2026 04:28:35 +0000 (0:00:05.009) 0:01:35.549 *******
2026-02-08 04:28:41.165718 | orchestrator | changed: [testbed-node-0]
2026-02-08 04:28:41.165724 | orchestrator | changed: [testbed-node-1]
2026-02-08 04:28:41.165730 | orchestrator | changed: [testbed-node-2]
2026-02-08 04:28:41.165736 | orchestrator |
2026-02-08 04:28:41.165743 | orchestrator | PLAY RECAP *********************************************************************
2026-02-08 04:28:41.165751 | orchestrator | testbed-node-0 : ok=24  changed=18  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-02-08 04:28:41.165759 | orchestrator | testbed-node-1 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-02-08 04:28:41.165766 | orchestrator | testbed-node-2 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-02-08 04:28:41.165772 | orchestrator |
2026-02-08 04:28:41.165779 | orchestrator |
2026-02-08 04:28:41.165785 | orchestrator | TASKS RECAP ********************************************************************
2026-02-08 04:28:41.165792 | orchestrator | Sunday 08 February 2026 04:28:40 +0000 (0:00:05.411) 0:01:40.960 *******
2026-02-08 04:28:41.165799 | orchestrator | ===============================================================================
2026-02-08 04:28:41.165805 | orchestrator | service-ks-register : barbican | Creating roles ------------------------ 14.98s
2026-02-08 04:28:41.165812 | orchestrator | barbican : Running barbican bootstrap container ------------------------ 11.84s
2026-02-08 04:28:41.165818 | orchestrator | barbican : Restart barbican-api container ------------------------------ 11.49s
2026-02-08 04:28:41.165824 | orchestrator | barbican : Copying over barbican.conf ----------------------------------- 6.53s
2026-02-08 04:28:41.165831 | orchestrator | service-ks-register : barbican | Creating endpoints --------------------- 6.09s
2026-02-08 04:28:41.165837 | orchestrator | barbican : Restart barbican-worker container ---------------------------- 5.41s
2026-02-08 04:28:41.165843 | orchestrator | barbican : Restart barbican-keystone-listener container ----------------- 5.01s
2026-02-08 04:28:41.165857 | orchestrator | service-ks-register : barbican | Creating users ------------------------- 3.92s
2026-02-08 04:28:41.165864 | orchestrator | service-ks-register : barbican | Granting user roles -------------------- 3.55s
2026-02-08 04:28:41.165874 | orchestrator | service-cert-copy : barbican | Copying over extra CA certificates ------- 3.50s
2026-02-08 04:28:41.165884 | orchestrator | barbican : Copying over config.json files for services ------------------ 3.44s
2026-02-08 04:28:41.165893 | orchestrator | service-ks-register : barbican | Creating services ---------------------- 3.26s
2026-02-08 04:28:41.165903 | orchestrator | service-ks-register : barbican | Creating projects ---------------------- 3.22s
2026-02-08 04:28:41.165913 | orchestrator | barbican : Check barbican containers ------------------------------------ 2.48s
2026-02-08 04:28:41.165923 | orchestrator | barbican : Creating barbican database user and setting permissions ------ 2.28s
2026-02-08 04:28:41.165948 | orchestrator | barbican : Creating barbican database ----------------------------------- 2.08s
2026-02-08 04:28:41.165958 | orchestrator | barbican : Ensuring config directories exist ---------------------------- 1.59s
2026-02-08 04:28:41.165968 | orchestrator | barbican : Copying over barbican-api.ini -------------------------------- 1.58s
2026-02-08 04:28:41.165977 | orchestrator | barbican : Ensuring vassals config directories exist -------------------- 1.19s
2026-02-08 04:28:41.165987 | orchestrator | barbican : Checking whether barbican-api-paste.ini file exists ---------- 1.02s
2026-02-08 04:28:43.768978 | orchestrator | 2026-02-08 04:28:43 | INFO  | Task 625afa21-82be-4304-97a0-8411d847ee6b (designate) was prepared for execution.
2026-02-08 04:28:43.769141 | orchestrator | 2026-02-08 04:28:43 | INFO  | It takes a moment until task 625afa21-82be-4304-97a0-8411d847ee6b (designate) has been started and output is visible here.
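The container definitions echoed throughout the barbican play above all carry a `healthcheck` dict (`interval`, `retries`, `start_period`, `test`, `timeout`). As a minimal sketch of how such a kolla-style definition maps onto Docker's health-check options, the snippet below converts one of the logged dicts into `docker run` flags. This is not the kolla-ansible implementation; the dict values are copied from the `barbican-api` item in the log, and the assumption that the bare numbers mean seconds (Docker durations need a unit suffix) is mine.

```python
# Sketch only: translate a kolla-style healthcheck dict (as seen in the log
# above for barbican-api) into docker CLI health-check flags.
service = {
    "container_name": "barbican_api",
    "image": "registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130",
    "healthcheck": {
        "interval": "30",
        "retries": "3",
        "start_period": "5",
        "test": ["CMD-SHELL", "healthcheck_curl http://192.168.16.10:9311"],
        "timeout": "30",
    },
}

def healthcheck_flags(svc: dict) -> list[str]:
    """Map the healthcheck dict to `docker run` flags.

    Assumption: the bare numbers are seconds, so an 's' suffix is appended
    where Docker expects a duration.
    """
    hc = svc["healthcheck"]
    _, cmd = hc["test"]  # first element is the test kind, e.g. CMD-SHELL
    return [
        f"--health-cmd={cmd}",
        f"--health-interval={hc['interval']}s",
        f"--health-retries={hc['retries']}",
        f"--health-start-period={hc['start_period']}s",
        f"--health-timeout={hc['timeout']}s",
    ]

print(" ".join(healthcheck_flags(service)))
```

The `healthcheck_port barbican-worker 5672` tests in the worker and keystone-listener definitions would map the same way; only the shell command inside `--health-cmd` differs.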
2026-02-08 04:29:15.713656 | orchestrator |
2026-02-08 04:29:15.713745 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-02-08 04:29:15.713753 | orchestrator |
2026-02-08 04:29:15.713757 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-02-08 04:29:15.713761 | orchestrator | Sunday 08 February 2026 04:28:48 +0000 (0:00:00.278) 0:00:00.278 *******
2026-02-08 04:29:15.713765 | orchestrator | ok: [testbed-node-0]
2026-02-08 04:29:15.713770 | orchestrator | ok: [testbed-node-1]
2026-02-08 04:29:15.713774 | orchestrator | ok: [testbed-node-2]
2026-02-08 04:29:15.713778 | orchestrator |
2026-02-08 04:29:15.713782 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-02-08 04:29:15.713786 | orchestrator | Sunday 08 February 2026 04:28:48 +0000 (0:00:00.327) 0:00:00.605 *******
2026-02-08 04:29:15.713790 | orchestrator | ok: [testbed-node-0] => (item=enable_designate_True)
2026-02-08 04:29:15.713794 | orchestrator | ok: [testbed-node-1] => (item=enable_designate_True)
2026-02-08 04:29:15.713798 | orchestrator | ok: [testbed-node-2] => (item=enable_designate_True)
2026-02-08 04:29:15.713802 | orchestrator |
2026-02-08 04:29:15.713806 | orchestrator | PLAY [Apply role designate] ****************************************************
2026-02-08 04:29:15.713809 | orchestrator |
2026-02-08 04:29:15.713813 | orchestrator | TASK [designate : include_tasks] ***********************************************
2026-02-08 04:29:15.713817 | orchestrator | Sunday 08 February 2026 04:28:49 +0000 (0:00:00.462) 0:00:01.068 *******
2026-02-08 04:29:15.713821 | orchestrator | included: /ansible/roles/designate/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-08 04:29:15.713826 | orchestrator |
2026-02-08 04:29:15.713829 | orchestrator | TASK [service-ks-register : designate | Creating services] *********************
2026-02-08 04:29:15.713833 | orchestrator | Sunday 08 February 2026 04:28:50 +0000 (0:00:00.587) 0:00:01.656 *******
2026-02-08 04:29:15.713837 | orchestrator | changed: [testbed-node-0] => (item=designate (dns))
2026-02-08 04:29:15.713841 | orchestrator |
2026-02-08 04:29:15.713844 | orchestrator | TASK [service-ks-register : designate | Creating endpoints] ********************
2026-02-08 04:29:15.713848 | orchestrator | Sunday 08 February 2026 04:28:53 +0000 (0:00:03.307) 0:00:04.964 *******
2026-02-08 04:29:15.713868 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api-int.testbed.osism.xyz:9001 -> internal)
2026-02-08 04:29:15.713872 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api.testbed.osism.xyz:9001 -> public)
2026-02-08 04:29:15.713876 | orchestrator |
2026-02-08 04:29:15.713880 | orchestrator | TASK [service-ks-register : designate | Creating projects] *********************
2026-02-08 04:29:15.713884 | orchestrator | Sunday 08 February 2026 04:28:59 +0000 (0:00:06.213) 0:00:11.178 *******
2026-02-08 04:29:15.713887 | orchestrator | ok: [testbed-node-0] => (item=service)
2026-02-08 04:29:15.713891 | orchestrator |
2026-02-08 04:29:15.713895 | orchestrator | TASK [service-ks-register : designate | Creating users] ************************
2026-02-08 04:29:15.713899 | orchestrator | Sunday 08 February 2026 04:29:02 +0000 (0:00:03.120) 0:00:14.298 *******
2026-02-08 04:29:15.713902 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-02-08 04:29:15.713906 | orchestrator | changed: [testbed-node-0] => (item=designate -> service)
2026-02-08 04:29:15.713910 | orchestrator |
2026-02-08 04:29:15.713914 | orchestrator | TASK [service-ks-register : designate | Creating roles] ************************
2026-02-08 04:29:15.713917 | orchestrator | Sunday 08 February 2026 04:29:06 +0000 (0:00:03.228) 0:00:18.416 *******
2026-02-08 04:29:15.713921 | orchestrator | ok: [testbed-node-0] =>
(item=admin) 2026-02-08 04:29:15.713925 | orchestrator | 2026-02-08 04:29:15.713929 | orchestrator | TASK [service-ks-register : designate | Granting user roles] ******************* 2026-02-08 04:29:15.713932 | orchestrator | Sunday 08 February 2026 04:29:10 +0000 (0:00:03.228) 0:00:21.645 ******* 2026-02-08 04:29:15.713936 | orchestrator | changed: [testbed-node-0] => (item=designate -> service -> admin) 2026-02-08 04:29:15.713940 | orchestrator | 2026-02-08 04:29:15.713944 | orchestrator | TASK [designate : Ensuring config directories exist] *************************** 2026-02-08 04:29:15.713948 | orchestrator | Sunday 08 February 2026 04:29:13 +0000 (0:00:03.661) 0:00:25.307 ******* 2026-02-08 04:29:15.713953 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-02-08 04:29:15.713973 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-02-08 04:29:15.713978 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-02-08 04:29:15.713987 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-02-08 04:29:15.713993 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-02-08 04:29:15.713997 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-02-08 04:29:15.714001 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-02-08 04:29:15.714011 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-02-08 04:29:21.802139 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-02-08 04:29:21.802242 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-02-08 04:29:21.802252 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-02-08 04:29:21.802258 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-02-08 04:29:21.802263 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
designate-producer 5672'], 'timeout': '30'}}}) 2026-02-08 04:29:21.802269 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-02-08 04:29:21.802297 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-02-08 04:29:21.802308 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-02-08 
04:29:21.802314 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-02-08 04:29:21.802319 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-02-08 04:29:21.802325 | orchestrator | 2026-02-08 04:29:21.802331 | orchestrator | TASK [designate : Check if policies shall be overwritten] ********************** 2026-02-08 04:29:21.802337 | orchestrator | Sunday 08 February 2026 04:29:16 +0000 (0:00:02.811) 0:00:28.119 ******* 2026-02-08 04:29:21.802343 | orchestrator | skipping: [testbed-node-0] 2026-02-08 04:29:21.802348 | orchestrator | 2026-02-08 04:29:21.802354 | orchestrator | TASK [designate : Set designate policy file] *********************************** 2026-02-08 04:29:21.802359 | orchestrator | Sunday 08 February 2026 04:29:16 +0000 (0:00:00.148) 0:00:28.267 ******* 2026-02-08 04:29:21.802364 | orchestrator | skipping: [testbed-node-0] 2026-02-08 
04:29:21.802369 | orchestrator | skipping: [testbed-node-1] 2026-02-08 04:29:21.802374 | orchestrator | skipping: [testbed-node-2] 2026-02-08 04:29:21.802379 | orchestrator | 2026-02-08 04:29:21.802384 | orchestrator | TASK [designate : include_tasks] *********************************************** 2026-02-08 04:29:21.802389 | orchestrator | Sunday 08 February 2026 04:29:17 +0000 (0:00:00.548) 0:00:28.815 ******* 2026-02-08 04:29:21.802395 | orchestrator | included: /ansible/roles/designate/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-08 04:29:21.802401 | orchestrator | 2026-02-08 04:29:21.802406 | orchestrator | TASK [service-cert-copy : designate | Copying over extra CA certificates] ****** 2026-02-08 04:29:21.802411 | orchestrator | Sunday 08 February 2026 04:29:17 +0000 (0:00:00.651) 0:00:29.467 ******* 2026-02-08 04:29:21.802420 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-02-08 04:29:21.802435 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-02-08 04:29:23.508269 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-02-08 04:29:23.508390 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': 
['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-02-08 04:29:23.508428 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-02-08 04:29:23.508446 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-02-08 04:29:23.508510 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-02-08 04:29:23.508553 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-02-08 04:29:23.508569 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-02-08 04:29:23.508584 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 
'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-02-08 04:29:23.508602 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-02-08 04:29:23.508618 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-02-08 04:29:23.508634 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-02-08 04:29:23.508671 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-02-08 04:29:23.508689 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-02-08 04:29:24.478486 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-02-08 04:29:24.478557 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-02-08 04:29:24.478564 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-02-08 04:29:24.478569 | orchestrator | 2026-02-08 04:29:24.478574 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS certificate] *** 2026-02-08 04:29:24.478579 | orchestrator | Sunday 08 February 2026 04:29:23 +0000 (0:00:05.672) 0:00:35.139 ******* 2026-02-08 04:29:24.478585 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-02-08 04:29:24.478621 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-02-08 04:29:24.478636 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-08 04:29:24.478642 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-08 04:29:24.478646 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-08 04:29:24.478651 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 
'timeout': '30'}}})  2026-02-08 04:29:24.478655 | orchestrator | skipping: [testbed-node-0] 2026-02-08 04:29:24.478664 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-02-08 04:29:24.478671 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-02-08 04:29:24.478675 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 
'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-08 04:29:24.478681 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-08 04:29:25.267983 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-08 04:29:25.268101 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': 
['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-02-08 04:29:25.268129 | orchestrator | skipping: [testbed-node-1] 2026-02-08 04:29:25.268137 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-02-08 04:29:25.268155 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-02-08 04:29:25.268162 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-08 04:29:25.268168 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-08 04:29:25.268186 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-08 
04:29:25.268193 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-02-08 04:29:25.268202 | orchestrator | skipping: [testbed-node-2] 2026-02-08 04:29:25.268208 | orchestrator | 2026-02-08 04:29:25.268214 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS key] *** 2026-02-08 04:29:25.268220 | orchestrator | Sunday 08 February 2026 04:29:24 +0000 (0:00:01.085) 0:00:36.225 ******* 2026-02-08 04:29:25.268226 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-02-08 04:29:25.268235 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-02-08 04:29:25.268241 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-08 04:29:25.268250 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-08 04:29:25.649744 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-08 04:29:25.649832 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-02-08 04:29:25.649865 | orchestrator | skipping: [testbed-node-0] 2026-02-08 04:29:25.649876 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': 
'9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-02-08 04:29:25.649898 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-02-08 04:29:25.649907 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-08 04:29:25.649914 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-08 04:29:25.649937 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-08 04:29:25.649952 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-02-08 04:29:25.649959 | orchestrator | skipping: [testbed-node-1] 2026-02-08 04:29:25.649967 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-02-08 04:29:25.649978 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-02-08 04:29:25.649985 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-08 04:29:25.649992 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-08 04:29:25.650005 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-08 04:29:29.868672 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-02-08 04:29:29.868781 | orchestrator | skipping: [testbed-node-2] 2026-02-08 04:29:29.868797 | orchestrator | 2026-02-08 04:29:29.868808 | orchestrator | TASK [designate : Copying over config.json files for services] ***************** 2026-02-08 
04:29:29.868821 | orchestrator | Sunday 08 February 2026 04:29:25 +0000 (0:00:01.058) 0:00:37.284 ******* 2026-02-08 04:29:29.868832 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-02-08 04:29:29.868859 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-02-08 04:29:29.868870 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-02-08 04:29:29.868897 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-02-08 04:29:29.868930 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-02-08 04:29:29.868941 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-02-08 04:29:29.868957 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-02-08 04:29:29.868967 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-02-08 04:29:29.868978 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-02-08 04:29:29.868988 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-02-08 04:29:29.869014 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-02-08 04:29:42.164598 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-02-08 04:29:42.164672 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-02-08 04:29:42.164691 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-02-08 04:29:42.164696 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-02-08 04:29:42.164701 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-02-08 04:29:42.164721 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-02-08 04:29:42.164736 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-02-08 04:29:42.164740 | orchestrator | 2026-02-08 04:29:42.164745 | orchestrator | TASK [designate : Copying over designate.conf] ********************************* 2026-02-08 04:29:42.164750 | orchestrator | Sunday 08 February 2026 04:29:31 +0000 (0:00:06.093) 0:00:43.377 ******* 2026-02-08 04:29:42.164755 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-02-08 04:29:42.164763 | orchestrator | changed: [testbed-node-2] => 
(item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-02-08 04:29:42.164767 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-02-08 04:29:42.164778 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-02-08 04:29:42.164788 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-02-08 04:29:50.554496 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-02-08 04:29:50.554582 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': 
{'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-02-08 04:29:50.554604 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-02-08 04:29:50.554610 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-02-08 04:29:50.554634 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-02-08 04:29:50.554642 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-02-08 04:29:50.554659 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-02-08 04:29:50.554665 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 
'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-02-08 04:29:50.554674 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-02-08 04:29:50.554680 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-02-08 04:29:50.554685 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': 
['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-02-08 04:29:50.554697 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-02-08 04:29:50.554702 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-02-08 04:29:50.554708 | orchestrator | 2026-02-08 04:29:50.554714 | orchestrator | TASK [designate : Copying over pools.yaml] ************************************* 2026-02-08 04:29:50.554720 | orchestrator | Sunday 08 February 2026 04:29:46 +0000 (0:00:15.058) 0:00:58.436 ******* 2026-02-08 04:29:50.554729 | orchestrator | changed: [testbed-node-1] => 
(item=/ansible/roles/designate/templates/pools.yaml.j2) 2026-02-08 04:29:54.855808 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2026-02-08 04:29:54.855910 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2026-02-08 04:29:54.855925 | orchestrator | 2026-02-08 04:29:54.855938 | orchestrator | TASK [designate : Copying over named.conf] ************************************* 2026-02-08 04:29:54.855950 | orchestrator | Sunday 08 February 2026 04:29:50 +0000 (0:00:03.751) 0:01:02.187 ******* 2026-02-08 04:29:54.855962 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/designate/templates/named.conf.j2) 2026-02-08 04:29:54.855973 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/designate/templates/named.conf.j2) 2026-02-08 04:29:54.855984 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/named.conf.j2) 2026-02-08 04:29:54.855995 | orchestrator | 2026-02-08 04:29:54.856006 | orchestrator | TASK [designate : Copying over rndc.conf] ************************************** 2026-02-08 04:29:54.856018 | orchestrator | Sunday 08 February 2026 04:29:52 +0000 (0:00:02.444) 0:01:04.631 ******* 2026-02-08 04:29:54.856097 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 
'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-02-08 04:29:54.856139 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-02-08 04:29:54.856151 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 
'listen_port': '9001'}}}})  2026-02-08 04:29:54.856182 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-02-08 04:29:54.856196 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-08 04:29:54.856208 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': 
'30'}}})  2026-02-08 04:29:54.856226 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-08 04:29:54.856245 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-02-08 04:29:54.856257 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 
'timeout': '30'}}})  2026-02-08 04:29:54.856269 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-08 04:29:54.856289 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-08 04:29:57.721899 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': 
'30'}}}) 2026-02-08 04:29:57.721986 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-08 04:29:57.722010 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-08 04:29:57.722097 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-08 04:29:57.722103 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-02-08 04:29:57.722109 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-02-08 04:29:57.722126 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-02-08 04:29:57.722130 | orchestrator | 2026-02-08 04:29:57.722136 | orchestrator | TASK [designate : Copying over rndc.key] 
*************************************** 2026-02-08 04:29:57.722143 | orchestrator | Sunday 08 February 2026 04:29:55 +0000 (0:00:02.962) 0:01:07.593 ******* 2026-02-08 04:29:57.722154 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-02-08 04:29:57.722169 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-02-08 
04:29:57.722175 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-02-08 04:29:57.722182 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-02-08 04:29:57.722192 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-08 04:29:58.705397 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-08 04:29:58.705562 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-08 04:29:58.705595 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-02-08 04:29:58.705616 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-08 04:29:58.705638 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-08 04:29:58.705657 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-08 04:29:58.705705 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-02-08 04:29:58.705737 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-08 04:29:58.705749 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-08 04:29:58.705761 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-08 04:29:58.705772 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-02-08 04:29:58.705784 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-02-08 04:29:58.705795 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-02-08 04:29:58.705807 | orchestrator | 2026-02-08 04:29:58.705820 | orchestrator | TASK [designate : include_tasks] *********************************************** 2026-02-08 04:29:58.705841 | orchestrator | Sunday 08 February 2026 04:29:58 +0000 (0:00:02.741) 0:01:10.335 ******* 2026-02-08 04:29:59.789715 | orchestrator | skipping: [testbed-node-0] 2026-02-08 04:29:59.789811 | orchestrator | skipping: [testbed-node-1] 2026-02-08 04:29:59.789828 | orchestrator | skipping: [testbed-node-2] 2026-02-08 04:29:59.789843 | orchestrator | 2026-02-08 04:29:59.789858 | orchestrator | TASK [designate : Copying over existing policy file] *************************** 2026-02-08 04:29:59.789873 | orchestrator | Sunday 08 February 2026 04:29:59 +0000 (0:00:00.317) 0:01:10.652 ******* 2026-02-08 04:29:59.789903 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-02-08 04:29:59.789923 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-02-08 04:29:59.789939 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 
'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-02-08 04:29:59.789953 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-02-08 04:29:59.789968 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-08 04:29:59.790093 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-08 04:29:59.790120 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-08 04:29:59.790138 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-08 04:29:59.790153 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
designate-producer 5672'], 'timeout': '30'}}})  2026-02-08 04:29:59.790169 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-08 04:29:59.790185 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-02-08 04:29:59.790216 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-02-08 
04:30:03.040106 | orchestrator | skipping: [testbed-node-2] 2026-02-08 04:30:03.040195 | orchestrator | skipping: [testbed-node-0] 2026-02-08 04:30:03.040219 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-02-08 04:30:03.040230 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-02-08 04:30:03.040238 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-08 04:30:03.040245 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-08 04:30:03.040253 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-08 04:30:03.040281 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 
'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-02-08 04:30:03.040300 | orchestrator | skipping: [testbed-node-1] 2026-02-08 04:30:03.040307 | orchestrator | 2026-02-08 04:30:03.040313 | orchestrator | TASK [designate : Check designate containers] ********************************** 2026-02-08 04:30:03.040320 | orchestrator | Sunday 08 February 2026 04:29:59 +0000 (0:00:00.897) 0:01:11.549 ******* 2026-02-08 04:30:03.040330 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-02-08 04:30:03.040338 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-02-08 04:30:03.040344 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-02-08 04:30:03.040350 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 
'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-02-08 04:30:03.040368 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-02-08 04:30:04.922862 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-02-08 04:30:04.922967 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-02-08 04:30:04.922978 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-02-08 04:30:04.922985 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-02-08 04:30:04.922992 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-02-08 04:30:04.923097 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-02-08 04:30:04.923125 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-02-08 04:30:04.923138 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-02-08 04:30:04.923145 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-02-08 04:30:04.923152 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-02-08 04:30:04.923160 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-02-08 04:30:04.923172 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-02-08 04:30:04.923179 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-02-08 04:30:04.923187 | orchestrator | 2026-02-08 04:30:04.923194 | orchestrator | TASK [designate : include_tasks] *********************************************** 2026-02-08 04:30:04.923203 | orchestrator | Sunday 08 February 2026 04:30:04 +0000 (0:00:04.658) 0:01:16.208 ******* 2026-02-08 04:30:04.923210 | orchestrator | skipping: [testbed-node-0] 2026-02-08 04:30:04.923222 | orchestrator | skipping: [testbed-node-1] 2026-02-08 04:31:20.270383 | orchestrator | skipping: [testbed-node-2] 2026-02-08 04:31:20.270518 | orchestrator | 2026-02-08 04:31:20.270535 | orchestrator | TASK [designate : Creating Designate databases] 
******************************** 2026-02-08 04:31:20.270549 | orchestrator | Sunday 08 February 2026 04:30:04 +0000 (0:00:00.346) 0:01:16.555 ******* 2026-02-08 04:31:20.270561 | orchestrator | changed: [testbed-node-0] => (item=designate) 2026-02-08 04:31:20.270572 | orchestrator | 2026-02-08 04:31:20.270583 | orchestrator | TASK [designate : Creating Designate databases user and setting permissions] *** 2026-02-08 04:31:20.270594 | orchestrator | Sunday 08 February 2026 04:30:07 +0000 (0:00:02.216) 0:01:18.771 ******* 2026-02-08 04:31:20.270605 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-02-08 04:31:20.270617 | orchestrator | changed: [testbed-node-0 -> {{ groups['designate-central'][0] }}] 2026-02-08 04:31:20.270628 | orchestrator | 2026-02-08 04:31:20.270638 | orchestrator | TASK [designate : Running Designate bootstrap container] *********************** 2026-02-08 04:31:20.270649 | orchestrator | Sunday 08 February 2026 04:30:09 +0000 (0:00:02.196) 0:01:20.967 ******* 2026-02-08 04:31:20.270676 | orchestrator | changed: [testbed-node-0] 2026-02-08 04:31:20.270688 | orchestrator | 2026-02-08 04:31:20.270699 | orchestrator | TASK [designate : Flush handlers] ********************************************** 2026-02-08 04:31:20.270710 | orchestrator | Sunday 08 February 2026 04:30:24 +0000 (0:00:15.351) 0:01:36.319 ******* 2026-02-08 04:31:20.270721 | orchestrator | 2026-02-08 04:31:20.270732 | orchestrator | TASK [designate : Flush handlers] ********************************************** 2026-02-08 04:31:20.270742 | orchestrator | Sunday 08 February 2026 04:30:24 +0000 (0:00:00.074) 0:01:36.394 ******* 2026-02-08 04:31:20.270754 | orchestrator | 2026-02-08 04:31:20.270775 | orchestrator | TASK [designate : Flush handlers] ********************************************** 2026-02-08 04:31:20.270794 | orchestrator | Sunday 08 February 2026 04:30:24 +0000 (0:00:00.074) 0:01:36.468 ******* 2026-02-08 04:31:20.270814 | orchestrator | 2026-02-08 
04:31:20.270833 | orchestrator | RUNNING HANDLER [designate : Restart designate-backend-bind9 container] ******** 2026-02-08 04:31:20.270848 | orchestrator | Sunday 08 February 2026 04:30:24 +0000 (0:00:00.072) 0:01:36.540 ******* 2026-02-08 04:31:20.270863 | orchestrator | changed: [testbed-node-0] 2026-02-08 04:31:20.270882 | orchestrator | changed: [testbed-node-1] 2026-02-08 04:31:20.270901 | orchestrator | changed: [testbed-node-2] 2026-02-08 04:31:20.270952 | orchestrator | 2026-02-08 04:31:20.270974 | orchestrator | RUNNING HANDLER [designate : Restart designate-api container] ****************** 2026-02-08 04:31:20.270993 | orchestrator | Sunday 08 February 2026 04:30:37 +0000 (0:00:12.820) 0:01:49.361 ******* 2026-02-08 04:31:20.271014 | orchestrator | changed: [testbed-node-1] 2026-02-08 04:31:20.271032 | orchestrator | changed: [testbed-node-2] 2026-02-08 04:31:20.271122 | orchestrator | changed: [testbed-node-0] 2026-02-08 04:31:20.271143 | orchestrator | 2026-02-08 04:31:20.271163 | orchestrator | RUNNING HANDLER [designate : Restart designate-central container] ************** 2026-02-08 04:31:20.271184 | orchestrator | Sunday 08 February 2026 04:30:46 +0000 (0:00:08.747) 0:01:58.108 ******* 2026-02-08 04:31:20.271204 | orchestrator | changed: [testbed-node-0] 2026-02-08 04:31:20.271223 | orchestrator | changed: [testbed-node-1] 2026-02-08 04:31:20.271242 | orchestrator | changed: [testbed-node-2] 2026-02-08 04:31:20.271254 | orchestrator | 2026-02-08 04:31:20.271268 | orchestrator | RUNNING HANDLER [designate : Restart designate-producer container] ************* 2026-02-08 04:31:20.271282 | orchestrator | Sunday 08 February 2026 04:30:52 +0000 (0:00:05.692) 0:02:03.801 ******* 2026-02-08 04:31:20.271293 | orchestrator | changed: [testbed-node-0] 2026-02-08 04:31:20.271304 | orchestrator | changed: [testbed-node-1] 2026-02-08 04:31:20.271315 | orchestrator | changed: [testbed-node-2] 2026-02-08 04:31:20.271325 | orchestrator | 2026-02-08 04:31:20.271336 
| orchestrator | RUNNING HANDLER [designate : Restart designate-mdns container] ***************** 2026-02-08 04:31:20.271347 | orchestrator | Sunday 08 February 2026 04:30:57 +0000 (0:00:05.794) 0:02:09.596 ******* 2026-02-08 04:31:20.271358 | orchestrator | changed: [testbed-node-0] 2026-02-08 04:31:20.271369 | orchestrator | changed: [testbed-node-1] 2026-02-08 04:31:20.271380 | orchestrator | changed: [testbed-node-2] 2026-02-08 04:31:20.271390 | orchestrator | 2026-02-08 04:31:20.271401 | orchestrator | RUNNING HANDLER [designate : Restart designate-worker container] *************** 2026-02-08 04:31:20.271419 | orchestrator | Sunday 08 February 2026 04:31:04 +0000 (0:00:06.150) 0:02:15.746 ******* 2026-02-08 04:31:20.271438 | orchestrator | changed: [testbed-node-1] 2026-02-08 04:31:20.271497 | orchestrator | changed: [testbed-node-2] 2026-02-08 04:31:20.271517 | orchestrator | changed: [testbed-node-0] 2026-02-08 04:31:20.271535 | orchestrator | 2026-02-08 04:31:20.271548 | orchestrator | TASK [designate : Non-destructive DNS pools update] **************************** 2026-02-08 04:31:20.271559 | orchestrator | Sunday 08 February 2026 04:31:12 +0000 (0:00:08.747) 0:02:24.493 ******* 2026-02-08 04:31:20.271570 | orchestrator | changed: [testbed-node-0] 2026-02-08 04:31:20.271581 | orchestrator | 2026-02-08 04:31:20.271591 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-08 04:31:20.271604 | orchestrator | testbed-node-0 : ok=29  changed=23  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-02-08 04:31:20.271617 | orchestrator | testbed-node-1 : ok=19  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-02-08 04:31:20.271628 | orchestrator | testbed-node-2 : ok=19  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-02-08 04:31:20.271638 | orchestrator | 2026-02-08 04:31:20.271649 | orchestrator | 2026-02-08 04:31:20.271660 | orchestrator | TASKS RECAP 
******************************************************************** 2026-02-08 04:31:20.271671 | orchestrator | Sunday 08 February 2026 04:31:19 +0000 (0:00:06.955) 0:02:31.448 ******* 2026-02-08 04:31:20.271681 | orchestrator | =============================================================================== 2026-02-08 04:31:20.271692 | orchestrator | designate : Running Designate bootstrap container ---------------------- 15.35s 2026-02-08 04:31:20.271703 | orchestrator | designate : Copying over designate.conf -------------------------------- 15.06s 2026-02-08 04:31:20.271765 | orchestrator | designate : Restart designate-backend-bind9 container ------------------ 12.82s 2026-02-08 04:31:20.271786 | orchestrator | designate : Restart designate-api container ----------------------------- 8.75s 2026-02-08 04:31:20.271821 | orchestrator | designate : Restart designate-worker container -------------------------- 8.75s 2026-02-08 04:31:20.271839 | orchestrator | designate : Non-destructive DNS pools update ---------------------------- 6.96s 2026-02-08 04:31:20.271851 | orchestrator | service-ks-register : designate | Creating endpoints -------------------- 6.21s 2026-02-08 04:31:20.271862 | orchestrator | designate : Restart designate-mdns container ---------------------------- 6.15s 2026-02-08 04:31:20.271872 | orchestrator | designate : Copying over config.json files for services ----------------- 6.09s 2026-02-08 04:31:20.271883 | orchestrator | designate : Restart designate-producer container ------------------------ 5.79s 2026-02-08 04:31:20.271894 | orchestrator | designate : Restart designate-central container ------------------------- 5.69s 2026-02-08 04:31:20.271912 | orchestrator | service-cert-copy : designate | Copying over extra CA certificates ------ 5.67s 2026-02-08 04:31:20.271924 | orchestrator | designate : Check designate containers ---------------------------------- 4.66s 2026-02-08 04:31:20.271934 | orchestrator | service-ks-register : designate | 
Creating users ------------------------ 4.12s 2026-02-08 04:31:20.271945 | orchestrator | designate : Copying over pools.yaml ------------------------------------- 3.75s 2026-02-08 04:31:20.271956 | orchestrator | service-ks-register : designate | Granting user roles ------------------- 3.66s 2026-02-08 04:31:20.271967 | orchestrator | service-ks-register : designate | Creating services --------------------- 3.31s 2026-02-08 04:31:20.271978 | orchestrator | service-ks-register : designate | Creating roles ------------------------ 3.23s 2026-02-08 04:31:20.271988 | orchestrator | service-ks-register : designate | Creating projects --------------------- 3.12s 2026-02-08 04:31:20.271999 | orchestrator | designate : Copying over rndc.conf -------------------------------------- 2.96s 2026-02-08 04:31:22.755678 | orchestrator | 2026-02-08 04:31:22 | INFO  | Task 6fdfdd09-a1a6-419d-851e-d04277f9948f (octavia) was prepared for execution. 2026-02-08 04:31:22.755779 | orchestrator | 2026-02-08 04:31:22 | INFO  | It takes a moment until task 6fdfdd09-a1a6-419d-851e-d04277f9948f (octavia) has been started and output is visible here. 
2026-02-08 04:33:24.856783 | orchestrator | 2026-02-08 04:33:24.856902 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-08 04:33:24.856921 | orchestrator | 2026-02-08 04:33:24.856934 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-08 04:33:24.856945 | orchestrator | Sunday 08 February 2026 04:31:27 +0000 (0:00:00.303) 0:00:00.303 ******* 2026-02-08 04:33:24.856957 | orchestrator | ok: [testbed-node-0] 2026-02-08 04:33:24.856969 | orchestrator | ok: [testbed-node-1] 2026-02-08 04:33:24.856980 | orchestrator | ok: [testbed-node-2] 2026-02-08 04:33:24.856992 | orchestrator | 2026-02-08 04:33:24.857004 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-08 04:33:24.857017 | orchestrator | Sunday 08 February 2026 04:31:27 +0000 (0:00:00.337) 0:00:00.640 ******* 2026-02-08 04:33:24.857029 | orchestrator | ok: [testbed-node-0] => (item=enable_octavia_True) 2026-02-08 04:33:24.857043 | orchestrator | ok: [testbed-node-1] => (item=enable_octavia_True) 2026-02-08 04:33:24.857115 | orchestrator | ok: [testbed-node-2] => (item=enable_octavia_True) 2026-02-08 04:33:24.857130 | orchestrator | 2026-02-08 04:33:24.857141 | orchestrator | PLAY [Apply role octavia] ****************************************************** 2026-02-08 04:33:24.857152 | orchestrator | 2026-02-08 04:33:24.857163 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-02-08 04:33:24.857174 | orchestrator | Sunday 08 February 2026 04:31:28 +0000 (0:00:00.487) 0:00:01.127 ******* 2026-02-08 04:33:24.857188 | orchestrator | included: /ansible/roles/octavia/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-08 04:33:24.857200 | orchestrator | 2026-02-08 04:33:24.857210 | orchestrator | TASK [service-ks-register : octavia | Creating services] *********************** 
2026-02-08 04:33:24.857221 | orchestrator | Sunday 08 February 2026 04:31:28 +0000 (0:00:00.588) 0:00:01.716 ******* 2026-02-08 04:33:24.857233 | orchestrator | changed: [testbed-node-0] => (item=octavia (load-balancer)) 2026-02-08 04:33:24.857273 | orchestrator | 2026-02-08 04:33:24.857286 | orchestrator | TASK [service-ks-register : octavia | Creating endpoints] ********************** 2026-02-08 04:33:24.857298 | orchestrator | Sunday 08 February 2026 04:31:32 +0000 (0:00:03.293) 0:00:05.010 ******* 2026-02-08 04:33:24.857310 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api-int.testbed.osism.xyz:9876 -> internal) 2026-02-08 04:33:24.857322 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api.testbed.osism.xyz:9876 -> public) 2026-02-08 04:33:24.857333 | orchestrator | 2026-02-08 04:33:24.857345 | orchestrator | TASK [service-ks-register : octavia | Creating projects] *********************** 2026-02-08 04:33:24.857358 | orchestrator | Sunday 08 February 2026 04:31:38 +0000 (0:00:06.338) 0:00:11.348 ******* 2026-02-08 04:33:24.857370 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-02-08 04:33:24.857383 | orchestrator | 2026-02-08 04:33:24.857394 | orchestrator | TASK [service-ks-register : octavia | Creating users] ************************** 2026-02-08 04:33:24.857408 | orchestrator | Sunday 08 February 2026 04:31:41 +0000 (0:00:03.125) 0:00:14.473 ******* 2026-02-08 04:33:24.857420 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-02-08 04:33:24.857433 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service) 2026-02-08 04:33:24.857463 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service) 2026-02-08 04:33:24.857474 | orchestrator | 2026-02-08 04:33:24.857485 | orchestrator | TASK [service-ks-register : octavia | Creating roles] ************************** 2026-02-08 04:33:24.857497 | orchestrator | Sunday 08 February 2026 04:31:49 +0000 
(0:00:08.028) 0:00:22.502 ******* 2026-02-08 04:33:24.857509 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-02-08 04:33:24.857521 | orchestrator | 2026-02-08 04:33:24.857532 | orchestrator | TASK [service-ks-register : octavia | Granting user roles] ********************* 2026-02-08 04:33:24.857543 | orchestrator | Sunday 08 February 2026 04:31:52 +0000 (0:00:03.164) 0:00:25.666 ******* 2026-02-08 04:33:24.857555 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service -> admin) 2026-02-08 04:33:24.857567 | orchestrator | ok: [testbed-node-0] => (item=octavia -> service -> admin) 2026-02-08 04:33:24.857578 | orchestrator | 2026-02-08 04:33:24.857590 | orchestrator | TASK [octavia : Adding octavia related roles] ********************************** 2026-02-08 04:33:24.857602 | orchestrator | Sunday 08 February 2026 04:31:59 +0000 (0:00:07.103) 0:00:32.770 ******* 2026-02-08 04:33:24.857615 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_observer) 2026-02-08 04:33:24.857627 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_global_observer) 2026-02-08 04:33:24.857655 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_member) 2026-02-08 04:33:24.857668 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_admin) 2026-02-08 04:33:24.857680 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_quota_admin) 2026-02-08 04:33:24.857691 | orchestrator | 2026-02-08 04:33:24.857702 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-02-08 04:33:24.857712 | orchestrator | Sunday 08 February 2026 04:32:15 +0000 (0:00:15.349) 0:00:48.119 ******* 2026-02-08 04:33:24.857723 | orchestrator | included: /ansible/roles/octavia/tasks/prepare.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-08 04:33:24.857733 | orchestrator | 2026-02-08 04:33:24.857743 | orchestrator | TASK [octavia : Create amphora flavor] 
***************************************** 2026-02-08 04:33:24.857753 | orchestrator | Sunday 08 February 2026 04:32:15 +0000 (0:00:00.794) 0:00:48.914 ******* 2026-02-08 04:33:24.857764 | orchestrator | changed: [testbed-node-0] 2026-02-08 04:33:24.857775 | orchestrator | 2026-02-08 04:33:24.857786 | orchestrator | TASK [octavia : Create nova keypair for amphora] ******************************* 2026-02-08 04:33:24.857797 | orchestrator | Sunday 08 February 2026 04:32:20 +0000 (0:00:04.460) 0:00:53.374 ******* 2026-02-08 04:33:24.857807 | orchestrator | changed: [testbed-node-0] 2026-02-08 04:33:24.857818 | orchestrator | 2026-02-08 04:33:24.857828 | orchestrator | TASK [octavia : Get service project id] **************************************** 2026-02-08 04:33:24.857878 | orchestrator | Sunday 08 February 2026 04:32:24 +0000 (0:00:04.004) 0:00:57.379 ******* 2026-02-08 04:33:24.857893 | orchestrator | ok: [testbed-node-0] 2026-02-08 04:33:24.857922 | orchestrator | 2026-02-08 04:33:24.857933 | orchestrator | TASK [octavia : Create security groups for octavia] **************************** 2026-02-08 04:33:24.857944 | orchestrator | Sunday 08 February 2026 04:32:27 +0000 (0:00:03.013) 0:01:00.392 ******* 2026-02-08 04:33:24.857955 | orchestrator | changed: [testbed-node-0] => (item=lb-mgmt-sec-grp) 2026-02-08 04:33:24.857966 | orchestrator | changed: [testbed-node-0] => (item=lb-health-mgr-sec-grp) 2026-02-08 04:33:24.857978 | orchestrator | 2026-02-08 04:33:24.857989 | orchestrator | TASK [octavia : Add rules for security groups] ********************************* 2026-02-08 04:33:24.858000 | orchestrator | Sunday 08 February 2026 04:32:36 +0000 (0:00:09.479) 0:01:09.872 ******* 2026-02-08 04:33:24.858013 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'icmp'}]) 2026-02-08 04:33:24.858116 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 
'tcp', 'src_port': 22, 'dst_port': 22}]) 2026-02-08 04:33:24.858130 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'tcp', 'src_port': '9443', 'dst_port': '9443'}]) 2026-02-08 04:33:24.858143 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-health-mgr-sec-grp', 'enabled': True}, {'protocol': 'udp', 'src_port': '5555', 'dst_port': '5555'}]) 2026-02-08 04:33:24.858154 | orchestrator | 2026-02-08 04:33:24.858165 | orchestrator | TASK [octavia : Create loadbalancer management network] ************************ 2026-02-08 04:33:24.858177 | orchestrator | Sunday 08 February 2026 04:32:52 +0000 (0:00:15.417) 0:01:25.290 ******* 2026-02-08 04:33:24.858187 | orchestrator | changed: [testbed-node-0] 2026-02-08 04:33:24.858199 | orchestrator | 2026-02-08 04:33:24.858210 | orchestrator | TASK [octavia : Create loadbalancer management subnet] ************************* 2026-02-08 04:33:24.858221 | orchestrator | Sunday 08 February 2026 04:32:57 +0000 (0:00:04.797) 0:01:30.088 ******* 2026-02-08 04:33:24.858235 | orchestrator | changed: [testbed-node-0] 2026-02-08 04:33:24.858246 | orchestrator | 2026-02-08 04:33:24.858256 | orchestrator | TASK [octavia : Create loadbalancer management router for IPv6] **************** 2026-02-08 04:33:24.858267 | orchestrator | Sunday 08 February 2026 04:33:01 +0000 (0:00:04.799) 0:01:34.887 ******* 2026-02-08 04:33:24.858277 | orchestrator | skipping: [testbed-node-0] 2026-02-08 04:33:24.858288 | orchestrator | 2026-02-08 04:33:24.858298 | orchestrator | TASK [octavia : Update loadbalancer management subnet] ************************* 2026-02-08 04:33:24.858309 | orchestrator | Sunday 08 February 2026 04:33:02 +0000 (0:00:00.216) 0:01:35.104 ******* 2026-02-08 04:33:24.858320 | orchestrator | ok: [testbed-node-0] 2026-02-08 04:33:24.858331 | orchestrator | 2026-02-08 04:33:24.858341 | orchestrator | TASK [octavia : include_tasks] 
************************************************* 2026-02-08 04:33:24.858352 | orchestrator | Sunday 08 February 2026 04:33:06 +0000 (0:00:04.393) 0:01:39.497 ******* 2026-02-08 04:33:24.858363 | orchestrator | included: /ansible/roles/octavia/tasks/hm-interface.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-08 04:33:24.858374 | orchestrator | 2026-02-08 04:33:24.858384 | orchestrator | TASK [octavia : Create ports for Octavia health-manager nodes] ***************** 2026-02-08 04:33:24.858394 | orchestrator | Sunday 08 February 2026 04:33:07 +0000 (0:00:01.194) 0:01:40.691 ******* 2026-02-08 04:33:24.858405 | orchestrator | changed: [testbed-node-0] 2026-02-08 04:33:24.858415 | orchestrator | changed: [testbed-node-1] 2026-02-08 04:33:24.858442 | orchestrator | changed: [testbed-node-2] 2026-02-08 04:33:24.858454 | orchestrator | 2026-02-08 04:33:24.858465 | orchestrator | TASK [octavia : Update Octavia health manager port host_id] ******************** 2026-02-08 04:33:24.858476 | orchestrator | Sunday 08 February 2026 04:33:12 +0000 (0:00:05.151) 0:01:45.842 ******* 2026-02-08 04:33:24.858486 | orchestrator | changed: [testbed-node-1] 2026-02-08 04:33:24.858496 | orchestrator | changed: [testbed-node-2] 2026-02-08 04:33:24.858508 | orchestrator | changed: [testbed-node-0] 2026-02-08 04:33:24.858533 | orchestrator | 2026-02-08 04:33:24.858544 | orchestrator | TASK [octavia : Add Octavia port to openvswitch br-int] ************************ 2026-02-08 04:33:24.858556 | orchestrator | Sunday 08 February 2026 04:33:17 +0000 (0:00:04.378) 0:01:50.221 ******* 2026-02-08 04:33:24.858567 | orchestrator | changed: [testbed-node-0] 2026-02-08 04:33:24.858578 | orchestrator | changed: [testbed-node-1] 2026-02-08 04:33:24.858590 | orchestrator | changed: [testbed-node-2] 2026-02-08 04:33:24.858601 | orchestrator | 2026-02-08 04:33:24.858611 | orchestrator | TASK [octavia : Install isc-dhcp-client package] ******************************* 2026-02-08 
04:33:24.858632 | orchestrator | Sunday 08 February 2026 04:33:18 +0000 (0:00:01.032) 0:01:51.253 ******* 2026-02-08 04:33:24.858643 | orchestrator | ok: [testbed-node-2] 2026-02-08 04:33:24.858655 | orchestrator | ok: [testbed-node-1] 2026-02-08 04:33:24.858666 | orchestrator | ok: [testbed-node-0] 2026-02-08 04:33:24.858677 | orchestrator | 2026-02-08 04:33:24.858687 | orchestrator | TASK [octavia : Create octavia dhclient conf] ********************************** 2026-02-08 04:33:24.858698 | orchestrator | Sunday 08 February 2026 04:33:20 +0000 (0:00:01.869) 0:01:53.124 ******* 2026-02-08 04:33:24.858709 | orchestrator | changed: [testbed-node-1] 2026-02-08 04:33:24.858720 | orchestrator | changed: [testbed-node-2] 2026-02-08 04:33:24.858732 | orchestrator | changed: [testbed-node-0] 2026-02-08 04:33:24.858742 | orchestrator | 2026-02-08 04:33:24.858753 | orchestrator | TASK [octavia : Create octavia-interface service] ****************************** 2026-02-08 04:33:24.858766 | orchestrator | Sunday 08 February 2026 04:33:21 +0000 (0:00:01.245) 0:01:54.370 ******* 2026-02-08 04:33:24.858778 | orchestrator | changed: [testbed-node-0] 2026-02-08 04:33:24.858789 | orchestrator | changed: [testbed-node-1] 2026-02-08 04:33:24.858800 | orchestrator | changed: [testbed-node-2] 2026-02-08 04:33:24.858813 | orchestrator | 2026-02-08 04:33:24.858825 | orchestrator | TASK [octavia : Restart octavia-interface.service if required] ***************** 2026-02-08 04:33:24.858837 | orchestrator | Sunday 08 February 2026 04:33:22 +0000 (0:00:01.225) 0:01:55.595 ******* 2026-02-08 04:33:24.858848 | orchestrator | changed: [testbed-node-1] 2026-02-08 04:33:24.858859 | orchestrator | changed: [testbed-node-2] 2026-02-08 04:33:24.858869 | orchestrator | changed: [testbed-node-0] 2026-02-08 04:33:24.858880 | orchestrator | 2026-02-08 04:33:24.858908 | orchestrator | TASK [octavia : Enable and start octavia-interface.service] ******************** 2026-02-08 04:33:50.426262 | orchestrator 
| Sunday 08 February 2026 04:33:24 +0000 (0:00:02.247) 0:01:57.842 ******* 2026-02-08 04:33:50.426377 | orchestrator | changed: [testbed-node-0] 2026-02-08 04:33:50.426401 | orchestrator | changed: [testbed-node-2] 2026-02-08 04:33:50.426419 | orchestrator | changed: [testbed-node-1] 2026-02-08 04:33:50.426437 | orchestrator | 2026-02-08 04:33:50.426454 | orchestrator | TASK [octavia : Wait for interface ohm0 ip appear] ***************************** 2026-02-08 04:33:50.426471 | orchestrator | Sunday 08 February 2026 04:33:27 +0000 (0:00:02.355) 0:02:00.198 ******* 2026-02-08 04:33:50.426487 | orchestrator | ok: [testbed-node-0] 2026-02-08 04:33:50.426507 | orchestrator | ok: [testbed-node-1] 2026-02-08 04:33:50.426524 | orchestrator | ok: [testbed-node-2] 2026-02-08 04:33:50.426540 | orchestrator | 2026-02-08 04:33:50.426557 | orchestrator | TASK [octavia : Gather facts] ************************************************** 2026-02-08 04:33:50.426575 | orchestrator | Sunday 08 February 2026 04:33:27 +0000 (0:00:00.654) 0:02:00.852 ******* 2026-02-08 04:33:50.426593 | orchestrator | ok: [testbed-node-1] 2026-02-08 04:33:50.426609 | orchestrator | ok: [testbed-node-0] 2026-02-08 04:33:50.426620 | orchestrator | ok: [testbed-node-2] 2026-02-08 04:33:50.426631 | orchestrator | 2026-02-08 04:33:50.426642 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-02-08 04:33:50.426653 | orchestrator | Sunday 08 February 2026 04:33:31 +0000 (0:00:03.170) 0:02:04.023 ******* 2026-02-08 04:33:50.426665 | orchestrator | included: /ansible/roles/octavia/tasks/get_resources_info.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-08 04:33:50.426676 | orchestrator | 2026-02-08 04:33:50.426687 | orchestrator | TASK [octavia : Get amphora flavor info] *************************************** 2026-02-08 04:33:50.426736 | orchestrator | Sunday 08 February 2026 04:33:31 +0000 (0:00:00.533) 0:02:04.557 ******* 2026-02-08 
04:33:50.426757 | orchestrator | ok: [testbed-node-0] 2026-02-08 04:33:50.426776 | orchestrator | 2026-02-08 04:33:50.426790 | orchestrator | TASK [octavia : Get service project id] **************************************** 2026-02-08 04:33:50.426803 | orchestrator | Sunday 08 February 2026 04:33:34 +0000 (0:00:03.340) 0:02:07.897 ******* 2026-02-08 04:33:50.426821 | orchestrator | ok: [testbed-node-0] 2026-02-08 04:33:50.426842 | orchestrator | 2026-02-08 04:33:50.426863 | orchestrator | TASK [octavia : Get security groups for octavia] ******************************* 2026-02-08 04:33:50.426882 | orchestrator | Sunday 08 February 2026 04:33:37 +0000 (0:00:03.000) 0:02:10.898 ******* 2026-02-08 04:33:50.426902 | orchestrator | ok: [testbed-node-0] => (item=lb-mgmt-sec-grp) 2026-02-08 04:33:50.426922 | orchestrator | ok: [testbed-node-0] => (item=lb-health-mgr-sec-grp) 2026-02-08 04:33:50.426941 | orchestrator | 2026-02-08 04:33:50.426962 | orchestrator | TASK [octavia : Get loadbalancer management network] *************************** 2026-02-08 04:33:50.426982 | orchestrator | Sunday 08 February 2026 04:33:44 +0000 (0:00:06.510) 0:02:17.409 ******* 2026-02-08 04:33:50.426998 | orchestrator | ok: [testbed-node-0] 2026-02-08 04:33:50.427011 | orchestrator | 2026-02-08 04:33:50.427023 | orchestrator | TASK [octavia : Set octavia resources facts] *********************************** 2026-02-08 04:33:50.427035 | orchestrator | Sunday 08 February 2026 04:33:47 +0000 (0:00:03.320) 0:02:20.730 ******* 2026-02-08 04:33:50.427046 | orchestrator | ok: [testbed-node-0] 2026-02-08 04:33:50.427127 | orchestrator | ok: [testbed-node-1] 2026-02-08 04:33:50.427149 | orchestrator | ok: [testbed-node-2] 2026-02-08 04:33:50.427163 | orchestrator | 2026-02-08 04:33:50.427178 | orchestrator | TASK [octavia : Ensuring config directories exist] ***************************** 2026-02-08 04:33:50.427196 | orchestrator | Sunday 08 February 2026 04:33:48 +0000 (0:00:00.723) 0:02:21.454 ******* 
2026-02-08 04:33:50.427239 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-08 04:33:50.427289 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-08 04:33:50.427312 | 
orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-08 04:33:50.427346 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-02-08 04:33:50.427365 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 
'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-02-08 04:33:50.427377 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-02-08 04:33:50.427395 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-02-08 04:33:50.427411 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-02-08 04:33:50.427444 | 
orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-02-08 04:33:52.114749 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-02-08 04:33:52.114895 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-02-08 04:33:52.114912 
| orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-02-08 04:33:52.114926 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-02-08 04:33:52.114962 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-02-08 04:33:52.114977 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 
'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-02-08 04:33:52.115037 | orchestrator |
2026-02-08 04:33:52.115132 | orchestrator | TASK [octavia : Check if policies shall be overwritten] ************************
2026-02-08 04:33:52.115160 | orchestrator | Sunday 08 February 2026 04:33:50 +0000 (0:00:02.417) 0:02:23.871 *******
2026-02-08 04:33:52.115174 | orchestrator | skipping: [testbed-node-0]
2026-02-08 04:33:52.115186 | orchestrator |
2026-02-08 04:33:52.115198 | orchestrator | TASK [octavia : Set octavia policy file] ***************************************
2026-02-08 04:33:52.115209 | orchestrator | Sunday 08 February 2026 04:33:51 +0000 (0:00:00.155) 0:02:24.027 *******
2026-02-08 04:33:52.115220 | orchestrator | skipping: [testbed-node-0]
2026-02-08 04:33:52.115255 | orchestrator | skipping: [testbed-node-1]
2026-02-08 04:33:52.115267 | orchestrator | skipping: [testbed-node-2]
2026-02-08 04:33:52.115278 | orchestrator |
2026-02-08 04:33:52.115288 | orchestrator | TASK [octavia : Copying over existing policy file] *****************************
2026-02-08 04:33:52.115300 | orchestrator | Sunday 08 February 2026 04:33:51 +0000 (0:00:00.386) 0:02:24.413 *******
2026-02-08 04:33:52.115313 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro',
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-02-08 04:33:52.115327 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-02-08 04:33:52.115340 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-02-08 04:33:52.115361 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': 
{'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-02-08 04:33:52.115373 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-02-08 04:33:52.115396 | orchestrator | skipping: [testbed-node-0] 2026-02-08 04:33:52.115417 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 
'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-02-08 04:33:56.892148 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-02-08 04:33:56.892252 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-02-08 04:33:56.892260 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-02-08 04:33:56.892283 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-02-08 04:33:56.892289 | orchestrator | skipping: [testbed-node-1] 2026-02-08 04:33:56.892295 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-02-08 04:33:56.892324 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 
'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-02-08 04:33:56.892342 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-02-08 04:33:56.892347 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-02-08 04:33:56.892351 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 
'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-02-08 04:33:56.892355 | orchestrator | skipping: [testbed-node-2]
2026-02-08 04:33:56.892360 | orchestrator |
2026-02-08 04:33:56.892365 | orchestrator | TASK [octavia : include_tasks] *************************************************
2026-02-08 04:33:56.892371 | orchestrator | Sunday 08 February 2026 04:33:52 +0000 (0:00:00.797) 0:02:25.211 *******
2026-02-08 04:33:56.892376 | orchestrator | included: /ansible/roles/octavia/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-08 04:33:56.892380 | orchestrator |
2026-02-08 04:33:56.892385 | orchestrator | TASK [service-cert-copy : octavia | Copying over extra CA certificates] ********
2026-02-08 04:33:56.892388 | orchestrator | Sunday 08 February 2026 04:33:53 +0000 (0:00:00.815) 0:02:26.027 *******
2026-02-08 04:33:56.892402 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode':
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-08 04:33:56.892408 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-08 04:33:56.892416 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-08 04:33:58.556452 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-02-08 04:33:58.556555 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-02-08 04:33:58.556605 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-02-08 04:33:58.556618 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-02-08 04:33:58.556628 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-02-08 04:33:58.556637 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-02-08 04:33:58.556662 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-02-08 04:33:58.556673 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-02-08 04:33:58.556681 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-02-08 04:33:58.556722 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-02-08 04:33:58.556738 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-02-08 04:33:58.556754 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-02-08 04:33:58.556769 | orchestrator |
2026-02-08 04:33:58.556787 | orchestrator | TASK [service-cert-copy : octavia | Copying over backend internal TLS certificate] ***
2026-02-08 04:33:58.556805 | orchestrator | Sunday 08 February 2026 04:33:57 +0000 (0:00:04.876) 0:02:30.903 *******
2026-02-08 04:33:58.556836 |
orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-02-08 04:33:58.675613 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-02-08 04:33:58.675766 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-02-08 04:33:58.675803 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-02-08 04:33:58.675817 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-02-08 04:33:58.675828 | orchestrator | skipping: [testbed-node-0] 2026-02-08 04:33:58.675843 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 
'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-02-08 04:33:58.675855 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-02-08 04:33:58.675884 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-02-08 04:33:58.675905 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-02-08 04:33:58.675922 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-02-08 04:33:58.675934 | orchestrator | skipping: [testbed-node-1]
2026-02-08 04:33:58.675945 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-02-08 04:33:58.675955 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-02-08 04:33:58.675966 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-02-08 04:33:58.675986 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-02-08 04:33:59.590227 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-02-08 04:33:59.590343 | orchestrator | skipping: [testbed-node-2]
2026-02-08 04:33:59.590361 | orchestrator |
2026-02-08 04:33:59.590374 | orchestrator | TASK [service-cert-copy : octavia | Copying over backend internal TLS key] *****
2026-02-08 04:33:59.590386 | orchestrator | Sunday 08 February 2026 04:33:58 +0000 (0:00:00.767) 0:02:31.670 *******
2026-02-08 04:33:59.590417 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-02-08 04:33:59.590431 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-02-08 04:33:59.590444 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-02-08 04:33:59.590457 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-02-08 04:33:59.590488 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-02-08 04:33:59.590522 | orchestrator | skipping: [testbed-node-0]
2026-02-08 04:33:59.590539 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-02-08 04:33:59.590551 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-02-08 04:33:59.590563 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-02-08 04:33:59.590574 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-02-08 04:33:59.590586 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-02-08 04:33:59.590605 | orchestrator | skipping: [testbed-node-1]
2026-02-08 04:33:59.590625 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-02-08 04:34:04.257308 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-02-08 04:34:04.257410 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-02-08 04:34:04.257420 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-02-08 04:34:04.257428 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-02-08 04:34:04.257435 | orchestrator | skipping: [testbed-node-2]
2026-02-08 04:34:04.257442 | orchestrator |
2026-02-08 04:34:04.257448 | orchestrator | TASK [octavia : Copying over config.json files for services] *******************
2026-02-08 04:34:04.257455 | orchestrator | Sunday 08 February 2026 04:34:00 +0000 (0:00:01.468) 0:02:33.138 *******
2026-02-08 04:34:04.257462 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-02-08 04:34:04.257500 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-02-08 04:34:04.257512 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-02-08 04:34:04.257519 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-02-08 04:34:04.257525 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-02-08 04:34:04.257532 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-02-08 04:34:04.257543 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-02-08 04:34:04.257555 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-02-08 04:34:21.210009 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-02-08 04:34:21.210216 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-02-08 04:34:21.210228 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-02-08 04:34:21.210235 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-02-08 04:34:21.210260 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-02-08 04:34:21.210268 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-02-08 04:34:21.210307 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-02-08 04:34:21.210315 | orchestrator |
2026-02-08 04:34:21.210323 | orchestrator | TASK [octavia : Copying over octavia-wsgi.conf] ********************************
2026-02-08 04:34:21.210332 | orchestrator | Sunday 08 February 2026 04:34:05 +0000 (0:00:05.067) 0:02:38.206 *******
2026-02-08 04:34:21.210344 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2)
2026-02-08 04:34:21.210352 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2)
2026-02-08 04:34:21.210358 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2)
2026-02-08 04:34:21.210364 | orchestrator |
2026-02-08 04:34:21.210369 | orchestrator | TASK [octavia : Copying over octavia.conf] *************************************
2026-02-08 04:34:21.210375 | orchestrator | Sunday 08 February 2026 04:34:06 +0000 (0:00:01.654) 0:02:39.861 *******
2026-02-08 04:34:21.210382 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-02-08 04:34:21.210389 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-02-08 04:34:21.210401 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-02-08 04:34:21.210414 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-02-08 04:34:36.533713 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-02-08 04:34:36.533802 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-02-08 04:34:36.533827 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-02-08 04:34:36.533835 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-02-08 04:34:36.533868 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-02-08 04:34:36.533876 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-02-08 04:34:36.533895 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-02-08 04:34:36.533905 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-02-08 04:34:36.533912 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-02-08 04:34:36.533919 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-02-08 04:34:36.533930 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-02-08 04:34:36.533936 | orchestrator |
2026-02-08 04:34:36.533943 | orchestrator | TASK [octavia : Copying over Octavia SSH key] **********************************
2026-02-08 04:34:36.533950 | orchestrator | Sunday 08 February 2026 04:34:24 +0000 (0:00:17.672) 0:02:57.533 *******
2026-02-08 04:34:36.533956 | orchestrator | changed: [testbed-node-0]
2026-02-08 04:34:36.533963 | orchestrator | changed: [testbed-node-1]
2026-02-08 04:34:36.533969 | orchestrator | changed: [testbed-node-2]
2026-02-08 04:34:36.533975 | orchestrator |
2026-02-08 04:34:36.533981 | orchestrator | TASK [octavia : Copying certificate files for octavia-worker] ******************
2026-02-08 04:34:36.533987 | orchestrator | Sunday 08 February 2026 04:34:26 +0000 (0:00:01.847) 0:02:59.381 *******
2026-02-08 04:34:36.533993 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem)
2026-02-08 04:34:36.533999 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem)
2026-02-08 04:34:36.534005 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem)
2026-02-08 04:34:36.534011 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem)
2026-02-08 04:34:36.534108 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem)
2026-02-08 04:34:36.534123 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem)
2026-02-08 04:34:36.534136 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem)
2026-02-08 04:34:36.534147 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem)
2026-02-08 04:34:36.534156 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem)
2026-02-08 04:34:36.534167 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem)
2026-02-08 04:34:36.534178 | orchestrator
| changed: [testbed-node-2] => (item=server_ca.key.pem) 2026-02-08 04:34:36.534188 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem) 2026-02-08 04:34:36.534198 | orchestrator | 2026-02-08 04:34:36.534207 | orchestrator | TASK [octavia : Copying certificate files for octavia-housekeeping] ************ 2026-02-08 04:34:36.534214 | orchestrator | Sunday 08 February 2026 04:34:31 +0000 (0:00:04.971) 0:03:04.353 ******* 2026-02-08 04:34:36.534220 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem) 2026-02-08 04:34:36.534225 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem) 2026-02-08 04:34:36.534238 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem) 2026-02-08 04:34:44.983323 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem) 2026-02-08 04:34:44.983405 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem) 2026-02-08 04:34:44.983412 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem) 2026-02-08 04:34:44.983417 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem) 2026-02-08 04:34:44.983431 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem) 2026-02-08 04:34:44.983436 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem) 2026-02-08 04:34:44.983440 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem) 2026-02-08 04:34:44.983457 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem) 2026-02-08 04:34:44.983461 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem) 2026-02-08 04:34:44.983465 | orchestrator | 2026-02-08 04:34:44.983470 | orchestrator | TASK [octavia : Copying certificate files for octavia-health-manager] ********** 2026-02-08 04:34:44.983475 | orchestrator | Sunday 08 February 2026 04:34:36 +0000 (0:00:05.168) 0:03:09.521 ******* 2026-02-08 04:34:44.983479 | orchestrator | changed: [testbed-node-0] => 
(item=client.cert-and-key.pem) 2026-02-08 04:34:44.983483 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem) 2026-02-08 04:34:44.983487 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem) 2026-02-08 04:34:44.983490 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem) 2026-02-08 04:34:44.983494 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem) 2026-02-08 04:34:44.983498 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem) 2026-02-08 04:34:44.983502 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem) 2026-02-08 04:34:44.983505 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem) 2026-02-08 04:34:44.983509 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem) 2026-02-08 04:34:44.983513 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem) 2026-02-08 04:34:44.983517 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem) 2026-02-08 04:34:44.983521 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem) 2026-02-08 04:34:44.983524 | orchestrator | 2026-02-08 04:34:44.983528 | orchestrator | TASK [octavia : Check octavia containers] ************************************** 2026-02-08 04:34:44.983532 | orchestrator | Sunday 08 February 2026 04:34:41 +0000 (0:00:05.298) 0:03:14.820 ******* 2026-02-08 04:34:44.983538 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-08 04:34:44.983545 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-08 04:34:44.983585 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-08 04:34:44.983597 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-02-08 04:34:44.983602 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-02-08 04:34:44.983606 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 
'dimensions': {}}}) 2026-02-08 04:34:44.983611 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-02-08 04:34:44.983616 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-02-08 04:34:44.983620 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-02-08 04:34:44.983634 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-02-08 04:36:11.043243 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-02-08 04:36:11.043377 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-02-08 04:36:11.043394 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-02-08 04:36:11.043405 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-02-08 04:36:11.043415 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-02-08 04:36:11.043451 | orchestrator | 2026-02-08 
04:36:11.043464 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-02-08 04:36:11.043475 | orchestrator | Sunday 08 February 2026 04:34:45 +0000 (0:00:04.065) 0:03:18.885 ******* 2026-02-08 04:36:11.043485 | orchestrator | skipping: [testbed-node-0] 2026-02-08 04:36:11.043496 | orchestrator | skipping: [testbed-node-1] 2026-02-08 04:36:11.043506 | orchestrator | skipping: [testbed-node-2] 2026-02-08 04:36:11.043515 | orchestrator | 2026-02-08 04:36:11.043525 | orchestrator | TASK [octavia : Creating Octavia database] ************************************* 2026-02-08 04:36:11.043535 | orchestrator | Sunday 08 February 2026 04:34:46 +0000 (0:00:00.323) 0:03:19.209 ******* 2026-02-08 04:36:11.043545 | orchestrator | changed: [testbed-node-0] 2026-02-08 04:36:11.043554 | orchestrator | 2026-02-08 04:36:11.043581 | orchestrator | TASK [octavia : Creating Octavia persistence database] ************************* 2026-02-08 04:36:11.043600 | orchestrator | Sunday 08 February 2026 04:34:48 +0000 (0:00:02.068) 0:03:21.277 ******* 2026-02-08 04:36:11.043610 | orchestrator | changed: [testbed-node-0] 2026-02-08 04:36:11.043620 | orchestrator | 2026-02-08 04:36:11.043629 | orchestrator | TASK [octavia : Creating Octavia database user and setting permissions] ******** 2026-02-08 04:36:11.043639 | orchestrator | Sunday 08 February 2026 04:34:50 +0000 (0:00:02.038) 0:03:23.315 ******* 2026-02-08 04:36:11.043648 | orchestrator | changed: [testbed-node-0] 2026-02-08 04:36:11.043673 | orchestrator | 2026-02-08 04:36:11.043683 | orchestrator | TASK [octavia : Creating Octavia persistence database user and setting permissions] *** 2026-02-08 04:36:11.043693 | orchestrator | Sunday 08 February 2026 04:34:52 +0000 (0:00:02.332) 0:03:25.647 ******* 2026-02-08 04:36:11.043722 | orchestrator | changed: [testbed-node-0] 2026-02-08 04:36:11.043732 | orchestrator | 2026-02-08 04:36:11.043742 | orchestrator | TASK [octavia : Running Octavia 
bootstrap container] *************************** 2026-02-08 04:36:11.043752 | orchestrator | Sunday 08 February 2026 04:34:54 +0000 (0:00:02.252) 0:03:27.900 ******* 2026-02-08 04:36:11.043762 | orchestrator | changed: [testbed-node-0] 2026-02-08 04:36:11.043772 | orchestrator | 2026-02-08 04:36:11.043781 | orchestrator | TASK [octavia : Flush handlers] ************************************************ 2026-02-08 04:36:11.043791 | orchestrator | Sunday 08 February 2026 04:35:16 +0000 (0:00:21.563) 0:03:49.464 ******* 2026-02-08 04:36:11.043801 | orchestrator | 2026-02-08 04:36:11.043810 | orchestrator | TASK [octavia : Flush handlers] ************************************************ 2026-02-08 04:36:11.043820 | orchestrator | Sunday 08 February 2026 04:35:16 +0000 (0:00:00.070) 0:03:49.535 ******* 2026-02-08 04:36:11.043829 | orchestrator | 2026-02-08 04:36:11.043839 | orchestrator | TASK [octavia : Flush handlers] ************************************************ 2026-02-08 04:36:11.043849 | orchestrator | Sunday 08 February 2026 04:35:16 +0000 (0:00:00.069) 0:03:49.605 ******* 2026-02-08 04:36:11.043858 | orchestrator | 2026-02-08 04:36:11.043868 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-api container] ********************** 2026-02-08 04:36:11.043877 | orchestrator | Sunday 08 February 2026 04:35:16 +0000 (0:00:00.072) 0:03:49.677 ******* 2026-02-08 04:36:11.043887 | orchestrator | changed: [testbed-node-0] 2026-02-08 04:36:11.043897 | orchestrator | changed: [testbed-node-2] 2026-02-08 04:36:11.043906 | orchestrator | changed: [testbed-node-1] 2026-02-08 04:36:11.043916 | orchestrator | 2026-02-08 04:36:11.043964 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-driver-agent container] ************* 2026-02-08 04:36:11.043982 | orchestrator | Sunday 08 February 2026 04:35:33 +0000 (0:00:16.645) 0:04:06.322 ******* 2026-02-08 04:36:11.043994 | orchestrator | changed: [testbed-node-0] 2026-02-08 04:36:11.044004 | orchestrator | changed: 
[testbed-node-2] 2026-02-08 04:36:11.044014 | orchestrator | changed: [testbed-node-1] 2026-02-08 04:36:11.044024 | orchestrator | 2026-02-08 04:36:11.044033 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-health-manager container] *********** 2026-02-08 04:36:11.044043 | orchestrator | Sunday 08 February 2026 04:35:44 +0000 (0:00:11.145) 0:04:17.467 ******* 2026-02-08 04:36:11.044053 | orchestrator | changed: [testbed-node-0] 2026-02-08 04:36:11.044062 | orchestrator | changed: [testbed-node-1] 2026-02-08 04:36:11.044072 | orchestrator | changed: [testbed-node-2] 2026-02-08 04:36:11.044092 | orchestrator | 2026-02-08 04:36:11.044102 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-housekeeping container] ************* 2026-02-08 04:36:11.044111 | orchestrator | Sunday 08 February 2026 04:35:49 +0000 (0:00:05.106) 0:04:22.574 ******* 2026-02-08 04:36:11.044121 | orchestrator | changed: [testbed-node-0] 2026-02-08 04:36:11.044131 | orchestrator | changed: [testbed-node-1] 2026-02-08 04:36:11.044140 | orchestrator | changed: [testbed-node-2] 2026-02-08 04:36:11.044150 | orchestrator | 2026-02-08 04:36:11.044160 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-worker container] ******************* 2026-02-08 04:36:11.044169 | orchestrator | Sunday 08 February 2026 04:36:00 +0000 (0:00:10.618) 0:04:33.192 ******* 2026-02-08 04:36:11.044179 | orchestrator | changed: [testbed-node-0] 2026-02-08 04:36:11.044189 | orchestrator | changed: [testbed-node-1] 2026-02-08 04:36:11.044198 | orchestrator | changed: [testbed-node-2] 2026-02-08 04:36:11.044208 | orchestrator | 2026-02-08 04:36:11.044217 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-08 04:36:11.044229 | orchestrator | testbed-node-0 : ok=57  changed=38  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-02-08 04:36:11.044240 | orchestrator | testbed-node-1 : ok=33  changed=22  unreachable=0 failed=0 skipped=5  
rescued=0 ignored=0 2026-02-08 04:36:11.044250 | orchestrator | testbed-node-2 : ok=33  changed=22  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-02-08 04:36:11.044259 | orchestrator | 2026-02-08 04:36:11.044270 | orchestrator | 2026-02-08 04:36:11.044286 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-08 04:36:11.044303 | orchestrator | Sunday 08 February 2026 04:36:11 +0000 (0:00:10.815) 0:04:44.008 ******* 2026-02-08 04:36:11.044319 | orchestrator | =============================================================================== 2026-02-08 04:36:11.044334 | orchestrator | octavia : Running Octavia bootstrap container -------------------------- 21.56s 2026-02-08 04:36:11.044350 | orchestrator | octavia : Copying over octavia.conf ------------------------------------ 17.67s 2026-02-08 04:36:11.044365 | orchestrator | octavia : Restart octavia-api container -------------------------------- 16.65s 2026-02-08 04:36:11.044380 | orchestrator | octavia : Add rules for security groups -------------------------------- 15.42s 2026-02-08 04:36:11.044396 | orchestrator | octavia : Adding octavia related roles --------------------------------- 15.35s 2026-02-08 04:36:11.044411 | orchestrator | octavia : Restart octavia-driver-agent container ----------------------- 11.15s 2026-02-08 04:36:11.044425 | orchestrator | octavia : Restart octavia-worker container ----------------------------- 10.82s 2026-02-08 04:36:11.044441 | orchestrator | octavia : Restart octavia-housekeeping container ----------------------- 10.62s 2026-02-08 04:36:11.044456 | orchestrator | octavia : Create security groups for octavia ---------------------------- 9.48s 2026-02-08 04:36:11.044470 | orchestrator | service-ks-register : octavia | Creating users -------------------------- 8.03s 2026-02-08 04:36:11.044485 | orchestrator | service-ks-register : octavia | Granting user roles --------------------- 7.10s 2026-02-08 04:36:11.044501 
| orchestrator | octavia : Get security groups for octavia ------------------------------- 6.51s 2026-02-08 04:36:11.044526 | orchestrator | service-ks-register : octavia | Creating endpoints ---------------------- 6.34s 2026-02-08 04:36:11.044541 | orchestrator | octavia : Copying certificate files for octavia-health-manager ---------- 5.30s 2026-02-08 04:36:11.044567 | orchestrator | octavia : Copying certificate files for octavia-housekeeping ------------ 5.17s 2026-02-08 04:36:11.459388 | orchestrator | octavia : Create ports for Octavia health-manager nodes ----------------- 5.15s 2026-02-08 04:36:11.459476 | orchestrator | octavia : Restart octavia-health-manager container ---------------------- 5.11s 2026-02-08 04:36:11.459487 | orchestrator | octavia : Copying over config.json files for services ------------------- 5.07s 2026-02-08 04:36:11.459495 | orchestrator | octavia : Copying certificate files for octavia-worker ------------------ 4.97s 2026-02-08 04:36:11.459533 | orchestrator | service-cert-copy : octavia | Copying over extra CA certificates -------- 4.88s 2026-02-08 04:36:14.002626 | orchestrator | 2026-02-08 04:36:14 | INFO  | Task 61a3641b-be9c-4924-9b29-84d282c93aa2 (ceilometer) was prepared for execution. 2026-02-08 04:36:14.002710 | orchestrator | 2026-02-08 04:36:14 | INFO  | It takes a moment until task 61a3641b-be9c-4924-9b29-84d282c93aa2 (ceilometer) has been started and output is visible here. 
2026-02-08 04:36:37.470400 | orchestrator | 2026-02-08 04:36:37.470482 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-08 04:36:37.470490 | orchestrator | 2026-02-08 04:36:37.470495 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-08 04:36:37.470499 | orchestrator | Sunday 08 February 2026 04:36:18 +0000 (0:00:00.276) 0:00:00.276 ******* 2026-02-08 04:36:37.470504 | orchestrator | ok: [testbed-node-0] 2026-02-08 04:36:37.470509 | orchestrator | ok: [testbed-node-1] 2026-02-08 04:36:37.470513 | orchestrator | ok: [testbed-node-2] 2026-02-08 04:36:37.470517 | orchestrator | ok: [testbed-node-3] 2026-02-08 04:36:37.470521 | orchestrator | ok: [testbed-node-4] 2026-02-08 04:36:37.470525 | orchestrator | ok: [testbed-node-5] 2026-02-08 04:36:37.470529 | orchestrator | 2026-02-08 04:36:37.470533 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-08 04:36:37.470537 | orchestrator | Sunday 08 February 2026 04:36:19 +0000 (0:00:00.766) 0:00:01.043 ******* 2026-02-08 04:36:37.470542 | orchestrator | ok: [testbed-node-0] => (item=enable_ceilometer_True) 2026-02-08 04:36:37.470546 | orchestrator | ok: [testbed-node-1] => (item=enable_ceilometer_True) 2026-02-08 04:36:37.470550 | orchestrator | ok: [testbed-node-2] => (item=enable_ceilometer_True) 2026-02-08 04:36:37.470554 | orchestrator | ok: [testbed-node-3] => (item=enable_ceilometer_True) 2026-02-08 04:36:37.470558 | orchestrator | ok: [testbed-node-4] => (item=enable_ceilometer_True) 2026-02-08 04:36:37.470562 | orchestrator | ok: [testbed-node-5] => (item=enable_ceilometer_True) 2026-02-08 04:36:37.470566 | orchestrator | 2026-02-08 04:36:37.470570 | orchestrator | PLAY [Apply role ceilometer] *************************************************** 2026-02-08 04:36:37.470574 | orchestrator | 2026-02-08 04:36:37.470578 | orchestrator | TASK [ceilometer : 
include_tasks] ********************************************** 2026-02-08 04:36:37.470581 | orchestrator | Sunday 08 February 2026 04:36:19 +0000 (0:00:00.684) 0:00:01.727 ******* 2026-02-08 04:36:37.470587 | orchestrator | included: /ansible/roles/ceilometer/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-02-08 04:36:37.470592 | orchestrator | 2026-02-08 04:36:37.470596 | orchestrator | TASK [service-ks-register : ceilometer | Creating services] ******************** 2026-02-08 04:36:37.470600 | orchestrator | Sunday 08 February 2026 04:36:21 +0000 (0:00:01.413) 0:00:03.141 ******* 2026-02-08 04:36:37.470604 | orchestrator | skipping: [testbed-node-0] 2026-02-08 04:36:37.470608 | orchestrator | 2026-02-08 04:36:37.470612 | orchestrator | TASK [service-ks-register : ceilometer | Creating endpoints] ******************* 2026-02-08 04:36:37.470616 | orchestrator | Sunday 08 February 2026 04:36:21 +0000 (0:00:00.119) 0:00:03.260 ******* 2026-02-08 04:36:37.470619 | orchestrator | skipping: [testbed-node-0] 2026-02-08 04:36:37.470623 | orchestrator | 2026-02-08 04:36:37.470627 | orchestrator | TASK [service-ks-register : ceilometer | Creating projects] ******************** 2026-02-08 04:36:37.470631 | orchestrator | Sunday 08 February 2026 04:36:21 +0000 (0:00:00.134) 0:00:03.395 ******* 2026-02-08 04:36:37.470635 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-02-08 04:36:37.470639 | orchestrator | 2026-02-08 04:36:37.470643 | orchestrator | TASK [service-ks-register : ceilometer | Creating users] *********************** 2026-02-08 04:36:37.470647 | orchestrator | Sunday 08 February 2026 04:36:24 +0000 (0:00:03.317) 0:00:06.712 ******* 2026-02-08 04:36:37.470651 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-02-08 04:36:37.470655 | orchestrator | changed: [testbed-node-0] => (item=ceilometer -> service) 2026-02-08 04:36:37.470659 | orchestrator | 
2026-02-08 04:36:37.470678 | orchestrator | TASK [service-ks-register : ceilometer | Creating roles] *********************** 2026-02-08 04:36:37.470682 | orchestrator | Sunday 08 February 2026 04:36:28 +0000 (0:00:03.720) 0:00:10.432 ******* 2026-02-08 04:36:37.470686 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-02-08 04:36:37.470690 | orchestrator | 2026-02-08 04:36:37.470694 | orchestrator | TASK [service-ks-register : ceilometer | Granting user roles] ****************** 2026-02-08 04:36:37.470698 | orchestrator | Sunday 08 February 2026 04:36:31 +0000 (0:00:03.163) 0:00:13.596 ******* 2026-02-08 04:36:37.470702 | orchestrator | changed: [testbed-node-0] => (item=ceilometer -> service -> admin) 2026-02-08 04:36:37.470705 | orchestrator | 2026-02-08 04:36:37.470709 | orchestrator | TASK [ceilometer : Associate the ResellerAdmin role and ceilometer user] ******* 2026-02-08 04:36:37.470713 | orchestrator | Sunday 08 February 2026 04:36:35 +0000 (0:00:03.895) 0:00:17.491 ******* 2026-02-08 04:36:37.470717 | orchestrator | skipping: [testbed-node-0] 2026-02-08 04:36:37.470721 | orchestrator | 2026-02-08 04:36:37.470725 | orchestrator | TASK [ceilometer : Ensuring config directories exist] ************************** 2026-02-08 04:36:37.470729 | orchestrator | Sunday 08 February 2026 04:36:35 +0000 (0:00:00.139) 0:00:17.630 ******* 2026-02-08 04:36:37.470744 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-02-08 04:36:37.470764 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-02-08 04:36:37.470769 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-02-08 04:36:37.470773 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 
'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-02-08 04:36:37.470779 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-02-08 04:36:37.470789 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-02-08 04:36:37.470796 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 
'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-02-08 04:36:37.470805 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-02-08 04:36:43.020503 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-02-08 04:36:43.020655 | orchestrator | 2026-02-08 04:36:43.020686 | orchestrator | TASK [ceilometer : Check if the folder for custom meter definitions exist] ***** 2026-02-08 04:36:43.020706 | orchestrator | Sunday 08 February 2026 04:36:37 +0000 (0:00:01.598) 0:00:19.229 ******* 2026-02-08 04:36:43.020725 | orchestrator | ok: [testbed-node-0 -> 
localhost] 2026-02-08 04:36:43.020744 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-02-08 04:36:43.020762 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-02-08 04:36:43.020779 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-02-08 04:36:43.020799 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-02-08 04:36:43.020819 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-02-08 04:36:43.020887 | orchestrator | 2026-02-08 04:36:43.020908 | orchestrator | TASK [ceilometer : Set variable that indicates if we have a folder for custom meter YAML files] *** 2026-02-08 04:36:43.020960 | orchestrator | Sunday 08 February 2026 04:36:39 +0000 (0:00:01.898) 0:00:21.127 ******* 2026-02-08 04:36:43.020980 | orchestrator | ok: [testbed-node-0] 2026-02-08 04:36:43.020998 | orchestrator | ok: [testbed-node-1] 2026-02-08 04:36:43.021017 | orchestrator | ok: [testbed-node-2] 2026-02-08 04:36:43.021035 | orchestrator | ok: [testbed-node-3] 2026-02-08 04:36:43.021054 | orchestrator | ok: [testbed-node-4] 2026-02-08 04:36:43.021072 | orchestrator | ok: [testbed-node-5] 2026-02-08 04:36:43.021090 | orchestrator | 2026-02-08 04:36:43.021111 | orchestrator | TASK [ceilometer : Find all *.yaml files in custom meter definitions folder (if the folder exist)] *** 2026-02-08 04:36:43.021132 | orchestrator | Sunday 08 February 2026 04:36:40 +0000 (0:00:00.663) 0:00:21.791 ******* 2026-02-08 04:36:43.021153 | orchestrator | skipping: [testbed-node-0] 2026-02-08 04:36:43.021172 | orchestrator | skipping: [testbed-node-1] 2026-02-08 04:36:43.021190 | orchestrator | skipping: [testbed-node-2] 2026-02-08 04:36:43.021208 | orchestrator | skipping: [testbed-node-3] 2026-02-08 04:36:43.021227 | orchestrator | skipping: [testbed-node-4] 2026-02-08 04:36:43.021246 | orchestrator | skipping: [testbed-node-5] 2026-02-08 04:36:43.021264 | orchestrator | 2026-02-08 04:36:43.021283 | orchestrator | TASK [ceilometer : Set the variable that control the copy of custom meter 
definitions] *** 2026-02-08 04:36:43.021303 | orchestrator | Sunday 08 February 2026 04:36:41 +0000 (0:00:01.017) 0:00:22.808 ******* 2026-02-08 04:36:43.021322 | orchestrator | ok: [testbed-node-0] 2026-02-08 04:36:43.021341 | orchestrator | ok: [testbed-node-1] 2026-02-08 04:36:43.021358 | orchestrator | ok: [testbed-node-2] 2026-02-08 04:36:43.021375 | orchestrator | ok: [testbed-node-3] 2026-02-08 04:36:43.021393 | orchestrator | ok: [testbed-node-4] 2026-02-08 04:36:43.021411 | orchestrator | ok: [testbed-node-5] 2026-02-08 04:36:43.021431 | orchestrator | 2026-02-08 04:36:43.021450 | orchestrator | TASK [ceilometer : Create default folder for custom meter definitions] ********* 2026-02-08 04:36:43.021469 | orchestrator | Sunday 08 February 2026 04:36:41 +0000 (0:00:00.719) 0:00:23.528 ******* 2026-02-08 04:36:43.021509 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-02-08 04:36:43.021532 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-02-08 04:36:43.021553 | orchestrator | skipping: [testbed-node-0] 2026-02-08 04:36:43.021605 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-02-08 04:36:43.021646 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-02-08 04:36:43.021667 | orchestrator | skipping: [testbed-node-1] 2026-02-08 04:36:43.021685 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': 
['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-02-08 04:36:43.021705 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-02-08 04:36:43.021723 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-02-08 04:36:43.021742 | orchestrator | skipping: [testbed-node-2] 2026-02-08 04:36:43.021760 | orchestrator | skipping: [testbed-node-3] 2026-02-08 04:36:43.021819 | orchestrator | skipping: [testbed-node-4] => (item={'key': 
'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-02-08 04:36:43.021902 | orchestrator | skipping: [testbed-node-4] 2026-02-08 04:36:43.021941 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-02-08 04:36:48.327005 | orchestrator | skipping: [testbed-node-5] 2026-02-08 04:36:48.327205 | orchestrator | 2026-02-08 04:36:48.327225 | orchestrator | TASK [ceilometer : Copying custom meter definitions to Ceilometer] ************* 2026-02-08 04:36:48.327238 | orchestrator | Sunday 08 February 2026 04:36:43 +0000 (0:00:01.259) 0:00:24.787 ******* 2026-02-08 04:36:48.327251 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': 
{'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-02-08 04:36:48.327265 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-02-08 04:36:48.327277 | orchestrator | skipping: [testbed-node-0] 2026-02-08 04:36:48.327288 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-02-08 04:36:48.327315 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-02-08 04:36:48.327326 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-02-08 04:36:48.327402 | orchestrator | skipping: [testbed-node-1] 2026-02-08 04:36:48.327416 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': 
'30'}}})  2026-02-08 04:36:48.327445 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-02-08 04:36:48.327457 | orchestrator | skipping: [testbed-node-2] 2026-02-08 04:36:48.327467 | orchestrator | skipping: [testbed-node-3] 2026-02-08 04:36:48.327477 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-02-08 04:36:48.327487 | orchestrator | skipping: [testbed-node-4] 2026-02-08 04:36:48.327497 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 
'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-02-08 04:36:48.327507 | orchestrator | skipping: [testbed-node-5] 2026-02-08 04:36:48.327517 | orchestrator | 2026-02-08 04:36:48.327528 | orchestrator | TASK [ceilometer : Check if the folder ["/opt/configuration/environments/kolla/files/overlays/ceilometer/pollsters.d"] for dynamic pollsters definitions exist] *** 2026-02-08 04:36:48.327542 | orchestrator | Sunday 08 February 2026 04:36:44 +0000 (0:00:01.009) 0:00:25.797 ******* 2026-02-08 04:36:48.327553 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-02-08 04:36:48.327564 | orchestrator | 2026-02-08 04:36:48.327582 | orchestrator | TASK [ceilometer : Set the variable that control the copy of dynamic pollsters definitions] *** 2026-02-08 04:36:48.327594 | orchestrator | Sunday 08 February 2026 04:36:44 +0000 (0:00:00.770) 0:00:26.567 ******* 2026-02-08 04:36:48.327605 | orchestrator | ok: [testbed-node-0] 2026-02-08 04:36:48.327617 | orchestrator | ok: [testbed-node-1] 2026-02-08 04:36:48.327642 | orchestrator | ok: [testbed-node-2] 2026-02-08 04:36:48.327658 | orchestrator | ok: [testbed-node-3] 2026-02-08 04:36:48.327678 | orchestrator | ok: [testbed-node-4] 2026-02-08 04:36:48.327701 | orchestrator | ok: [testbed-node-5] 2026-02-08 04:36:48.327718 | orchestrator | 2026-02-08 04:36:48.327734 | orchestrator | TASK [ceilometer : Clean default folder for dynamic pollsters definitions] ***** 2026-02-08 04:36:48.327750 | orchestrator | Sunday 08 February 2026 04:36:45 +0000 (0:00:01.032) 
0:00:27.600 ******* 2026-02-08 04:36:48.327765 | orchestrator | ok: [testbed-node-0] 2026-02-08 04:36:48.327779 | orchestrator | ok: [testbed-node-1] 2026-02-08 04:36:48.327795 | orchestrator | ok: [testbed-node-2] 2026-02-08 04:36:48.327809 | orchestrator | ok: [testbed-node-3] 2026-02-08 04:36:48.327849 | orchestrator | ok: [testbed-node-4] 2026-02-08 04:36:48.327863 | orchestrator | ok: [testbed-node-5] 2026-02-08 04:36:48.327877 | orchestrator | 2026-02-08 04:36:48.327891 | orchestrator | TASK [ceilometer : Create default folder for dynamic pollsters definitions] **** 2026-02-08 04:36:48.327907 | orchestrator | Sunday 08 February 2026 04:36:46 +0000 (0:00:00.978) 0:00:28.578 ******* 2026-02-08 04:36:48.327922 | orchestrator | skipping: [testbed-node-0] 2026-02-08 04:36:48.327938 | orchestrator | skipping: [testbed-node-1] 2026-02-08 04:36:48.327954 | orchestrator | skipping: [testbed-node-2] 2026-02-08 04:36:48.327969 | orchestrator | skipping: [testbed-node-3] 2026-02-08 04:36:48.327985 | orchestrator | skipping: [testbed-node-4] 2026-02-08 04:36:48.328001 | orchestrator | skipping: [testbed-node-5] 2026-02-08 04:36:48.328017 | orchestrator | 2026-02-08 04:36:48.328032 | orchestrator | TASK [ceilometer : Copying dynamic pollsters definitions] ********************** 2026-02-08 04:36:48.328048 | orchestrator | Sunday 08 February 2026 04:36:47 +0000 (0:00:00.871) 0:00:29.450 ******* 2026-02-08 04:36:48.328063 | orchestrator | skipping: [testbed-node-0] 2026-02-08 04:36:48.328080 | orchestrator | skipping: [testbed-node-1] 2026-02-08 04:36:48.328094 | orchestrator | skipping: [testbed-node-2] 2026-02-08 04:36:48.328109 | orchestrator | skipping: [testbed-node-3] 2026-02-08 04:36:48.328123 | orchestrator | skipping: [testbed-node-4] 2026-02-08 04:36:48.328137 | orchestrator | skipping: [testbed-node-5] 2026-02-08 04:36:48.328152 | orchestrator | 2026-02-08 04:36:53.750208 | orchestrator | TASK [ceilometer : Check if custom polling.yaml exists] 
************************ 2026-02-08 04:36:53.750310 | orchestrator | Sunday 08 February 2026 04:36:48 +0000 (0:00:00.648) 0:00:30.099 ******* 2026-02-08 04:36:53.750327 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-02-08 04:36:53.750341 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-02-08 04:36:53.750353 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-02-08 04:36:53.750365 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-02-08 04:36:53.750377 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-02-08 04:36:53.750389 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-02-08 04:36:53.750401 | orchestrator | 2026-02-08 04:36:53.750414 | orchestrator | TASK [ceilometer : Copying over polling.yaml] ********************************** 2026-02-08 04:36:53.750427 | orchestrator | Sunday 08 February 2026 04:36:49 +0000 (0:00:01.548) 0:00:31.647 ******* 2026-02-08 04:36:53.750442 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-02-08 04:36:53.750458 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-02-08 04:36:53.750494 | orchestrator | skipping: [testbed-node-0] 2026-02-08 04:36:53.750507 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-02-08 04:36:53.750534 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-02-08 04:36:53.750547 | orchestrator | skipping: [testbed-node-1] 2026-02-08 04:36:53.750558 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-02-08 04:36:53.750589 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-02-08 04:36:53.750603 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})
2026-02-08 04:36:53.750616 | orchestrator | skipping: [testbed-node-2]
2026-02-08 04:36:53.750628 | orchestrator | skipping: [testbed-node-3]
2026-02-08 04:36:53.750639 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})
2026-02-08 04:36:53.750658 | orchestrator | skipping: [testbed-node-4]
2026-02-08 04:36:53.750675 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})
2026-02-08 04:36:53.750688 | orchestrator | skipping: [testbed-node-5]
2026-02-08 04:36:53.750699 | orchestrator |
2026-02-08 04:36:53.750710 | orchestrator | TASK [ceilometer : Set ceilometer polling file's path] *************************
2026-02-08 04:36:53.750722 | orchestrator | Sunday 08 February 2026 04:36:50 +0000 (0:00:00.907) 0:00:32.555 *******
2026-02-08 04:36:53.750734 | orchestrator | skipping: [testbed-node-0]
2026-02-08 04:36:53.750746 | orchestrator | skipping: [testbed-node-1]
2026-02-08 04:36:53.750757 | orchestrator | skipping: [testbed-node-2]
2026-02-08 04:36:53.750770 | orchestrator | skipping: [testbed-node-3]
2026-02-08 04:36:53.750782 | orchestrator | skipping: [testbed-node-4]
2026-02-08 04:36:53.750820 | orchestrator | skipping: [testbed-node-5]
2026-02-08 04:36:53.750832 | orchestrator |
2026-02-08 04:36:53.750844 | orchestrator | TASK [ceilometer : Check custom gnocchi_resources.yaml exists] *****************
2026-02-08 04:36:53.750855 | orchestrator | Sunday 08 February 2026 04:36:51 +0000 (0:00:00.856) 0:00:33.411 *******
2026-02-08 04:36:53.750867 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-02-08 04:36:53.750879 | orchestrator | ok: [testbed-node-1 -> localhost]
2026-02-08 04:36:53.750890 | orchestrator | ok: [testbed-node-2 -> localhost]
2026-02-08 04:36:53.750902 | orchestrator | ok: [testbed-node-3 -> localhost]
2026-02-08 04:36:53.750913 | orchestrator | ok: [testbed-node-4 -> localhost]
2026-02-08 04:36:53.750926 | orchestrator | ok: [testbed-node-5 -> localhost]
2026-02-08 04:36:53.750938 | orchestrator |
2026-02-08 04:36:53.750950 | orchestrator | TASK [ceilometer : Copying over gnocchi_resources.yaml] ************************
2026-02-08 04:36:53.750963 | orchestrator | Sunday 08 February 2026 04:36:53 +0000 (0:00:01.515) 0:00:34.926 *******
2026-02-08 04:36:53.750987 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-02-08 04:36:59.906567 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-02-08 04:36:59.906705 | orchestrator | skipping: [testbed-node-0]
2026-02-08 04:36:59.906722 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-02-08 04:36:59.906731 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-02-08 04:36:59.906738 | orchestrator | skipping: [testbed-node-1]
2026-02-08 04:36:59.906757 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-02-08 04:36:59.906764 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-02-08 04:36:59.906771 | orchestrator | skipping: [testbed-node-2]
2026-02-08 04:36:59.906852 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})
2026-02-08 04:36:59.906861 | orchestrator | skipping: [testbed-node-3]
2026-02-08 04:36:59.906884 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})
2026-02-08 04:36:59.906899 | orchestrator | skipping: [testbed-node-4]
2026-02-08 04:36:59.906906 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})
2026-02-08 04:36:59.906912 | orchestrator | skipping: [testbed-node-5]
2026-02-08 04:36:59.906919 | orchestrator |
2026-02-08 04:36:59.906926 | orchestrator | TASK [ceilometer : Set ceilometer gnocchi_resources file's path] ***************
2026-02-08 04:36:59.906933 | orchestrator | Sunday 08 February 2026 04:36:54 +0000 (0:00:01.236) 0:00:36.163 *******
2026-02-08 04:36:59.906939 | orchestrator | skipping: [testbed-node-0]
2026-02-08 04:36:59.906946 | orchestrator | skipping: [testbed-node-1]
2026-02-08 04:36:59.906952 | orchestrator | skipping: [testbed-node-2]
2026-02-08 04:36:59.906958 | orchestrator | skipping: [testbed-node-3]
2026-02-08 04:36:59.906964 | orchestrator | skipping: [testbed-node-4]
2026-02-08 04:36:59.906970 | orchestrator | skipping: [testbed-node-5]
2026-02-08 04:36:59.906980 | orchestrator |
2026-02-08 04:36:59.906990 | orchestrator | TASK [ceilometer : Check if policies shall be overwritten] *********************
2026-02-08 04:36:59.907000 | orchestrator | Sunday 08 February 2026 04:36:55 +0000 (0:00:00.156) 0:00:37.022 *******
2026-02-08 04:36:59.907010 | orchestrator | skipping: [testbed-node-0]
2026-02-08 04:36:59.907021 | orchestrator |
2026-02-08 04:36:59.907031 | orchestrator | TASK [ceilometer : Set ceilometer policy file] *********************************
2026-02-08 04:36:59.907042 | orchestrator | Sunday 08 February 2026 04:36:55 +0000 (0:00:00.156) 0:00:37.178 *******
2026-02-08 04:36:59.907053 | orchestrator | skipping: [testbed-node-0]
2026-02-08 04:36:59.907063 | orchestrator | skipping: [testbed-node-1]
2026-02-08 04:36:59.907074 | orchestrator | skipping: [testbed-node-2]
2026-02-08 04:36:59.907084 | orchestrator | skipping: [testbed-node-3]
2026-02-08 04:36:59.907126 | orchestrator | skipping: [testbed-node-4]
2026-02-08 04:36:59.907138 | orchestrator | skipping: [testbed-node-5]
2026-02-08 04:36:59.907149 | orchestrator |
2026-02-08 04:36:59.907160 | orchestrator | TASK [ceilometer : include_tasks] **********************************************
2026-02-08 04:36:59.907170 | orchestrator | Sunday 08 February 2026 04:36:56 +0000 (0:00:00.689) 0:00:37.868 *******
2026-02-08 04:36:59.907184 | orchestrator | included: /ansible/roles/ceilometer/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-02-08 04:36:59.907197 | orchestrator |
2026-02-08 04:36:59.907208 | orchestrator | TASK [service-cert-copy : ceilometer | Copying over extra CA certificates] *****
2026-02-08 04:36:59.907220 | orchestrator | Sunday 08 February 2026 04:36:57 +0000 (0:00:01.500) 0:00:39.368 *******
2026-02-08 04:36:59.907231 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-02-08 04:36:59.907262 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-02-08 04:37:00.506378 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-02-08 04:37:00.506451 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})
2026-02-08 04:37:00.506460 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})
2026-02-08 04:37:00.506478 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})
2026-02-08 04:37:00.506497 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-02-08 04:37:00.506503 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-02-08 04:37:00.506520 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-02-08 04:37:00.506526 | orchestrator |
2026-02-08 04:37:00.506531 | orchestrator | TASK [service-cert-copy : ceilometer | Copying over backend internal TLS certificate] ***
2026-02-08 04:37:00.506537 | orchestrator | Sunday 08 February 2026 04:36:59 +0000 (0:00:02.305) 0:00:41.673 *******
2026-02-08 04:37:00.506543 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-02-08 04:37:00.506548 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-02-08 04:37:00.506554 | orchestrator | skipping: [testbed-node-0]
2026-02-08 04:37:00.506563 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-02-08 04:37:00.506572 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-02-08 04:37:00.506576 | orchestrator | skipping: [testbed-node-1]
2026-02-08 04:37:00.506581 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-02-08 04:37:00.506590 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-02-08 04:37:02.503676 | orchestrator | skipping: [testbed-node-2]
2026-02-08 04:37:02.503758 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})
2026-02-08 04:37:02.503814 | orchestrator | skipping: [testbed-node-3]
2026-02-08 04:37:02.503822 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})
2026-02-08 04:37:02.503830 | orchestrator | skipping: [testbed-node-4]
2026-02-08 04:37:02.503851 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})
2026-02-08 04:37:02.503879 | orchestrator | skipping: [testbed-node-5]
2026-02-08 04:37:02.503886 | orchestrator |
2026-02-08 04:37:02.503894 | orchestrator | TASK [service-cert-copy : ceilometer | Copying over backend internal TLS key] ***
2026-02-08 04:37:02.503902 | orchestrator | Sunday 08 February 2026 04:37:00 +0000 (0:00:00.943) 0:00:42.617 *******
2026-02-08 04:37:02.503920 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-02-08 04:37:02.503929 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-02-08 04:37:02.503951 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-02-08 04:37:02.503959 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-02-08 04:37:02.503966 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-02-08 04:37:02.503982 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-02-08 04:37:02.503990 | orchestrator | skipping: [testbed-node-0]
2026-02-08 04:37:02.504002 | orchestrator | skipping: [testbed-node-1]
2026-02-08 04:37:02.504012 | orchestrator | skipping: [testbed-node-2]
2026-02-08 04:37:02.504023 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})
2026-02-08 04:37:02.504036 | orchestrator | skipping: [testbed-node-3]
2026-02-08 04:37:02.504055 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})
2026-02-08 04:37:02.504065 | orchestrator | skipping: [testbed-node-4]
2026-02-08 04:37:02.504085 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})
2026-02-08 04:37:10.313007 | orchestrator | skipping: [testbed-node-5]
2026-02-08 04:37:10.313155 | orchestrator |
2026-02-08 04:37:10.313175 | orchestrator | TASK [ceilometer : Copying over config.json files for services] ****************
2026-02-08 04:37:10.313198 | orchestrator | Sunday 08 February 2026 04:37:02 +0000 (0:00:01.646) 0:00:44.264 *******
2026-02-08 04:37:10.313253 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-02-08 04:37:10.313299 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-02-08 04:37:10.313313 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-02-08 04:37:10.313329 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})
2026-02-08 04:37:10.313345 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})
2026-02-08 04:37:10.313382 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})
2026-02-08 04:37:10.313399 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-02-08 04:37:10.313430 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-02-08 04:37:10.313447 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-02-08 04:37:10.313461 | orchestrator |
2026-02-08 04:37:10.313476 | orchestrator | TASK [ceilometer : Copying over ceilometer.conf] *******************************
2026-02-08 04:37:10.313489 | orchestrator | Sunday 08 February 2026 04:37:05 +0000 (0:00:02.551) 0:00:46.816
******* 2026-02-08 04:37:10.313503 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-02-08 04:37:10.313519 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-02-08 04:37:10.313544 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-02-08 04:37:20.015896 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-02-08 04:37:20.016058 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-02-08 04:37:20.016083 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-02-08 04:37:20.016101 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-02-08 04:37:20.016117 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-02-08 04:37:20.016132 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': 
['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-02-08 04:37:20.016147 | orchestrator | 2026-02-08 04:37:20.016164 | orchestrator | TASK [ceilometer : Check custom event_definitions.yaml exists] ***************** 2026-02-08 04:37:20.016196 | orchestrator | Sunday 08 February 2026 04:37:10 +0000 (0:00:05.267) 0:00:52.083 ******* 2026-02-08 04:37:20.016233 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-02-08 04:37:20.016250 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-02-08 04:37:20.016265 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-02-08 04:37:20.016280 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-02-08 04:37:20.016295 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-02-08 04:37:20.016305 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-02-08 04:37:20.016313 | orchestrator | 2026-02-08 04:37:20.016322 | orchestrator | TASK [ceilometer : Copying over event_definitions.yaml] ************************ 2026-02-08 04:37:20.016331 | orchestrator | Sunday 08 February 2026 04:37:11 +0000 (0:00:01.680) 0:00:53.764 ******* 2026-02-08 04:37:20.016340 | orchestrator | skipping: [testbed-node-0] 2026-02-08 04:37:20.016348 | orchestrator | skipping: [testbed-node-1] 2026-02-08 04:37:20.016357 | orchestrator | skipping: [testbed-node-2] 2026-02-08 04:37:20.016367 | orchestrator | skipping: [testbed-node-3] 2026-02-08 04:37:20.016377 | orchestrator | skipping: [testbed-node-4] 2026-02-08 04:37:20.016387 | orchestrator | skipping: [testbed-node-5] 2026-02-08 04:37:20.016398 | orchestrator | 2026-02-08 04:37:20.016408 | orchestrator | TASK [ceilometer : Copying over event_definitions.yaml for notification service] *** 2026-02-08 
04:37:20.016419 | orchestrator | Sunday 08 February 2026 04:37:12 +0000 (0:00:00.692) 0:00:54.457 ******* 2026-02-08 04:37:20.016429 | orchestrator | skipping: [testbed-node-3] 2026-02-08 04:37:20.016440 | orchestrator | skipping: [testbed-node-4] 2026-02-08 04:37:20.016451 | orchestrator | skipping: [testbed-node-5] 2026-02-08 04:37:20.016461 | orchestrator | changed: [testbed-node-0] 2026-02-08 04:37:20.016471 | orchestrator | changed: [testbed-node-1] 2026-02-08 04:37:20.016481 | orchestrator | changed: [testbed-node-2] 2026-02-08 04:37:20.016492 | orchestrator | 2026-02-08 04:37:20.016502 | orchestrator | TASK [ceilometer : Copying over event_pipeline.yaml] *************************** 2026-02-08 04:37:20.016519 | orchestrator | Sunday 08 February 2026 04:37:14 +0000 (0:00:01.704) 0:00:56.161 ******* 2026-02-08 04:37:20.016530 | orchestrator | skipping: [testbed-node-3] 2026-02-08 04:37:20.016540 | orchestrator | skipping: [testbed-node-4] 2026-02-08 04:37:20.016551 | orchestrator | skipping: [testbed-node-5] 2026-02-08 04:37:20.016560 | orchestrator | changed: [testbed-node-0] 2026-02-08 04:37:20.016568 | orchestrator | changed: [testbed-node-1] 2026-02-08 04:37:20.016577 | orchestrator | changed: [testbed-node-2] 2026-02-08 04:37:20.016585 | orchestrator | 2026-02-08 04:37:20.016594 | orchestrator | TASK [ceilometer : Check custom pipeline.yaml exists] ************************** 2026-02-08 04:37:20.016602 | orchestrator | Sunday 08 February 2026 04:37:15 +0000 (0:00:01.475) 0:00:57.637 ******* 2026-02-08 04:37:20.016611 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-02-08 04:37:20.016619 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-02-08 04:37:20.016628 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-02-08 04:37:20.016636 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-02-08 04:37:20.016644 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-02-08 04:37:20.016653 | orchestrator | ok: [testbed-node-5 -> localhost] 
2026-02-08 04:37:20.016661 | orchestrator | 2026-02-08 04:37:20.016669 | orchestrator | TASK [ceilometer : Copying over custom pipeline.yaml file] ********************* 2026-02-08 04:37:20.016678 | orchestrator | Sunday 08 February 2026 04:37:17 +0000 (0:00:01.620) 0:00:59.257 ******* 2026-02-08 04:37:20.016689 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-02-08 04:37:20.016777 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-02-08 04:37:20.016796 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': 
['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-02-08 04:37:20.016824 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-02-08 04:37:20.922304 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-02-08 04:37:20.922414 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ceilometer-compute', 
'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-02-08 04:37:20.922431 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-02-08 04:37:20.922467 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-02-08 04:37:20.922479 | orchestrator | changed: [testbed-node-2] => 
(item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-02-08 04:37:20.922491 | orchestrator | 2026-02-08 04:37:20.922505 | orchestrator | TASK [ceilometer : Copying over pipeline.yaml file] **************************** 2026-02-08 04:37:20.922518 | orchestrator | Sunday 08 February 2026 04:37:20 +0000 (0:00:02.523) 0:01:01.781 ******* 2026-02-08 04:37:20.922530 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-02-08 04:37:20.922582 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-02-08 04:37:20.922603 | orchestrator | skipping: [testbed-node-0] 2026-02-08 04:37:20.922624 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-02-08 04:37:20.922646 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-02-08 04:37:20.922677 | orchestrator | skipping: [testbed-node-1] 2026-02-08 04:37:20.922698 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': 
['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-02-08 04:37:20.922748 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-02-08 04:37:20.922760 | orchestrator | skipping: [testbed-node-2] 2026-02-08 04:37:20.922772 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-02-08 04:37:20.922783 | orchestrator | skipping: [testbed-node-3] 2026-02-08 04:37:20.922808 | orchestrator | skipping: [testbed-node-4] => (item={'key': 
'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-02-08 04:37:24.519741 | orchestrator | skipping: [testbed-node-4] 2026-02-08 04:37:24.519846 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-02-08 04:37:24.519892 | orchestrator | skipping: [testbed-node-5] 2026-02-08 04:37:24.519906 | orchestrator | 2026-02-08 04:37:24.519917 | orchestrator | TASK [ceilometer : Copying VMware vCenter CA file] ***************************** 2026-02-08 04:37:24.519929 | orchestrator | Sunday 08 February 2026 04:37:20 +0000 (0:00:00.911) 0:01:02.693 ******* 2026-02-08 04:37:24.519939 | orchestrator | skipping: [testbed-node-0] 2026-02-08 04:37:24.519948 | orchestrator | skipping: 
[testbed-node-1] 2026-02-08 04:37:24.519957 | orchestrator | skipping: [testbed-node-2] 2026-02-08 04:37:24.519967 | orchestrator | skipping: [testbed-node-3] 2026-02-08 04:37:24.519977 | orchestrator | skipping: [testbed-node-4] 2026-02-08 04:37:24.519987 | orchestrator | skipping: [testbed-node-5] 2026-02-08 04:37:24.519996 | orchestrator | 2026-02-08 04:37:24.520006 | orchestrator | TASK [ceilometer : Copying over existing policy file] ************************** 2026-02-08 04:37:24.520016 | orchestrator | Sunday 08 February 2026 04:37:21 +0000 (0:00:00.858) 0:01:03.551 ******* 2026-02-08 04:37:24.520027 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-02-08 04:37:24.520038 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-02-08 04:37:24.520049 | orchestrator | skipping: [testbed-node-0] 2026-02-08 
04:37:24.520060 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-02-08 04:37:24.520071 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-02-08 04:37:24.520082 | orchestrator | skipping: [testbed-node-1] 2026-02-08 04:37:24.520129 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-02-08 04:37:24.520155 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-02-08 04:37:24.520166 | orchestrator | skipping: [testbed-node-2] 2026-02-08 04:37:24.520178 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-02-08 04:37:24.520189 | orchestrator | skipping: [testbed-node-3] 2026-02-08 04:37:24.520201 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': 
['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-02-08 04:37:24.520212 | orchestrator | skipping: [testbed-node-5] 2026-02-08 04:37:24.520224 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-02-08 04:37:24.520235 | orchestrator | skipping: [testbed-node-4] 2026-02-08 04:37:24.520247 | orchestrator | 2026-02-08 04:37:24.520259 | orchestrator | TASK [ceilometer : Check ceilometer containers] ******************************** 2026-02-08 04:37:24.520269 | orchestrator | Sunday 08 February 2026 04:37:22 +0000 (0:00:00.941) 0:01:04.492 ******* 2026-02-08 04:37:24.520295 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': 
['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-02-08 04:37:52.390969 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-02-08 04:37:52.391141 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-02-08 04:37:52.391161 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 
'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-02-08 04:37:52.391174 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-02-08 04:37:52.391186 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-02-08 04:37:52.391218 | orchestrator | changed: [testbed-node-1] => 
(item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-02-08 04:37:52.391277 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-02-08 04:37:52.391289 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-02-08 04:37:52.391300 | orchestrator | 
2026-02-08 04:37:52.391312 | orchestrator | TASK [ceilometer : include_tasks] ********************************************** 2026-02-08 04:37:52.391324 | orchestrator | Sunday 08 February 2026 04:37:24 +0000 (0:00:01.795) 0:01:06.288 ******* 2026-02-08 04:37:52.391335 | orchestrator | skipping: [testbed-node-0] 2026-02-08 04:37:52.391346 | orchestrator | skipping: [testbed-node-1] 2026-02-08 04:37:52.391355 | orchestrator | skipping: [testbed-node-2] 2026-02-08 04:37:52.391365 | orchestrator | skipping: [testbed-node-3] 2026-02-08 04:37:52.391382 | orchestrator | skipping: [testbed-node-4] 2026-02-08 04:37:52.391398 | orchestrator | skipping: [testbed-node-5] 2026-02-08 04:37:52.391414 | orchestrator | 2026-02-08 04:37:52.391430 | orchestrator | TASK [ceilometer : Running Ceilometer bootstrap container] ********************* 2026-02-08 04:37:52.391447 | orchestrator | Sunday 08 February 2026 04:37:25 +0000 (0:00:00.650) 0:01:06.938 ******* 2026-02-08 04:37:52.391465 | orchestrator | changed: [testbed-node-0] 2026-02-08 04:37:52.391482 | orchestrator | 2026-02-08 04:37:52.391500 | orchestrator | TASK [ceilometer : Flush handlers] ********************************************* 2026-02-08 04:37:52.391514 | orchestrator | Sunday 08 February 2026 04:37:29 +0000 (0:00:04.783) 0:01:11.722 ******* 2026-02-08 04:37:52.391525 | orchestrator | 2026-02-08 04:37:52.391536 | orchestrator | TASK [ceilometer : Flush handlers] ********************************************* 2026-02-08 04:37:52.391547 | orchestrator | Sunday 08 February 2026 04:37:30 +0000 (0:00:00.074) 0:01:11.797 ******* 2026-02-08 04:37:52.391558 | orchestrator | 2026-02-08 04:37:52.391570 | orchestrator | TASK [ceilometer : Flush handlers] ********************************************* 2026-02-08 04:37:52.391581 | orchestrator | Sunday 08 February 2026 04:37:30 +0000 (0:00:00.091) 0:01:11.888 ******* 2026-02-08 04:37:52.391592 | orchestrator | 2026-02-08 04:37:52.391648 | orchestrator | TASK [ceilometer : Flush 
handlers] ********************************************* 2026-02-08 04:37:52.391666 | orchestrator | Sunday 08 February 2026 04:37:30 +0000 (0:00:00.292) 0:01:12.180 ******* 2026-02-08 04:37:52.391681 | orchestrator | 2026-02-08 04:37:52.391697 | orchestrator | TASK [ceilometer : Flush handlers] ********************************************* 2026-02-08 04:37:52.391712 | orchestrator | Sunday 08 February 2026 04:37:30 +0000 (0:00:00.073) 0:01:12.254 ******* 2026-02-08 04:37:52.391740 | orchestrator | 2026-02-08 04:37:52.391758 | orchestrator | TASK [ceilometer : Flush handlers] ********************************************* 2026-02-08 04:37:52.391773 | orchestrator | Sunday 08 February 2026 04:37:30 +0000 (0:00:00.079) 0:01:12.334 ******* 2026-02-08 04:37:52.391789 | orchestrator | 2026-02-08 04:37:52.391804 | orchestrator | RUNNING HANDLER [ceilometer : Restart ceilometer-notification container] ******* 2026-02-08 04:37:52.391819 | orchestrator | Sunday 08 February 2026 04:37:30 +0000 (0:00:00.076) 0:01:12.410 ******* 2026-02-08 04:37:52.391836 | orchestrator | changed: [testbed-node-0] 2026-02-08 04:37:52.391853 | orchestrator | changed: [testbed-node-1] 2026-02-08 04:37:52.391869 | orchestrator | changed: [testbed-node-2] 2026-02-08 04:37:52.391881 | orchestrator | 2026-02-08 04:37:52.391891 | orchestrator | RUNNING HANDLER [ceilometer : Restart ceilometer-central container] ************ 2026-02-08 04:37:52.391901 | orchestrator | Sunday 08 February 2026 04:37:40 +0000 (0:00:10.336) 0:01:22.747 ******* 2026-02-08 04:37:52.391910 | orchestrator | changed: [testbed-node-0] 2026-02-08 04:37:52.391920 | orchestrator | changed: [testbed-node-1] 2026-02-08 04:37:52.391929 | orchestrator | changed: [testbed-node-2] 2026-02-08 04:37:52.391939 | orchestrator | 2026-02-08 04:37:52.391948 | orchestrator | RUNNING HANDLER [ceilometer : Restart ceilometer-compute container] ************ 2026-02-08 04:37:52.391958 | orchestrator | Sunday 08 February 2026 04:37:45 +0000 
(0:00:04.833) 0:01:27.580 ******* 2026-02-08 04:37:52.391968 | orchestrator | changed: [testbed-node-4] 2026-02-08 04:37:52.391977 | orchestrator | changed: [testbed-node-3] 2026-02-08 04:37:52.391987 | orchestrator | changed: [testbed-node-5] 2026-02-08 04:37:52.391996 | orchestrator | 2026-02-08 04:37:52.392006 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-08 04:37:52.392017 | orchestrator | testbed-node-0 : ok=29  changed=13  unreachable=0 failed=0 skipped=21  rescued=0 ignored=0 2026-02-08 04:37:52.392038 | orchestrator | testbed-node-1 : ok=23  changed=10  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-02-08 04:37:52.392063 | orchestrator | testbed-node-2 : ok=23  changed=10  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-02-08 04:37:52.954942 | orchestrator | testbed-node-3 : ok=20  changed=7  unreachable=0 failed=0 skipped=19  rescued=0 ignored=0 2026-02-08 04:37:52.955079 | orchestrator | testbed-node-4 : ok=20  changed=7  unreachable=0 failed=0 skipped=19  rescued=0 ignored=0 2026-02-08 04:37:52.955094 | orchestrator | testbed-node-5 : ok=20  changed=7  unreachable=0 failed=0 skipped=19  rescued=0 ignored=0 2026-02-08 04:37:52.955107 | orchestrator | 2026-02-08 04:37:52.955119 | orchestrator | 2026-02-08 04:37:52.955131 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-08 04:37:52.955145 | orchestrator | Sunday 08 February 2026 04:37:52 +0000 (0:00:06.573) 0:01:34.154 ******* 2026-02-08 04:37:52.955157 | orchestrator | =============================================================================== 2026-02-08 04:37:52.955168 | orchestrator | ceilometer : Restart ceilometer-notification container ----------------- 10.34s 2026-02-08 04:37:52.955179 | orchestrator | ceilometer : Restart ceilometer-compute container ----------------------- 6.57s 2026-02-08 04:37:52.955189 | orchestrator | ceilometer : Copying over 
ceilometer.conf ------------------------------- 5.27s 2026-02-08 04:37:52.955200 | orchestrator | ceilometer : Restart ceilometer-central container ----------------------- 4.83s 2026-02-08 04:37:52.955235 | orchestrator | ceilometer : Running Ceilometer bootstrap container --------------------- 4.78s 2026-02-08 04:37:52.955246 | orchestrator | service-ks-register : ceilometer | Granting user roles ------------------ 3.90s 2026-02-08 04:37:52.955257 | orchestrator | service-ks-register : ceilometer | Creating users ----------------------- 3.72s 2026-02-08 04:37:52.955303 | orchestrator | service-ks-register : ceilometer | Creating projects -------------------- 3.32s 2026-02-08 04:37:52.955314 | orchestrator | service-ks-register : ceilometer | Creating roles ----------------------- 3.16s 2026-02-08 04:37:52.955325 | orchestrator | ceilometer : Copying over config.json files for services ---------------- 2.55s 2026-02-08 04:37:52.955335 | orchestrator | ceilometer : Copying over custom pipeline.yaml file --------------------- 2.52s 2026-02-08 04:37:52.955346 | orchestrator | service-cert-copy : ceilometer | Copying over extra CA certificates ----- 2.31s 2026-02-08 04:37:52.955357 | orchestrator | ceilometer : Check if the folder for custom meter definitions exist ----- 1.90s 2026-02-08 04:37:52.955367 | orchestrator | ceilometer : Check ceilometer containers -------------------------------- 1.80s 2026-02-08 04:37:52.955378 | orchestrator | ceilometer : Copying over event_definitions.yaml for notification service --- 1.70s 2026-02-08 04:37:52.955390 | orchestrator | ceilometer : Check custom event_definitions.yaml exists ----------------- 1.68s 2026-02-08 04:37:52.955401 | orchestrator | service-cert-copy : ceilometer | Copying over backend internal TLS key --- 1.65s 2026-02-08 04:37:52.955412 | orchestrator | ceilometer : Check custom pipeline.yaml exists -------------------------- 1.62s 2026-02-08 04:37:52.955423 | orchestrator | ceilometer : Ensuring config 
directories exist -------------------------- 1.60s 2026-02-08 04:37:52.955436 | orchestrator | ceilometer : Check if custom polling.yaml exists ------------------------ 1.55s 2026-02-08 04:37:55.660654 | orchestrator | 2026-02-08 04:37:55 | INFO  | Task e72a9797-caf7-4b31-b431-72d965d926ad (aodh) was prepared for execution. 2026-02-08 04:37:55.660837 | orchestrator | 2026-02-08 04:37:55 | INFO  | It takes a moment until task e72a9797-caf7-4b31-b431-72d965d926ad (aodh) has been started and output is visible here. 2026-02-08 04:38:27.771854 | orchestrator | 2026-02-08 04:38:27.771986 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-08 04:38:27.772010 | orchestrator | 2026-02-08 04:38:27.772025 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-08 04:38:27.772039 | orchestrator | Sunday 08 February 2026 04:38:00 +0000 (0:00:00.280) 0:00:00.280 ******* 2026-02-08 04:38:27.772054 | orchestrator | ok: [testbed-node-0] 2026-02-08 04:38:27.772070 | orchestrator | ok: [testbed-node-1] 2026-02-08 04:38:27.772085 | orchestrator | ok: [testbed-node-2] 2026-02-08 04:38:27.772099 | orchestrator | 2026-02-08 04:38:27.772114 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-08 04:38:27.772129 | orchestrator | Sunday 08 February 2026 04:38:00 +0000 (0:00:00.379) 0:00:00.659 ******* 2026-02-08 04:38:27.772143 | orchestrator | ok: [testbed-node-0] => (item=enable_aodh_True) 2026-02-08 04:38:27.772158 | orchestrator | ok: [testbed-node-1] => (item=enable_aodh_True) 2026-02-08 04:38:27.772173 | orchestrator | ok: [testbed-node-2] => (item=enable_aodh_True) 2026-02-08 04:38:27.772187 | orchestrator | 2026-02-08 04:38:27.772202 | orchestrator | PLAY [Apply role aodh] ********************************************************* 2026-02-08 04:38:27.772216 | orchestrator | 2026-02-08 04:38:27.772230 | orchestrator | TASK [aodh : 
include_tasks] **************************************************** 2026-02-08 04:38:27.772244 | orchestrator | Sunday 08 February 2026 04:38:01 +0000 (0:00:00.473) 0:00:01.133 ******* 2026-02-08 04:38:27.772258 | orchestrator | included: /ansible/roles/aodh/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-08 04:38:27.772272 | orchestrator | 2026-02-08 04:38:27.772288 | orchestrator | TASK [service-ks-register : aodh | Creating services] ************************** 2026-02-08 04:38:27.772321 | orchestrator | Sunday 08 February 2026 04:38:01 +0000 (0:00:00.601) 0:00:01.734 ******* 2026-02-08 04:38:27.772338 | orchestrator | changed: [testbed-node-0] => (item=aodh (alarming)) 2026-02-08 04:38:27.772354 | orchestrator | 2026-02-08 04:38:27.772369 | orchestrator | TASK [service-ks-register : aodh | Creating endpoints] ************************* 2026-02-08 04:38:27.772384 | orchestrator | Sunday 08 February 2026 04:38:05 +0000 (0:00:03.511) 0:00:05.246 ******* 2026-02-08 04:38:27.772400 | orchestrator | changed: [testbed-node-0] => (item=aodh -> https://api-int.testbed.osism.xyz:8042 -> internal) 2026-02-08 04:38:27.772446 | orchestrator | changed: [testbed-node-0] => (item=aodh -> https://api.testbed.osism.xyz:8042 -> public) 2026-02-08 04:38:27.772463 | orchestrator | 2026-02-08 04:38:27.772478 | orchestrator | TASK [service-ks-register : aodh | Creating projects] ************************** 2026-02-08 04:38:27.772593 | orchestrator | Sunday 08 February 2026 04:38:11 +0000 (0:00:06.443) 0:00:11.690 ******* 2026-02-08 04:38:27.772622 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-02-08 04:38:27.772639 | orchestrator | 2026-02-08 04:38:27.772655 | orchestrator | TASK [service-ks-register : aodh | Creating users] ***************************** 2026-02-08 04:38:27.772671 | orchestrator | Sunday 08 February 2026 04:38:14 +0000 (0:00:03.319) 0:00:15.009 ******* 2026-02-08 04:38:27.772689 | orchestrator | [WARNING]: Module did not set 
no_log for update_password 2026-02-08 04:38:27.772706 | orchestrator | changed: [testbed-node-0] => (item=aodh -> service) 2026-02-08 04:38:27.772722 | orchestrator | 2026-02-08 04:38:27.772736 | orchestrator | TASK [service-ks-register : aodh | Creating roles] ***************************** 2026-02-08 04:38:27.772745 | orchestrator | Sunday 08 February 2026 04:38:18 +0000 (0:00:03.809) 0:00:18.818 ******* 2026-02-08 04:38:27.772753 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-02-08 04:38:27.772762 | orchestrator | 2026-02-08 04:38:27.772771 | orchestrator | TASK [service-ks-register : aodh | Granting user roles] ************************ 2026-02-08 04:38:27.772779 | orchestrator | Sunday 08 February 2026 04:38:21 +0000 (0:00:03.211) 0:00:22.030 ******* 2026-02-08 04:38:27.772788 | orchestrator | changed: [testbed-node-0] => (item=aodh -> service -> admin) 2026-02-08 04:38:27.772796 | orchestrator | 2026-02-08 04:38:27.772805 | orchestrator | TASK [aodh : Ensuring config directories exist] ******************************** 2026-02-08 04:38:27.772814 | orchestrator | Sunday 08 February 2026 04:38:25 +0000 (0:00:03.786) 0:00:25.817 ******* 2026-02-08 04:38:27.772827 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-02-08 04:38:27.772863 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-02-08 04:38:27.772881 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-02-08 04:38:27.772906 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': 
{'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-02-08 04:38:27.772917 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-02-08 04:38:27.772926 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-02-08 04:38:27.772935 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-02-08 04:38:27.772951 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-02-08 04:38:29.154704 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-02-08 04:38:29.154800 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': 
['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-02-08 04:38:29.154806 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-02-08 04:38:29.154810 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-02-08 04:38:29.154815 | orchestrator | 2026-02-08 04:38:29.154820 | orchestrator | TASK [aodh : Check if policies shall be overwritten] *************************** 2026-02-08 04:38:29.154825 | orchestrator | Sunday 08 February 2026 04:38:27 +0000 (0:00:01.992) 0:00:27.810 ******* 2026-02-08 04:38:29.154829 | orchestrator | skipping: [testbed-node-0] 2026-02-08 04:38:29.154834 | orchestrator | 2026-02-08 
04:38:29.154838 | orchestrator | TASK [aodh : Set aodh policy file] ********************************************* 2026-02-08 04:38:29.154842 | orchestrator | Sunday 08 February 2026 04:38:27 +0000 (0:00:00.157) 0:00:27.967 ******* 2026-02-08 04:38:29.154846 | orchestrator | skipping: [testbed-node-0] 2026-02-08 04:38:29.154850 | orchestrator | skipping: [testbed-node-1] 2026-02-08 04:38:29.154853 | orchestrator | skipping: [testbed-node-2] 2026-02-08 04:38:29.154857 | orchestrator | 2026-02-08 04:38:29.154861 | orchestrator | TASK [aodh : Copying over existing policy file] ******************************** 2026-02-08 04:38:29.154865 | orchestrator | Sunday 08 February 2026 04:38:28 +0000 (0:00:00.539) 0:00:28.507 ******* 2026-02-08 04:38:29.154869 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-02-08 04:38:29.154886 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-02-08 04:38:29.154894 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-02-08 04:38:29.154901 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-02-08 04:38:29.154905 | orchestrator | skipping: [testbed-node-0] 2026-02-08 04:38:29.154909 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-02-08 04:38:29.154913 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-02-08 04:38:29.154917 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-02-08 04:38:29.154924 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-02-08 04:38:34.163608 | orchestrator | skipping: [testbed-node-1] 2026-02-08 04:38:34.163707 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-02-08 04:38:34.163719 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 
3306'], 'timeout': '30'}}})  2026-02-08 04:38:34.163728 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-02-08 04:38:34.163734 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-02-08 04:38:34.163740 | orchestrator | skipping: [testbed-node-2] 2026-02-08 04:38:34.163746 | orchestrator | 2026-02-08 04:38:34.163753 | orchestrator | TASK [aodh : include_tasks] **************************************************** 2026-02-08 04:38:34.163760 | orchestrator | Sunday 08 February 2026 04:38:29 +0000 (0:00:00.689) 0:00:29.196 ******* 2026-02-08 04:38:34.163766 | orchestrator | included: /ansible/roles/aodh/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-08 04:38:34.163772 | orchestrator | 2026-02-08 04:38:34.163778 | orchestrator | TASK [service-cert-copy : aodh | Copying over extra CA certificates] *********** 2026-02-08 04:38:34.163784 | orchestrator | Sunday 
08 February 2026 04:38:29 +0000 (0:00:00.759) 0:00:29.956 ******* 2026-02-08 04:38:34.163790 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-02-08 04:38:34.163824 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-02-08 04:38:34.163834 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 
'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-02-08 04:38:34.163841 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-02-08 04:38:34.163847 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 
3306'], 'timeout': '30'}}}) 2026-02-08 04:38:34.163853 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-02-08 04:38:34.163864 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-02-08 04:38:34.163875 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-02-08 04:38:34.831976 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-02-08 04:38:34.832076 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-02-08 04:38:34.832092 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-02-08 04:38:34.832104 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-02-08 04:38:34.832116 | orchestrator | 2026-02-08 04:38:34.832129 | orchestrator | TASK [service-cert-copy : aodh | Copying over backend internal TLS certificate] *** 2026-02-08 04:38:34.832163 | orchestrator | Sunday 08 February 2026 04:38:34 +0000 (0:00:04.252) 0:00:34.208 ******* 2026-02-08 04:38:34.832177 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-02-08 04:38:34.832190 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-02-08 04:38:34.832226 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-02-08 04:38:34.832239 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-02-08 04:38:34.832251 | orchestrator | skipping: [testbed-node-0] 2026-02-08 04:38:34.832264 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-02-08 04:38:34.832276 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-02-08 04:38:34.832294 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-02-08 04:38:34.832335 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-02-08 04:38:34.832348 | orchestrator | skipping: [testbed-node-1] 2026-02-08 04:38:34.832375 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-02-08 04:38:36.008855 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 
3306'], 'timeout': '30'}}})  2026-02-08 04:38:36.008939 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-02-08 04:38:36.008948 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-02-08 04:38:36.008979 | orchestrator | skipping: [testbed-node-2] 2026-02-08 04:38:36.008988 | orchestrator | 2026-02-08 04:38:36.008996 | orchestrator | TASK [service-cert-copy : aodh | Copying over backend internal TLS key] ******** 2026-02-08 04:38:36.009006 | orchestrator | Sunday 08 February 2026 04:38:34 +0000 (0:00:00.663) 0:00:34.872 ******* 2026-02-08 04:38:36.009018 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-02-08 04:38:36.009031 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-02-08 04:38:36.009056 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-02-08 04:38:36.009087 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-02-08 04:38:36.009100 | orchestrator | skipping: [testbed-node-0] 2026-02-08 04:38:36.009112 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-02-08 04:38:36.009130 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
aodh-evaluator 3306'], 'timeout': '30'}}})  2026-02-08 04:38:36.009137 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-02-08 04:38:36.009144 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-02-08 04:38:36.009151 | orchestrator | skipping: [testbed-node-1] 2026-02-08 04:38:36.009168 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': 
{'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-02-08 04:38:40.175356 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-02-08 04:38:40.175456 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-02-08 04:38:40.175531 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-02-08 04:38:40.175541 | orchestrator | skipping: [testbed-node-2] 2026-02-08 04:38:40.175549 | orchestrator | 2026-02-08 04:38:40.175570 | orchestrator | TASK [aodh : Copying over config.json files for services] ********************** 2026-02-08 04:38:40.175576 | orchestrator | Sunday 08 February 2026 04:38:35 +0000 (0:00:01.173) 0:00:36.045 ******* 2026-02-08 04:38:40.175581 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-02-08 04:38:40.175588 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-02-08 04:38:40.175625 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-02-08 04:38:40.175633 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-02-08 04:38:40.175647 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 
'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-02-08 04:38:40.175653 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-02-08 04:38:40.175660 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-02-08 04:38:40.175668 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-02-08 04:38:40.175679 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-02-08 04:38:40.175692 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-02-08 04:38:49.185029 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': 
['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-02-08 04:38:49.185146 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-02-08 04:38:49.185164 | orchestrator | 2026-02-08 04:38:49.185178 | orchestrator | TASK [aodh : Copying over aodh.conf] ******************************************* 2026-02-08 04:38:49.185193 | orchestrator | Sunday 08 February 2026 04:38:40 +0000 (0:00:04.171) 0:00:40.217 ******* 2026-02-08 04:38:49.185207 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 
'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-02-08 04:38:49.185223 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-02-08 04:38:49.185246 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-02-08 04:38:49.185305 | 
orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-02-08 04:38:49.185319 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-02-08 04:38:49.185331 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-02-08 04:38:49.185345 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 
'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-02-08 04:38:49.185357 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-02-08 04:38:49.185375 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-02-08 04:38:49.185388 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 
'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-02-08 04:38:49.185416 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-02-08 04:38:54.413654 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-02-08 04:38:54.413735 | orchestrator | 2026-02-08 04:38:54.413744 | orchestrator | TASK [aodh : Copying over wsgi-aodh files for services] ************************ 2026-02-08 04:38:54.413750 | orchestrator | Sunday 08 February 2026 04:38:49 +0000 (0:00:08.998) 0:00:49.215 ******* 2026-02-08 04:38:54.413754 | orchestrator | changed: [testbed-node-0] 2026-02-08 04:38:54.413759 | orchestrator | 
changed: [testbed-node-1] 2026-02-08 04:38:54.413763 | orchestrator | changed: [testbed-node-2] 2026-02-08 04:38:54.413767 | orchestrator | 2026-02-08 04:38:54.413772 | orchestrator | TASK [aodh : Check aodh containers] ******************************************** 2026-02-08 04:38:54.413776 | orchestrator | Sunday 08 February 2026 04:38:51 +0000 (0:00:01.850) 0:00:51.066 ******* 2026-02-08 04:38:54.413781 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-02-08 04:38:54.413788 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': 
'8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-02-08 04:38:54.413805 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-02-08 04:38:54.413837 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-02-08 04:38:54.413843 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 
'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-02-08 04:38:54.413847 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-02-08 04:38:54.413851 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-02-08 04:38:54.413855 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-02-08 04:38:54.413862 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-02-08 04:38:54.413871 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-02-08 04:38:54.413880 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2026-02-08 04:39:50.852676 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2026-02-08 04:39:50.852762 | orchestrator |
2026-02-08 04:39:50.852770 | orchestrator | TASK [aodh : include_tasks] ****************************************************
2026-02-08 04:39:50.852778 | orchestrator | Sunday 08 February 2026 04:38:54 +0000 (0:00:00.351) 0:00:54.453 *******
2026-02-08 04:39:50.852784 | orchestrator | skipping: [testbed-node-0]
2026-02-08 04:39:50.852790 | orchestrator | skipping: [testbed-node-1]
2026-02-08 04:39:50.852796 | orchestrator | skipping: [testbed-node-2]
2026-02-08 04:39:50.852801 | orchestrator |
2026-02-08 04:39:50.852806 | orchestrator | TASK [aodh : Creating aodh database] *******************************************
2026-02-08 04:39:50.852812 | orchestrator | Sunday 08 February 2026 04:38:54 +0000 (0:00:00.351) 0:00:54.805 *******
2026-02-08 04:39:50.852817 | orchestrator | changed: [testbed-node-0]
2026-02-08 04:39:50.852822 | orchestrator |
2026-02-08 04:39:50.852828 | orchestrator | TASK [aodh : Creating aodh database user and setting permissions] **************
2026-02-08 04:39:50.852833 | orchestrator | Sunday 08 February 2026 04:38:56 +0000 (0:00:02.032) 0:00:56.837 *******
2026-02-08 04:39:50.852838 | orchestrator | changed: [testbed-node-0]
2026-02-08 04:39:50.852843 | orchestrator |
2026-02-08 04:39:50.852848 | orchestrator | TASK [aodh : Running aodh bootstrap container] *********************************
2026-02-08 04:39:50.852853 | orchestrator | Sunday 08 February 2026 04:38:59 +0000 (0:00:02.285) 0:00:59.123 *******
2026-02-08 04:39:50.852858 | orchestrator | changed: [testbed-node-0]
2026-02-08 04:39:50.852863 | orchestrator |
2026-02-08 04:39:50.852868 | orchestrator | TASK [aodh : Flush handlers] ***************************************************
2026-02-08 04:39:50.852873 | orchestrator | Sunday 08 February 2026 04:39:11 +0000 (0:00:12.049) 0:01:11.172 *******
2026-02-08 04:39:50.852878 | orchestrator |
2026-02-08 04:39:50.852883 | orchestrator | TASK [aodh : Flush handlers] ***************************************************
2026-02-08 04:39:50.852908 | orchestrator | Sunday 08 February 2026 04:39:11 +0000 (0:00:00.072) 0:01:11.245 *******
2026-02-08 04:39:50.852913 | orchestrator |
2026-02-08 04:39:50.852918 | orchestrator | TASK [aodh : Flush handlers] ***************************************************
2026-02-08 04:39:50.852923 | orchestrator | Sunday 08 February 2026 04:39:11 +0000 (0:00:00.078) 0:01:11.324 *******
2026-02-08 04:39:50.852928 | orchestrator |
2026-02-08 04:39:50.852933 | orchestrator | RUNNING HANDLER [aodh : Restart aodh-api container] ****************************
2026-02-08 04:39:50.852938 | orchestrator | Sunday 08 February 2026 04:39:11 +0000 (0:00:00.284) 0:01:11.608 *******
2026-02-08 04:39:50.852943 | orchestrator | changed: [testbed-node-0]
2026-02-08 04:39:50.852948 | orchestrator | changed: [testbed-node-2]
2026-02-08 04:39:50.852953 | orchestrator | changed: [testbed-node-1]
2026-02-08 04:39:50.852958 | orchestrator |
2026-02-08 04:39:50.852963 | orchestrator | RUNNING HANDLER [aodh : Restart aodh-evaluator container] **********************
2026-02-08 04:39:50.852968 | orchestrator | Sunday 08 February 2026 04:39:21 +0000 (0:00:10.183) 0:01:21.791 *******
2026-02-08 04:39:50.852973 | orchestrator | changed: [testbed-node-1]
2026-02-08 04:39:50.852978 | orchestrator | changed: [testbed-node-2]
2026-02-08 04:39:50.852983 | orchestrator | changed: [testbed-node-0]
2026-02-08 04:39:50.852989 | orchestrator |
2026-02-08 04:39:50.852993 | orchestrator | RUNNING HANDLER [aodh : Restart aodh-listener container] ***********************
2026-02-08 04:39:50.852999 | orchestrator | Sunday 08 February 2026 04:39:29 +0000 (0:00:08.182) 0:01:29.974 *******
2026-02-08 04:39:50.853003 | orchestrator | changed: [testbed-node-0]
2026-02-08 04:39:50.853020 | orchestrator | changed: [testbed-node-1]
2026-02-08 04:39:50.853025 | orchestrator | changed: [testbed-node-2]
2026-02-08 04:39:50.853030 | orchestrator |
2026-02-08 04:39:50.853035 | orchestrator | RUNNING HANDLER [aodh : Restart aodh-notifier container] ***********************
2026-02-08 04:39:50.853040 | orchestrator | Sunday 08 February 2026 04:39:40 +0000 (0:00:10.215) 0:01:40.190 *******
2026-02-08 04:39:50.853045 | orchestrator | changed: [testbed-node-0]
2026-02-08 04:39:50.853050 | orchestrator | changed: [testbed-node-2]
2026-02-08 04:39:50.853055 | orchestrator | changed: [testbed-node-1]
2026-02-08 04:39:50.853060 | orchestrator |
2026-02-08 04:39:50.853065 | orchestrator | PLAY RECAP *********************************************************************
2026-02-08 04:39:50.853071 | orchestrator | testbed-node-0 : ok=23  changed=17  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-02-08 04:39:50.853078 | orchestrator | testbed-node-1 : ok=14  changed=10  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-02-08 04:39:50.853083 | orchestrator | testbed-node-2 : ok=14  changed=10  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-02-08 04:39:50.853088 | orchestrator |
2026-02-08 04:39:50.853093 | orchestrator |
2026-02-08 04:39:50.853098 | orchestrator | TASKS RECAP ********************************************************************
2026-02-08 04:39:50.853103 | orchestrator | Sunday 08 February 2026 04:39:50 +0000 (0:00:10.308) 0:01:50.498 *******
2026-02-08 04:39:50.853108 | orchestrator | ===============================================================================
2026-02-08 04:39:50.853113 | orchestrator | aodh : Running aodh bootstrap container -------------------------------- 12.05s
2026-02-08 04:39:50.853118 | orchestrator | aodh : Restart aodh-notifier container --------------------------------- 10.31s
2026-02-08 04:39:50.853134 | orchestrator | aodh : Restart aodh-listener container --------------------------------- 10.22s
2026-02-08 04:39:50.853139 | orchestrator | aodh : Restart aodh-api container -------------------------------------- 10.18s
2026-02-08 04:39:50.853144 | orchestrator | aodh : Copying over aodh.conf ------------------------------------------- 9.00s
2026-02-08 04:39:50.853149 | orchestrator | aodh : Restart aodh-evaluator container --------------------------------- 8.18s
2026-02-08 04:39:50.853154 | orchestrator | service-ks-register : aodh | Creating endpoints ------------------------- 6.44s
2026-02-08 04:39:50.853159 | orchestrator | service-cert-copy : aodh | Copying over extra CA certificates ----------- 4.25s
2026-02-08 04:39:50.853169 | orchestrator | aodh : Copying over config.json files for services ---------------------- 4.17s
2026-02-08 04:39:50.853174 | orchestrator | service-ks-register : aodh | Creating users ----------------------------- 3.81s
2026-02-08 04:39:50.853179 | orchestrator | service-ks-register : aodh | Granting user roles ------------------------ 3.79s
2026-02-08 04:39:50.853184 | orchestrator | service-ks-register : aodh | Creating services -------------------------- 3.51s
2026-02-08 04:39:50.853189 | orchestrator | aodh : Check aodh containers -------------------------------------------- 3.39s
2026-02-08 04:39:50.853194 | orchestrator | service-ks-register : aodh | Creating projects -------------------------- 3.32s
2026-02-08 04:39:50.853199 | orchestrator | service-ks-register : aodh | Creating roles ----------------------------- 3.21s
2026-02-08 04:39:50.853204 | orchestrator | aodh : Creating aodh database user and setting permissions -------------- 2.29s
2026-02-08 04:39:50.853209 | orchestrator | aodh : Creating aodh database ------------------------------------------- 2.03s
2026-02-08 04:39:50.853214 | orchestrator | aodh : Ensuring config directories exist -------------------------------- 1.99s
2026-02-08 04:39:50.853219 | orchestrator | aodh : Copying over wsgi-aodh files for services ------------------------ 1.85s
2026-02-08 04:39:50.853224 | orchestrator | service-cert-copy : aodh | Copying over backend internal TLS key -------- 1.17s
2026-02-08 04:39:53.344929 | orchestrator | 2026-02-08 04:39:53 | INFO  | Task 9eabb849-7b32-4ce7-a99d-eb9d46eabbf6 (kolla-ceph-rgw) was prepared for execution.
2026-02-08 04:39:53.345044 | orchestrator | 2026-02-08 04:39:53 | INFO  | It takes a moment until task 9eabb849-7b32-4ce7-a99d-eb9d46eabbf6 (kolla-ceph-rgw) has been started and output is visible here.
2026-02-08 04:40:31.043002 | orchestrator | 2026-02-08 04:40:31.043082 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-08 04:40:31.043090 | orchestrator | 2026-02-08 04:40:31.043096 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-08 04:40:31.043101 | orchestrator | Sunday 08 February 2026 04:39:57 +0000 (0:00:00.301) 0:00:00.301 ******* 2026-02-08 04:40:31.043106 | orchestrator | ok: [testbed-manager] 2026-02-08 04:40:31.043112 | orchestrator | ok: [testbed-node-0] 2026-02-08 04:40:31.043117 | orchestrator | ok: [testbed-node-1] 2026-02-08 04:40:31.043121 | orchestrator | ok: [testbed-node-2] 2026-02-08 04:40:31.043125 | orchestrator | ok: [testbed-node-3] 2026-02-08 04:40:31.043130 | orchestrator | ok: [testbed-node-4] 2026-02-08 04:40:31.043134 | orchestrator | ok: [testbed-node-5] 2026-02-08 04:40:31.043139 | orchestrator | 2026-02-08 04:40:31.043143 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-08 04:40:31.043148 | orchestrator | Sunday 08 February 2026 04:39:58 +0000 (0:00:00.919) 0:00:01.220 ******* 2026-02-08 04:40:31.043152 | orchestrator | ok: [testbed-manager] => (item=enable_ceph_rgw_True) 2026-02-08 04:40:31.043157 | orchestrator | ok: [testbed-node-0] => (item=enable_ceph_rgw_True) 2026-02-08 04:40:31.043161 | orchestrator | ok: [testbed-node-1] => (item=enable_ceph_rgw_True) 2026-02-08 04:40:31.043182 | orchestrator | ok: [testbed-node-2] => (item=enable_ceph_rgw_True) 2026-02-08 04:40:31.043187 | orchestrator | ok: [testbed-node-3] => (item=enable_ceph_rgw_True) 2026-02-08 04:40:31.043203 | orchestrator | ok: [testbed-node-4] => (item=enable_ceph_rgw_True) 2026-02-08 04:40:31.043208 | orchestrator | ok: [testbed-node-5] => (item=enable_ceph_rgw_True) 2026-02-08 04:40:31.043213 | orchestrator | 2026-02-08 04:40:31.043221 | orchestrator | PLAY [Apply role ceph-rgw] 
***************************************************** 2026-02-08 04:40:31.043228 | orchestrator | 2026-02-08 04:40:31.043234 | orchestrator | TASK [ceph-rgw : include_tasks] ************************************************ 2026-02-08 04:40:31.043241 | orchestrator | Sunday 08 February 2026 04:39:59 +0000 (0:00:00.835) 0:00:02.055 ******* 2026-02-08 04:40:31.043249 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-02-08 04:40:31.043257 | orchestrator | 2026-02-08 04:40:31.043282 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating services] ********************** 2026-02-08 04:40:31.043288 | orchestrator | Sunday 08 February 2026 04:40:01 +0000 (0:00:01.772) 0:00:03.827 ******* 2026-02-08 04:40:31.043294 | orchestrator | changed: [testbed-manager] => (item=swift (object-store)) 2026-02-08 04:40:31.043301 | orchestrator | 2026-02-08 04:40:31.043307 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating endpoints] ********************* 2026-02-08 04:40:31.043314 | orchestrator | Sunday 08 February 2026 04:40:05 +0000 (0:00:03.925) 0:00:07.753 ******* 2026-02-08 04:40:31.043322 | orchestrator | changed: [testbed-manager] => (item=swift -> https://api-int.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> internal) 2026-02-08 04:40:31.043330 | orchestrator | changed: [testbed-manager] => (item=swift -> https://api.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> public) 2026-02-08 04:40:31.043336 | orchestrator | 2026-02-08 04:40:31.043343 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating projects] ********************** 2026-02-08 04:40:31.043349 | orchestrator | Sunday 08 February 2026 04:40:11 +0000 (0:00:06.550) 0:00:14.304 ******* 2026-02-08 04:40:31.043355 | orchestrator | ok: [testbed-manager] => (item=service) 2026-02-08 04:40:31.043361 | orchestrator | 2026-02-08 04:40:31.043370 
| orchestrator | TASK [service-ks-register : ceph-rgw | Creating users] ************************* 2026-02-08 04:40:31.043376 | orchestrator | Sunday 08 February 2026 04:40:15 +0000 (0:00:03.201) 0:00:17.505 ******* 2026-02-08 04:40:31.043383 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-02-08 04:40:31.043389 | orchestrator | changed: [testbed-manager] => (item=ceph_rgw -> service) 2026-02-08 04:40:31.043396 | orchestrator | 2026-02-08 04:40:31.043402 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating roles] ************************* 2026-02-08 04:40:31.043409 | orchestrator | Sunday 08 February 2026 04:40:18 +0000 (0:00:03.915) 0:00:21.420 ******* 2026-02-08 04:40:31.043416 | orchestrator | ok: [testbed-manager] => (item=admin) 2026-02-08 04:40:31.043423 | orchestrator | changed: [testbed-manager] => (item=ResellerAdmin) 2026-02-08 04:40:31.043430 | orchestrator | 2026-02-08 04:40:31.043436 | orchestrator | TASK [service-ks-register : ceph-rgw | Granting user roles] ******************** 2026-02-08 04:40:31.043443 | orchestrator | Sunday 08 February 2026 04:40:25 +0000 (0:00:06.483) 0:00:27.904 ******* 2026-02-08 04:40:31.043449 | orchestrator | changed: [testbed-manager] => (item=ceph_rgw -> service -> admin) 2026-02-08 04:40:31.043456 | orchestrator | 2026-02-08 04:40:31.043463 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-08 04:40:31.043470 | orchestrator | testbed-manager : ok=9  changed=5  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-08 04:40:31.043478 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-08 04:40:31.043485 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-08 04:40:31.043491 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-08 04:40:31.043497 | 
orchestrator | testbed-node-3 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-08 04:40:31.043521 | orchestrator | testbed-node-4 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-08 04:40:31.043529 | orchestrator | testbed-node-5 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-08 04:40:31.043535 | orchestrator | 2026-02-08 04:40:31.043542 | orchestrator | 2026-02-08 04:40:31.043550 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-08 04:40:31.043557 | orchestrator | Sunday 08 February 2026 04:40:30 +0000 (0:00:05.011) 0:00:32.916 ******* 2026-02-08 04:40:31.043563 | orchestrator | =============================================================================== 2026-02-08 04:40:31.043576 | orchestrator | service-ks-register : ceph-rgw | Creating endpoints --------------------- 6.55s 2026-02-08 04:40:31.043583 | orchestrator | service-ks-register : ceph-rgw | Creating roles ------------------------- 6.48s 2026-02-08 04:40:31.043590 | orchestrator | service-ks-register : ceph-rgw | Granting user roles -------------------- 5.01s 2026-02-08 04:40:31.043597 | orchestrator | service-ks-register : ceph-rgw | Creating services ---------------------- 3.93s 2026-02-08 04:40:31.043608 | orchestrator | service-ks-register : ceph-rgw | Creating users ------------------------- 3.92s 2026-02-08 04:40:31.043615 | orchestrator | service-ks-register : ceph-rgw | Creating projects ---------------------- 3.20s 2026-02-08 04:40:31.043622 | orchestrator | ceph-rgw : include_tasks ------------------------------------------------ 1.77s 2026-02-08 04:40:31.043634 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.92s 2026-02-08 04:40:31.043640 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.84s 2026-02-08 04:40:33.600876 | orchestrator | 2026-02-08 04:40:33 | 
INFO  | Task bfe2a18d-e400-47e8-848c-cff39b16df64 (gnocchi) was prepared for execution. 2026-02-08 04:40:33.600973 | orchestrator | 2026-02-08 04:40:33 | INFO  | It takes a moment until task bfe2a18d-e400-47e8-848c-cff39b16df64 (gnocchi) has been started and output is visible here. 2026-02-08 04:40:39.373446 | orchestrator | 2026-02-08 04:40:39.373538 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-08 04:40:39.373554 | orchestrator | 2026-02-08 04:40:39.373564 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-08 04:40:39.373574 | orchestrator | Sunday 08 February 2026 04:40:38 +0000 (0:00:00.282) 0:00:00.282 ******* 2026-02-08 04:40:39.373584 | orchestrator | ok: [testbed-node-0] 2026-02-08 04:40:39.373594 | orchestrator | ok: [testbed-node-1] 2026-02-08 04:40:39.373601 | orchestrator | ok: [testbed-node-2] 2026-02-08 04:40:39.373607 | orchestrator | 2026-02-08 04:40:39.373613 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-08 04:40:39.373620 | orchestrator | Sunday 08 February 2026 04:40:38 +0000 (0:00:00.345) 0:00:00.628 ******* 2026-02-08 04:40:39.373626 | orchestrator | ok: [testbed-node-0] => (item=enable_gnocchi_False) 2026-02-08 04:40:39.373632 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: enable_gnocchi_True 2026-02-08 04:40:39.373639 | orchestrator | ok: [testbed-node-1] => (item=enable_gnocchi_False) 2026-02-08 04:40:39.373645 | orchestrator | ok: [testbed-node-2] => (item=enable_gnocchi_False) 2026-02-08 04:40:39.373651 | orchestrator | 2026-02-08 04:40:39.373657 | orchestrator | PLAY [Apply role gnocchi] ****************************************************** 2026-02-08 04:40:39.373663 | orchestrator | skipping: no hosts matched 2026-02-08 04:40:39.373669 | orchestrator | 2026-02-08 04:40:39.373675 | orchestrator | PLAY RECAP 
********************************************************************* 2026-02-08 04:40:39.373681 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-08 04:40:39.373690 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-08 04:40:39.373695 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-08 04:40:39.373701 | orchestrator | 2026-02-08 04:40:39.373707 | orchestrator | 2026-02-08 04:40:39.373713 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-08 04:40:39.373719 | orchestrator | Sunday 08 February 2026 04:40:39 +0000 (0:00:00.428) 0:00:01.057 ******* 2026-02-08 04:40:39.373725 | orchestrator | =============================================================================== 2026-02-08 04:40:39.373730 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.43s 2026-02-08 04:40:39.373736 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.35s 2026-02-08 04:40:42.072754 | orchestrator | 2026-02-08 04:40:42 | INFO  | Task 1139984f-80b0-4602-a3b9-dc6c4983dee7 (manila) was prepared for execution. 2026-02-08 04:40:42.072874 | orchestrator | 2026-02-08 04:40:42 | INFO  | It takes a moment until task 1139984f-80b0-4602-a3b9-dc6c4983dee7 (manila) has been started and output is visible here. 
2026-02-08 04:41:23.812641 | orchestrator | 2026-02-08 04:41:23.812735 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-08 04:41:23.812746 | orchestrator | 2026-02-08 04:41:23.812754 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-08 04:41:23.812763 | orchestrator | Sunday 08 February 2026 04:40:46 +0000 (0:00:00.291) 0:00:00.291 ******* 2026-02-08 04:41:23.812770 | orchestrator | ok: [testbed-node-0] 2026-02-08 04:41:23.812778 | orchestrator | ok: [testbed-node-1] 2026-02-08 04:41:23.812786 | orchestrator | ok: [testbed-node-2] 2026-02-08 04:41:23.812794 | orchestrator | 2026-02-08 04:41:23.812801 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-08 04:41:23.812813 | orchestrator | Sunday 08 February 2026 04:40:47 +0000 (0:00:00.364) 0:00:00.655 ******* 2026-02-08 04:41:23.812826 | orchestrator | ok: [testbed-node-0] => (item=enable_manila_True) 2026-02-08 04:41:23.812837 | orchestrator | ok: [testbed-node-1] => (item=enable_manila_True) 2026-02-08 04:41:23.812848 | orchestrator | ok: [testbed-node-2] => (item=enable_manila_True) 2026-02-08 04:41:23.812860 | orchestrator | 2026-02-08 04:41:23.812872 | orchestrator | PLAY [Apply role manila] ******************************************************* 2026-02-08 04:41:23.812886 | orchestrator | 2026-02-08 04:41:23.812899 | orchestrator | TASK [manila : include_tasks] ************************************************** 2026-02-08 04:41:23.812911 | orchestrator | Sunday 08 February 2026 04:40:47 +0000 (0:00:00.488) 0:00:01.144 ******* 2026-02-08 04:41:23.812924 | orchestrator | included: /ansible/roles/manila/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-08 04:41:23.812934 | orchestrator | 2026-02-08 04:41:23.812942 | orchestrator | TASK [manila : include_tasks] ************************************************** 2026-02-08 
04:41:23.812949 | orchestrator | Sunday 08 February 2026 04:40:48 +0000 (0:00:00.617) 0:00:01.762 ******* 2026-02-08 04:41:23.812957 | orchestrator | skipping: [testbed-node-0] 2026-02-08 04:41:23.812965 | orchestrator | skipping: [testbed-node-1] 2026-02-08 04:41:23.812973 | orchestrator | skipping: [testbed-node-2] 2026-02-08 04:41:23.812980 | orchestrator | 2026-02-08 04:41:23.812988 | orchestrator | TASK [service-ks-register : manila | Creating services] ************************ 2026-02-08 04:41:23.813001 | orchestrator | Sunday 08 February 2026 04:40:48 +0000 (0:00:00.518) 0:00:02.280 ******* 2026-02-08 04:41:23.813031 | orchestrator | changed: [testbed-node-0] => (item=manila (share)) 2026-02-08 04:41:23.813064 | orchestrator | changed: [testbed-node-0] => (item=manilav2 (sharev2)) 2026-02-08 04:41:23.813077 | orchestrator | 2026-02-08 04:41:23.813089 | orchestrator | TASK [service-ks-register : manila | Creating endpoints] *********************** 2026-02-08 04:41:23.813101 | orchestrator | Sunday 08 February 2026 04:40:55 +0000 (0:00:06.214) 0:00:08.495 ******* 2026-02-08 04:41:23.813115 | orchestrator | changed: [testbed-node-0] => (item=manila -> https://api-int.testbed.osism.xyz:8786/v1/%(tenant_id)s -> internal) 2026-02-08 04:41:23.813128 | orchestrator | changed: [testbed-node-0] => (item=manila -> https://api.testbed.osism.xyz:8786/v1/%(tenant_id)s -> public) 2026-02-08 04:41:23.813140 | orchestrator | changed: [testbed-node-0] => (item=manilav2 -> https://api-int.testbed.osism.xyz:8786/v2 -> internal) 2026-02-08 04:41:23.813152 | orchestrator | changed: [testbed-node-0] => (item=manilav2 -> https://api.testbed.osism.xyz:8786/v2 -> public) 2026-02-08 04:41:23.813159 | orchestrator | 2026-02-08 04:41:23.813167 | orchestrator | TASK [service-ks-register : manila | Creating projects] ************************ 2026-02-08 04:41:23.813174 | orchestrator | Sunday 08 February 2026 04:41:07 +0000 (0:00:12.759) 0:00:21.254 ******* 2026-02-08 04:41:23.813182 | 
orchestrator | ok: [testbed-node-0] => (item=service) 2026-02-08 04:41:23.813189 | orchestrator | 2026-02-08 04:41:23.813218 | orchestrator | TASK [service-ks-register : manila | Creating users] *************************** 2026-02-08 04:41:23.813226 | orchestrator | Sunday 08 February 2026 04:41:11 +0000 (0:00:03.188) 0:00:24.443 ******* 2026-02-08 04:41:23.813233 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-02-08 04:41:23.813240 | orchestrator | changed: [testbed-node-0] => (item=manila -> service) 2026-02-08 04:41:23.813248 | orchestrator | 2026-02-08 04:41:23.813255 | orchestrator | TASK [service-ks-register : manila | Creating roles] *************************** 2026-02-08 04:41:23.813262 | orchestrator | Sunday 08 February 2026 04:41:14 +0000 (0:00:03.694) 0:00:28.137 ******* 2026-02-08 04:41:23.813269 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-02-08 04:41:23.813276 | orchestrator | 2026-02-08 04:41:23.813284 | orchestrator | TASK [service-ks-register : manila | Granting user roles] ********************** 2026-02-08 04:41:23.813291 | orchestrator | Sunday 08 February 2026 04:41:17 +0000 (0:00:03.190) 0:00:31.328 ******* 2026-02-08 04:41:23.813298 | orchestrator | changed: [testbed-node-0] => (item=manila -> service -> admin) 2026-02-08 04:41:23.813305 | orchestrator | 2026-02-08 04:41:23.813312 | orchestrator | TASK [manila : Ensuring config directories exist] ****************************** 2026-02-08 04:41:23.813319 | orchestrator | Sunday 08 February 2026 04:41:21 +0000 (0:00:03.756) 0:00:35.085 ******* 2026-02-08 04:41:23.813347 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-02-08 04:41:23.813360 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-02-08 04:41:23.813372 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 
'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-02-08 04:41:23.813381 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-02-08 04:41:23.813402 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-02-08 04:41:23.813410 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-02-08 04:41:23.813425 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}}) 2026-02-08 04:41:34.279568 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}}) 2026-02-08 04:41:34.279675 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}}) 2026-02-08 04:41:34.279706 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-02-08 04:41:34.279739 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-02-08 04:41:34.279750 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-02-08 04:41:34.279761 | orchestrator | 2026-02-08 04:41:34.279772 | orchestrator | TASK [manila : include_tasks] ************************************************** 2026-02-08 04:41:34.279784 | orchestrator | Sunday 08 February 2026 04:41:23 +0000 (0:00:02.227) 0:00:37.312 ******* 2026-02-08 04:41:34.279795 | orchestrator | included: /ansible/roles/manila/tasks/external_ceph.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-08 04:41:34.279807 | orchestrator | 2026-02-08 04:41:34.279817 | orchestrator | TASK [manila : Ensuring manila service ceph config subdir exists] ************** 2026-02-08 04:41:34.279828 | orchestrator | Sunday 08 February 2026 04:41:24 +0000 (0:00:00.577) 0:00:37.890 ******* 2026-02-08 04:41:34.279839 | orchestrator | changed: [testbed-node-0] 2026-02-08 04:41:34.279851 | orchestrator | changed: [testbed-node-1] 2026-02-08 04:41:34.279861 | orchestrator | changed: [testbed-node-2] 2026-02-08 04:41:34.279872 | orchestrator | 2026-02-08 04:41:34.279883 | orchestrator | TASK [manila : Copy over multiple ceph configs for Manila] ********************* 2026-02-08 04:41:34.279894 | orchestrator | Sunday 08 February 2026 04:41:25 +0000 (0:00:00.950) 0:00:38.840 ******* 2026-02-08 04:41:34.279906 | orchestrator | changed: [testbed-node-1] => (item={'name': 'cephfsnative1', 'share_name': 'CEPHFS1', 'driver': 'cephfsnative', 'cluster': 'ceph', 'enabled': True, 'protocols': ['CEPHFS']}) 2026-02-08 04:41:34.279948 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'cephfsnfs1', 'share_name': 'CEPHFSNFS1', 'driver': 'cephfsnfs', 'cluster': 'ceph', 'enabled': False, 'protocols': ['NFS', 'CIFS']})  2026-02-08 04:41:34.279960 | 
orchestrator | changed: [testbed-node-0] => (item={'name': 'cephfsnative1', 'share_name': 'CEPHFS1', 'driver': 'cephfsnative', 'cluster': 'ceph', 'enabled': True, 'protocols': ['CEPHFS']}) 2026-02-08 04:41:34.279972 | orchestrator | changed: [testbed-node-2] => (item={'name': 'cephfsnative1', 'share_name': 'CEPHFS1', 'driver': 'cephfsnative', 'cluster': 'ceph', 'enabled': True, 'protocols': ['CEPHFS']}) 2026-02-08 04:41:34.279983 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'cephfsnfs1', 'share_name': 'CEPHFSNFS1', 'driver': 'cephfsnfs', 'cluster': 'ceph', 'enabled': False, 'protocols': ['NFS', 'CIFS']})  2026-02-08 04:41:34.279994 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'cephfsnfs1', 'share_name': 'CEPHFSNFS1', 'driver': 'cephfsnfs', 'cluster': 'ceph', 'enabled': False, 'protocols': ['NFS', 'CIFS']})  2026-02-08 04:41:34.280013 | orchestrator | 2026-02-08 04:41:34.280058 | orchestrator | TASK [manila : Copy over ceph Manila keyrings] ********************************* 2026-02-08 04:41:34.280073 | orchestrator | Sunday 08 February 2026 04:41:27 +0000 (0:00:01.785) 0:00:40.625 ******* 2026-02-08 04:41:34.280087 | orchestrator | changed: [testbed-node-0] => (item={'name': 'cephfsnative1', 'share_name': 'CEPHFS1', 'driver': 'cephfsnative', 'cluster': 'ceph', 'enabled': True, 'protocols': ['CEPHFS']}) 2026-02-08 04:41:34.280106 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'cephfsnfs1', 'share_name': 'CEPHFSNFS1', 'driver': 'cephfsnfs', 'cluster': 'ceph', 'enabled': False, 'protocols': ['NFS', 'CIFS']})  2026-02-08 04:41:34.280120 | orchestrator | changed: [testbed-node-1] => (item={'name': 'cephfsnative1', 'share_name': 'CEPHFS1', 'driver': 'cephfsnative', 'cluster': 'ceph', 'enabled': True, 'protocols': ['CEPHFS']}) 2026-02-08 04:41:34.280132 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'cephfsnfs1', 'share_name': 'CEPHFSNFS1', 'driver': 'cephfsnfs', 'cluster': 'ceph', 'enabled': False, 
'protocols': ['NFS', 'CIFS']})  2026-02-08 04:41:34.280145 | orchestrator | changed: [testbed-node-2] => (item={'name': 'cephfsnative1', 'share_name': 'CEPHFS1', 'driver': 'cephfsnative', 'cluster': 'ceph', 'enabled': True, 'protocols': ['CEPHFS']}) 2026-02-08 04:41:34.280158 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'cephfsnfs1', 'share_name': 'CEPHFSNFS1', 'driver': 'cephfsnfs', 'cluster': 'ceph', 'enabled': False, 'protocols': ['NFS', 'CIFS']})  2026-02-08 04:41:34.280171 | orchestrator | 2026-02-08 04:41:34.280184 | orchestrator | TASK [manila : Ensuring config directory has correct owner and permission] ***** 2026-02-08 04:41:34.280196 | orchestrator | Sunday 08 February 2026 04:41:28 +0000 (0:00:01.177) 0:00:41.802 ******* 2026-02-08 04:41:34.280210 | orchestrator | ok: [testbed-node-0] => (item=manila-share) 2026-02-08 04:41:34.280223 | orchestrator | ok: [testbed-node-1] => (item=manila-share) 2026-02-08 04:41:34.280235 | orchestrator | ok: [testbed-node-2] => (item=manila-share) 2026-02-08 04:41:34.280246 | orchestrator | 2026-02-08 04:41:34.280257 | orchestrator | TASK [manila : Check if policies shall be overwritten] ************************* 2026-02-08 04:41:34.280268 | orchestrator | Sunday 08 February 2026 04:41:29 +0000 (0:00:00.717) 0:00:42.520 ******* 2026-02-08 04:41:34.280285 | orchestrator | skipping: [testbed-node-0] 2026-02-08 04:41:34.280304 | orchestrator | 2026-02-08 04:41:34.280322 | orchestrator | TASK [manila : Set manila policy file] ***************************************** 2026-02-08 04:41:34.280342 | orchestrator | Sunday 08 February 2026 04:41:29 +0000 (0:00:00.163) 0:00:42.683 ******* 2026-02-08 04:41:34.280362 | orchestrator | skipping: [testbed-node-0] 2026-02-08 04:41:34.280382 | orchestrator | skipping: [testbed-node-1] 2026-02-08 04:41:34.280399 | orchestrator | skipping: [testbed-node-2] 2026-02-08 04:41:34.280418 | orchestrator | 2026-02-08 04:41:34.280430 | orchestrator | TASK [manila : include_tasks] 
************************************************** 2026-02-08 04:41:34.280440 | orchestrator | Sunday 08 February 2026 04:41:29 +0000 (0:00:00.559) 0:00:43.242 ******* 2026-02-08 04:41:34.280451 | orchestrator | included: /ansible/roles/manila/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-08 04:41:34.280462 | orchestrator | 2026-02-08 04:41:34.280472 | orchestrator | TASK [service-cert-copy : manila | Copying over extra CA certificates] ********* 2026-02-08 04:41:34.280483 | orchestrator | Sunday 08 February 2026 04:41:30 +0000 (0:00:00.629) 0:00:43.872 ******* 2026-02-08 04:41:34.280505 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-02-08 04:41:35.196437 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-02-08 04:41:35.196542 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-02-08 04:41:35.196554 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-02-08 04:41:35.196565 | orchestrator | changed: [testbed-node-1] => 
(item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-02-08 04:41:35.196574 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-02-08 04:41:35.196596 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}}) 2026-02-08 04:41:35.196625 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-share', 
'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}}) 2026-02-08 04:41:35.196639 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}}) 2026-02-08 04:41:35.196647 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-02-08 04:41:35.196656 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-02-08 04:41:35.196664 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-02-08 04:41:35.196672 | orchestrator | 2026-02-08 04:41:35.196682 | orchestrator | TASK [service-cert-copy : manila | Copying over backend internal TLS certificate] *** 2026-02-08 04:41:35.196691 | orchestrator | Sunday 08 February 2026 04:41:34 +0000 (0:00:03.909) 0:00:47.781 ******* 2026-02-08 04:41:35.196713 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-02-08 04:41:35.915767 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-02-08 04:41:35.915857 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-02-08 04:41:35.915866 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 
'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-02-08 04:41:35.915872 | orchestrator | skipping: [testbed-node-0] 2026-02-08 04:41:35.915878 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-02-08 04:41:35.915884 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-02-08 04:41:35.915907 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-02-08 04:41:35.915921 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-02-08 04:41:35.915925 | orchestrator | skipping: [testbed-node-1] 2026-02-08 04:41:35.915932 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-02-08 04:41:35.915937 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-02-08 04:41:35.915941 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-02-08 04:41:35.915945 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': 
True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-02-08 04:41:35.915952 | orchestrator | skipping: [testbed-node-2] 2026-02-08 04:41:35.915956 | orchestrator | 2026-02-08 04:41:35.915961 | orchestrator | TASK [service-cert-copy : manila | Copying over backend internal TLS key] ****** 2026-02-08 04:41:35.915966 | orchestrator | Sunday 08 February 2026 04:41:35 +0000 (0:00:00.918) 0:00:48.699 ******* 2026-02-08 04:41:35.915975 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-02-08 04:41:40.339776 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': 
['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-02-08 04:41:40.339868 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-02-08 04:41:40.339881 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-02-08 04:41:40.339890 | orchestrator | skipping: [testbed-node-0] 2026-02-08 04:41:40.339900 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 
'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-02-08 04:41:40.339927 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-02-08 04:41:40.339935 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-02-08 04:41:40.339962 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-02-08 04:41:40.339971 | orchestrator | skipping: [testbed-node-1] 2026-02-08 04:41:40.339978 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-02-08 04:41:40.339986 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': 
['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-02-08 04:41:40.340000 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-02-08 04:41:40.340091 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-02-08 04:41:40.340103 | orchestrator | skipping: [testbed-node-2] 2026-02-08 04:41:40.340110 | orchestrator | 2026-02-08 04:41:40.340119 | orchestrator | TASK [manila : Copying over config.json files for services] ******************** 2026-02-08 04:41:40.340128 | orchestrator | Sunday 08 
February 2026 04:41:36 +0000 (0:00:00.933) 0:00:49.633 ******* 2026-02-08 04:41:40.340144 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-02-08 04:41:48.114308 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-02-08 04:41:48.114391 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 
'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-02-08 04:41:48.114417 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-02-08 04:41:48.114424 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 
'timeout': '30'}}})
2026-02-08 04:41:48.114429 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2026-02-08 04:41:48.114444 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2026-02-08 04:41:48.114454 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2026-02-08 04:41:48.114460 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2026-02-08 04:41:48.114469 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2026-02-08 04:41:48.114474 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2026-02-08 04:41:48.114479 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2026-02-08 04:41:48.114484 | orchestrator |
2026-02-08 04:41:48.114490 | orchestrator | TASK [manila : Copying over manila.conf] ***************************************
2026-02-08 04:41:48.114496 | orchestrator | Sunday 08 February 2026 04:41:40 +0000 (0:00:04.561) 0:00:54.195 *******
2026-02-08 04:41:48.114507 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2026-02-08 04:41:53.276600 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2026-02-08 04:41:53.276695 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2026-02-08 04:41:53.276704 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2026-02-08 04:41:53.276712 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2026-02-08 04:41:53.276717 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2026-02-08 04:41:53.276745 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2026-02-08 04:41:53.276751 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2026-02-08 04:41:53.276761 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2026-02-08 04:41:53.276768 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2026-02-08 04:41:53.276774 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2026-02-08 04:41:53.276780 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2026-02-08 04:41:53.276785 | orchestrator |
2026-02-08 04:41:53.276792 | orchestrator | TASK [manila : Copying over manila-share.conf] *********************************
2026-02-08 04:41:53.276798 | orchestrator | Sunday 08 February 2026 04:41:48 +0000 (0:00:07.435) 0:01:01.631 *******
2026-02-08 04:41:53.276804 | orchestrator | changed: [testbed-node-1] => (item=manila-share)
2026-02-08 04:41:53.276809 | orchestrator | changed: [testbed-node-0] => (item=manila-share)
2026-02-08 04:41:53.276814 | orchestrator | changed: [testbed-node-2] => (item=manila-share)
2026-02-08 04:41:53.276819 | orchestrator |
2026-02-08 04:41:53.276825 | orchestrator | TASK [manila : Copying over existing policy file] ******************************
2026-02-08 04:41:53.276830 | orchestrator | Sunday 08 February 2026 04:41:52 +0000 (0:00:04.364) 0:01:05.995 *******
2026-02-08 04:41:53.276843 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2026-02-08 04:41:56.654162 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2026-02-08 04:41:56.654287 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2026-02-08 04:41:56.654307 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2026-02-08 04:41:56.654320 | orchestrator | skipping: [testbed-node-0]
2026-02-08 04:41:56.654333 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2026-02-08 04:41:56.654345 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2026-02-08 04:41:56.654373 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2026-02-08 04:41:56.654435 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2026-02-08 04:41:56.654456 | orchestrator | skipping: [testbed-node-1]
2026-02-08 04:41:56.654474 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2026-02-08 04:41:56.654492 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2026-02-08 04:41:56.654511 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2026-02-08 04:41:56.654529 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2026-02-08 04:41:56.654560 | orchestrator | skipping: [testbed-node-2]
2026-02-08 04:41:56.654573 | orchestrator |
2026-02-08 04:41:56.654584 | orchestrator | TASK [manila : Check manila containers] ****************************************
2026-02-08 04:41:56.654602 | orchestrator | Sunday 08 February 2026 04:41:53 +0000 (0:00:00.788) 0:01:06.783 *******
2026-02-08 04:41:56.654625 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2026-02-08 04:42:35.144100 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2026-02-08 04:42:35.144214 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2026-02-08 04:42:35.144233 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2026-02-08 04:42:35.144247 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2026-02-08 04:42:35.144298 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2026-02-08 04:42:35.144329 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2026-02-08 04:42:35.144343 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2026-02-08 04:42:35.144354 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2026-02-08 04:42:35.144365 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2026-02-08 04:42:35.144377 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2026-02-08 04:42:35.144401 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2026-02-08 04:42:35.144413 | orchestrator |
2026-02-08 04:42:35.144427 | orchestrator | TASK [manila : Creating Manila database] ***************************************
2026-02-08 04:42:35.144439 | orchestrator | Sunday 08 February 2026 04:41:56 +0000 (0:00:03.409) 0:01:10.192 *******
2026-02-08 04:42:35.144450 | orchestrator | changed: [testbed-node-0]
2026-02-08 04:42:35.144462 | orchestrator |
2026-02-08 04:42:35.144474 | orchestrator | TASK [manila : Creating Manila database user and setting permissions] **********
2026-02-08 04:42:35.144484 | orchestrator | Sunday 08 February 2026 04:41:58 +0000 (0:00:02.068) 0:01:12.261 *******
2026-02-08 04:42:35.144495 | orchestrator | changed: [testbed-node-0]
2026-02-08 04:42:35.144506 | orchestrator |
2026-02-08 04:42:35.144517 | orchestrator | TASK [manila : Running Manila bootstrap container] *****************************
2026-02-08 04:42:35.144527 | orchestrator | Sunday 08 February 2026 04:42:01 +0000 (0:00:02.213) 0:01:14.474 *******
2026-02-08 04:42:35.144538 | orchestrator | changed: [testbed-node-0]
2026-02-08 04:42:35.144549 | orchestrator |
2026-02-08 04:42:35.144559 | orchestrator | TASK [manila : Flush handlers] *************************************************
2026-02-08 04:42:35.144570 | orchestrator | Sunday 08 February 2026 04:42:34 +0000 (0:00:33.818) 0:01:48.293 *******
2026-02-08 04:42:35.144581 | orchestrator |
2026-02-08 04:42:35.144599 | orchestrator | TASK [manila : Flush handlers] *************************************************
2026-02-08 04:43:30.169554 | orchestrator | Sunday 08 February 2026 04:42:34 +0000 (0:00:00.073) 0:01:48.366 *******
2026-02-08 04:43:30.169637 | orchestrator |
2026-02-08 04:43:30.169646 | orchestrator | TASK [manila : Flush handlers] *************************************************
2026-02-08 04:43:30.169652 | orchestrator | Sunday 08 February 2026 04:42:35 +0000 (0:00:00.076) 0:01:48.443 *******
2026-02-08 04:43:30.169658 | orchestrator |
2026-02-08 04:43:30.169664 | orchestrator | RUNNING HANDLER [manila : Restart manila-api container] ************************
2026-02-08 04:43:30.169670 | orchestrator | Sunday 08 February 2026 04:42:35 +0000 (0:00:00.093) 0:01:48.536 *******
2026-02-08 04:43:30.169675 | orchestrator | changed: [testbed-node-0]
2026-02-08 04:43:30.169682 | orchestrator | changed: [testbed-node-1]
2026-02-08 04:43:30.169687 | orchestrator | changed: [testbed-node-2]
2026-02-08 04:43:30.169692 | orchestrator |
2026-02-08 04:43:30.169698 | orchestrator | RUNNING HANDLER [manila : Restart manila-data container] ***********************
2026-02-08 04:43:30.169703 | orchestrator | Sunday 08 February 2026 04:42:50 +0000 (0:00:15.156) 0:02:03.692 *******
2026-02-08 04:43:30.169708 | orchestrator | changed: [testbed-node-0]
2026-02-08 04:43:30.169713 | orchestrator | changed: [testbed-node-1]
2026-02-08 04:43:30.169719 | orchestrator | changed: [testbed-node-2]
2026-02-08 04:43:30.169724 | orchestrator |
2026-02-08 04:43:30.169729 | orchestrator | RUNNING HANDLER [manila : Restart manila-scheduler container] ******************
2026-02-08 04:43:30.169734 | orchestrator | Sunday 08 February 2026 04:43:01 +0000 (0:00:11.432) 0:02:15.125 *******
2026-02-08 04:43:30.169739 | orchestrator | changed: [testbed-node-2]
2026-02-08 04:43:30.169744 | orchestrator | changed: [testbed-node-1]
2026-02-08 04:43:30.169749 | orchestrator | changed: [testbed-node-0]
2026-02-08 04:43:30.169754 | orchestrator |
2026-02-08 04:43:30.169759 | orchestrator | RUNNING HANDLER [manila : Restart manila-share container] **********************
2026-02-08 04:43:30.169765 | orchestrator | Sunday 08 February 2026 04:43:11 +0000 (0:00:09.800) 0:02:24.926 *******
2026-02-08 04:43:30.169770 | orchestrator | changed: [testbed-node-2]
2026-02-08 04:43:30.169859 | orchestrator | changed: [testbed-node-0]
2026-02-08 04:43:30.169866 | orchestrator | changed: [testbed-node-1]
2026-02-08 04:43:30.169871 | orchestrator |
2026-02-08 04:43:30.169876 | orchestrator | PLAY RECAP *********************************************************************
2026-02-08 04:43:30.169883 | orchestrator | testbed-node-0 : ok=28  changed=20  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-02-08 04:43:30.169889 | orchestrator | testbed-node-1 : ok=19  changed=13  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-02-08 04:43:30.169894 | orchestrator | testbed-node-2 : ok=19  changed=13  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-02-08 04:43:30.169899 | orchestrator |
2026-02-08 04:43:30.169905 | orchestrator |
2026-02-08 04:43:30.169910 | orchestrator | TASKS RECAP ********************************************************************
2026-02-08 04:43:30.169915 | orchestrator | Sunday 08 February 2026 04:43:29 +0000 (0:00:18.069) 0:02:42.995 *******
2026-02-08 04:43:30.169923 | orchestrator | ===============================================================================
2026-02-08 04:43:30.169931 | orchestrator | manila : Running Manila bootstrap container ---------------------------- 33.82s
2026-02-08 04:43:30.169939 | orchestrator | manila : Restart manila-share container -------------------------------- 18.07s
2026-02-08 04:43:30.169956 | orchestrator | manila : Restart manila-api container ---------------------------------- 15.16s
2026-02-08 04:43:30.169964 | orchestrator | service-ks-register : manila | Creating endpoints ---------------------- 12.76s
2026-02-08 04:43:30.169972 | orchestrator | manila : Restart manila-data container --------------------------------- 11.43s
2026-02-08 04:43:30.169981 | orchestrator | manila : Restart manila-scheduler container ----------------------------- 9.80s
2026-02-08 04:43:30.169990 | orchestrator | manila : Copying over manila.conf --------------------------------------- 7.44s
2026-02-08 04:43:30.169999 | orchestrator | service-ks-register : manila | Creating services ------------------------ 6.22s
2026-02-08 04:43:30.170007 | orchestrator | manila : Copying over config.json files for services -------------------- 4.56s
2026-02-08 04:43:30.170055 | orchestrator | manila : Copying over manila-share.conf --------------------------------- 4.36s
2026-02-08 04:43:30.170064 | orchestrator | service-cert-copy : manila | Copying over extra CA certificates --------- 3.91s
2026-02-08 04:43:30.170071 | orchestrator | service-ks-register : manila | Granting user roles ---------------------- 3.76s
2026-02-08 04:43:30.170093 | orchestrator | service-ks-register : manila | Creating users --------------------------- 3.69s
2026-02-08 04:43:30.170101 | orchestrator | manila : Check manila containers ---------------------------------------- 3.41s
2026-02-08 04:43:30.170110 | orchestrator | service-ks-register : manila | Creating roles --------------------------- 3.19s
2026-02-08 04:43:30.170117 | orchestrator | service-ks-register : manila | Creating projects ------------------------ 3.19s
2026-02-08 04:43:30.170126 | orchestrator | manila : Ensuring config directories exist ------------------------------ 2.23s
2026-02-08 04:43:30.170133 | orchestrator | manila : Creating Manila database user and setting permissions ---------- 2.21s
2026-02-08 04:43:30.170142 | orchestrator | manila : Creating Manila database --------------------------------------- 2.07s
2026-02-08 04:43:30.170150 | orchestrator | manila : Copy over multiple ceph configs for Manila --------------------- 1.79s
2026-02-08 04:43:30.756950 | orchestrator | + sh -c /opt/configuration/scripts/deploy/400-monitoring.sh
2026-02-08 04:43:42.981511 | orchestrator | 2026-02-08 04:43:42
| INFO  | Task 6e1b279c-3724-49a5-8120-b91576763888 (netdata) was prepared for execution. 2026-02-08 04:43:42.981596 | orchestrator | 2026-02-08 04:43:42 | INFO  | It takes a moment until task 6e1b279c-3724-49a5-8120-b91576763888 (netdata) has been started and output is visible here. 2026-02-08 04:45:16.749777 | orchestrator | 2026-02-08 04:45:16.749968 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-08 04:45:16.749994 | orchestrator | 2026-02-08 04:45:16.750005 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-08 04:45:16.750096 | orchestrator | Sunday 08 February 2026 04:43:48 +0000 (0:00:00.260) 0:00:00.260 ******* 2026-02-08 04:45:16.750109 | orchestrator | changed: [testbed-manager] => (item=enable_netdata_True) 2026-02-08 04:45:16.750119 | orchestrator | changed: [testbed-node-0] => (item=enable_netdata_True) 2026-02-08 04:45:16.750129 | orchestrator | changed: [testbed-node-1] => (item=enable_netdata_True) 2026-02-08 04:45:16.750138 | orchestrator | changed: [testbed-node-2] => (item=enable_netdata_True) 2026-02-08 04:45:16.750148 | orchestrator | changed: [testbed-node-3] => (item=enable_netdata_True) 2026-02-08 04:45:16.750157 | orchestrator | changed: [testbed-node-4] => (item=enable_netdata_True) 2026-02-08 04:45:16.750167 | orchestrator | changed: [testbed-node-5] => (item=enable_netdata_True) 2026-02-08 04:45:16.750176 | orchestrator | 2026-02-08 04:45:16.750186 | orchestrator | PLAY [Apply role netdata] ****************************************************** 2026-02-08 04:45:16.750195 | orchestrator | 2026-02-08 04:45:16.750205 | orchestrator | TASK [osism.services.netdata : Include distribution specific install tasks] **** 2026-02-08 04:45:16.750229 | orchestrator | Sunday 08 February 2026 04:43:49 +0000 (0:00:00.911) 0:00:01.172 ******* 2026-02-08 04:45:16.750242 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-02-08 04:45:16.750265 | orchestrator | 2026-02-08 04:45:16.750275 | orchestrator | TASK [osism.services.netdata : Remove old architecture-dependent repository] *** 2026-02-08 04:45:16.750284 | orchestrator | Sunday 08 February 2026 04:43:50 +0000 (0:00:01.397) 0:00:02.569 ******* 2026-02-08 04:45:16.750294 | orchestrator | ok: [testbed-manager] 2026-02-08 04:45:16.750305 | orchestrator | ok: [testbed-node-0] 2026-02-08 04:45:16.750315 | orchestrator | ok: [testbed-node-1] 2026-02-08 04:45:16.750324 | orchestrator | ok: [testbed-node-2] 2026-02-08 04:45:16.750334 | orchestrator | ok: [testbed-node-3] 2026-02-08 04:45:16.750344 | orchestrator | ok: [testbed-node-4] 2026-02-08 04:45:16.750355 | orchestrator | ok: [testbed-node-5] 2026-02-08 04:45:16.750364 | orchestrator | 2026-02-08 04:45:16.750374 | orchestrator | TASK [osism.services.netdata : Install apt-transport-https package] ************ 2026-02-08 04:45:16.750384 | orchestrator | Sunday 08 February 2026 04:43:52 +0000 (0:00:02.032) 0:00:04.602 ******* 2026-02-08 04:45:16.750394 | orchestrator | ok: [testbed-node-1] 2026-02-08 04:45:16.750404 | orchestrator | ok: [testbed-node-3] 2026-02-08 04:45:16.750413 | orchestrator | ok: [testbed-node-2] 2026-02-08 04:45:16.750423 | orchestrator | ok: [testbed-node-4] 2026-02-08 04:45:16.750432 | orchestrator | ok: [testbed-node-0] 2026-02-08 04:45:16.750442 | orchestrator | ok: [testbed-node-5] 2026-02-08 04:45:16.750452 | orchestrator | ok: [testbed-manager] 2026-02-08 04:45:16.750461 | orchestrator | 2026-02-08 04:45:16.750471 | orchestrator | TASK [osism.services.netdata : Add repository gpg key] ************************* 2026-02-08 04:45:16.750481 | orchestrator | Sunday 08 February 2026 04:43:54 +0000 (0:00:02.380) 0:00:06.983 ******* 
2026-02-08 04:45:16.750491 | orchestrator | changed: [testbed-manager] 2026-02-08 04:45:16.750501 | orchestrator | changed: [testbed-node-0] 2026-02-08 04:45:16.750510 | orchestrator | changed: [testbed-node-1] 2026-02-08 04:45:16.750520 | orchestrator | changed: [testbed-node-2] 2026-02-08 04:45:16.750530 | orchestrator | changed: [testbed-node-3] 2026-02-08 04:45:16.750539 | orchestrator | changed: [testbed-node-4] 2026-02-08 04:45:16.750549 | orchestrator | changed: [testbed-node-5] 2026-02-08 04:45:16.750559 | orchestrator | 2026-02-08 04:45:16.750569 | orchestrator | TASK [osism.services.netdata : Add repository] ********************************* 2026-02-08 04:45:16.750578 | orchestrator | Sunday 08 February 2026 04:43:56 +0000 (0:00:01.556) 0:00:08.539 ******* 2026-02-08 04:45:16.750588 | orchestrator | changed: [testbed-node-3] 2026-02-08 04:45:16.750597 | orchestrator | changed: [testbed-node-4] 2026-02-08 04:45:16.750634 | orchestrator | changed: [testbed-node-5] 2026-02-08 04:45:16.750645 | orchestrator | changed: [testbed-manager] 2026-02-08 04:45:16.750663 | orchestrator | changed: [testbed-node-1] 2026-02-08 04:45:16.750673 | orchestrator | changed: [testbed-node-2] 2026-02-08 04:45:16.750682 | orchestrator | changed: [testbed-node-0] 2026-02-08 04:45:16.750692 | orchestrator | 2026-02-08 04:45:16.750701 | orchestrator | TASK [osism.services.netdata : Install package netdata] ************************ 2026-02-08 04:45:16.750711 | orchestrator | Sunday 08 February 2026 04:44:11 +0000 (0:00:15.113) 0:00:23.653 ******* 2026-02-08 04:45:16.750721 | orchestrator | changed: [testbed-node-4] 2026-02-08 04:45:16.750730 | orchestrator | changed: [testbed-node-3] 2026-02-08 04:45:16.750755 | orchestrator | changed: [testbed-node-5] 2026-02-08 04:45:16.750765 | orchestrator | changed: [testbed-manager] 2026-02-08 04:45:16.750775 | orchestrator | changed: [testbed-node-1] 2026-02-08 04:45:16.750784 | orchestrator | changed: [testbed-node-2] 2026-02-08 
04:45:16.750794 | orchestrator | changed: [testbed-node-0] 2026-02-08 04:45:16.750803 | orchestrator | 2026-02-08 04:45:16.750813 | orchestrator | TASK [osism.services.netdata : Include config tasks] *************************** 2026-02-08 04:45:16.750823 | orchestrator | Sunday 08 February 2026 04:44:50 +0000 (0:00:38.518) 0:01:02.171 ******* 2026-02-08 04:45:16.750833 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/config.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-02-08 04:45:16.750845 | orchestrator | 2026-02-08 04:45:16.750855 | orchestrator | TASK [osism.services.netdata : Copy configuration files] *********************** 2026-02-08 04:45:16.750864 | orchestrator | Sunday 08 February 2026 04:44:51 +0000 (0:00:01.697) 0:01:03.869 ******* 2026-02-08 04:45:16.750874 | orchestrator | changed: [testbed-manager] => (item=netdata.conf) 2026-02-08 04:45:16.750885 | orchestrator | changed: [testbed-node-2] => (item=netdata.conf) 2026-02-08 04:45:16.750894 | orchestrator | changed: [testbed-node-0] => (item=netdata.conf) 2026-02-08 04:45:16.750904 | orchestrator | changed: [testbed-node-1] => (item=netdata.conf) 2026-02-08 04:45:16.750943 | orchestrator | changed: [testbed-node-3] => (item=netdata.conf) 2026-02-08 04:45:16.750961 | orchestrator | changed: [testbed-node-4] => (item=netdata.conf) 2026-02-08 04:45:16.750977 | orchestrator | changed: [testbed-node-5] => (item=netdata.conf) 2026-02-08 04:45:16.750992 | orchestrator | changed: [testbed-manager] => (item=stream.conf) 2026-02-08 04:45:16.751008 | orchestrator | changed: [testbed-node-2] => (item=stream.conf) 2026-02-08 04:45:16.751024 | orchestrator | changed: [testbed-node-1] => (item=stream.conf) 2026-02-08 04:45:16.751038 | orchestrator | changed: [testbed-node-0] => (item=stream.conf) 2026-02-08 04:45:16.751055 | orchestrator | changed: [testbed-node-3] => 
(item=stream.conf) 2026-02-08 04:45:16.751071 | orchestrator | changed: [testbed-node-4] => (item=stream.conf) 2026-02-08 04:45:16.751087 | orchestrator | changed: [testbed-node-5] => (item=stream.conf) 2026-02-08 04:45:16.751104 | orchestrator | 2026-02-08 04:45:16.751121 | orchestrator | TASK [osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status] *** 2026-02-08 04:45:16.751138 | orchestrator | Sunday 08 February 2026 04:44:55 +0000 (0:00:03.805) 0:01:07.675 ******* 2026-02-08 04:45:16.751153 | orchestrator | ok: [testbed-manager] 2026-02-08 04:45:16.751164 | orchestrator | ok: [testbed-node-0] 2026-02-08 04:45:16.751173 | orchestrator | ok: [testbed-node-1] 2026-02-08 04:45:16.751187 | orchestrator | ok: [testbed-node-2] 2026-02-08 04:45:16.751204 | orchestrator | ok: [testbed-node-3] 2026-02-08 04:45:16.751277 | orchestrator | ok: [testbed-node-4] 2026-02-08 04:45:16.751292 | orchestrator | ok: [testbed-node-5] 2026-02-08 04:45:16.751306 | orchestrator | 2026-02-08 04:45:16.751321 | orchestrator | TASK [osism.services.netdata : Opt out from anonymous statistics] ************** 2026-02-08 04:45:16.751336 | orchestrator | Sunday 08 February 2026 04:44:56 +0000 (0:00:01.297) 0:01:08.972 ******* 2026-02-08 04:45:16.751351 | orchestrator | changed: [testbed-node-0] 2026-02-08 04:45:16.751366 | orchestrator | changed: [testbed-node-1] 2026-02-08 04:45:16.751381 | orchestrator | changed: [testbed-manager] 2026-02-08 04:45:16.751410 | orchestrator | changed: [testbed-node-2] 2026-02-08 04:45:16.751426 | orchestrator | changed: [testbed-node-3] 2026-02-08 04:45:16.751442 | orchestrator | changed: [testbed-node-4] 2026-02-08 04:45:16.751459 | orchestrator | changed: [testbed-node-5] 2026-02-08 04:45:16.751476 | orchestrator | 2026-02-08 04:45:16.751492 | orchestrator | TASK [osism.services.netdata : Add netdata user to docker group] *************** 2026-02-08 04:45:16.751509 | orchestrator | Sunday 08 February 2026 04:44:58 +0000 
(0:00:01.314) 0:01:10.287 ******* 2026-02-08 04:45:16.751522 | orchestrator | ok: [testbed-node-1] 2026-02-08 04:45:16.751539 | orchestrator | ok: [testbed-node-0] 2026-02-08 04:45:16.751555 | orchestrator | ok: [testbed-manager] 2026-02-08 04:45:16.751573 | orchestrator | ok: [testbed-node-2] 2026-02-08 04:45:16.751588 | orchestrator | ok: [testbed-node-3] 2026-02-08 04:45:16.751601 | orchestrator | ok: [testbed-node-4] 2026-02-08 04:45:16.751646 | orchestrator | ok: [testbed-node-5] 2026-02-08 04:45:16.751663 | orchestrator | 2026-02-08 04:45:16.751680 | orchestrator | TASK [osism.services.netdata : Manage service netdata] ************************* 2026-02-08 04:45:16.751699 | orchestrator | Sunday 08 February 2026 04:44:59 +0000 (0:00:01.196) 0:01:11.483 ******* 2026-02-08 04:45:16.751717 | orchestrator | ok: [testbed-node-1] 2026-02-08 04:45:16.751728 | orchestrator | ok: [testbed-manager] 2026-02-08 04:45:16.751737 | orchestrator | ok: [testbed-node-0] 2026-02-08 04:45:16.751747 | orchestrator | ok: [testbed-node-2] 2026-02-08 04:45:16.751756 | orchestrator | ok: [testbed-node-3] 2026-02-08 04:45:16.751765 | orchestrator | ok: [testbed-node-4] 2026-02-08 04:45:16.751775 | orchestrator | ok: [testbed-node-5] 2026-02-08 04:45:16.751784 | orchestrator | 2026-02-08 04:45:16.751794 | orchestrator | TASK [osism.services.netdata : Include host type specific tasks] *************** 2026-02-08 04:45:16.751803 | orchestrator | Sunday 08 February 2026 04:45:01 +0000 (0:00:01.726) 0:01:13.210 ******* 2026-02-08 04:45:16.751813 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/server.yml for testbed-manager 2026-02-08 04:45:16.751825 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/client.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-02-08 04:45:16.751836 | orchestrator | 2026-02-08 
04:45:16.751846 | orchestrator | TASK [osism.services.netdata : Set sysctl vm.max_map_count parameter] ********** 2026-02-08 04:45:16.751855 | orchestrator | Sunday 08 February 2026 04:45:02 +0000 (0:00:01.540) 0:01:14.750 ******* 2026-02-08 04:45:16.751864 | orchestrator | changed: [testbed-manager] 2026-02-08 04:45:16.751874 | orchestrator | 2026-02-08 04:45:16.751883 | orchestrator | RUNNING HANDLER [osism.services.netdata : Restart service netdata] ************* 2026-02-08 04:45:16.751893 | orchestrator | Sunday 08 February 2026 04:45:04 +0000 (0:00:02.242) 0:01:16.992 ******* 2026-02-08 04:45:16.751902 | orchestrator | changed: [testbed-manager] 2026-02-08 04:45:16.751921 | orchestrator | changed: [testbed-node-4] 2026-02-08 04:45:16.751931 | orchestrator | changed: [testbed-node-3] 2026-02-08 04:45:16.751941 | orchestrator | changed: [testbed-node-5] 2026-02-08 04:45:16.751950 | orchestrator | changed: [testbed-node-1] 2026-02-08 04:45:16.751959 | orchestrator | changed: [testbed-node-0] 2026-02-08 04:45:16.751969 | orchestrator | changed: [testbed-node-2] 2026-02-08 04:45:16.751978 | orchestrator | 2026-02-08 04:45:16.751988 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-08 04:45:16.751997 | orchestrator | testbed-manager : ok=16  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-08 04:45:16.752008 | orchestrator | testbed-node-0 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-08 04:45:16.752018 | orchestrator | testbed-node-1 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-08 04:45:16.752027 | orchestrator | testbed-node-2 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-08 04:45:16.752058 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-08 04:45:17.221039 | orchestrator | testbed-node-4 : ok=15  changed=7  
unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-08 04:45:17.221195 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-08 04:45:17.221218 | orchestrator | 2026-02-08 04:45:17.221236 | orchestrator | 2026-02-08 04:45:17.221253 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-08 04:45:17.221274 | orchestrator | Sunday 08 February 2026 04:45:16 +0000 (0:00:11.787) 0:01:28.779 ******* 2026-02-08 04:45:17.221293 | orchestrator | =============================================================================== 2026-02-08 04:45:17.221313 | orchestrator | osism.services.netdata : Install package netdata ----------------------- 38.52s 2026-02-08 04:45:17.221334 | orchestrator | osism.services.netdata : Add repository -------------------------------- 15.11s 2026-02-08 04:45:17.221353 | orchestrator | osism.services.netdata : Restart service netdata ----------------------- 11.79s 2026-02-08 04:45:17.221373 | orchestrator | osism.services.netdata : Copy configuration files ----------------------- 3.81s 2026-02-08 04:45:17.221384 | orchestrator | osism.services.netdata : Install apt-transport-https package ------------ 2.38s 2026-02-08 04:45:17.221395 | orchestrator | osism.services.netdata : Set sysctl vm.max_map_count parameter ---------- 2.24s 2026-02-08 04:45:17.221406 | orchestrator | osism.services.netdata : Remove old architecture-dependent repository --- 2.03s 2026-02-08 04:45:17.221417 | orchestrator | osism.services.netdata : Manage service netdata ------------------------- 1.73s 2026-02-08 04:45:17.221428 | orchestrator | osism.services.netdata : Include config tasks --------------------------- 1.70s 2026-02-08 04:45:17.221439 | orchestrator | osism.services.netdata : Add repository gpg key ------------------------- 1.56s 2026-02-08 04:45:17.221450 | orchestrator | osism.services.netdata : Include host type specific tasks --------------- 
1.54s 2026-02-08 04:45:17.221460 | orchestrator | osism.services.netdata : Include distribution specific install tasks ---- 1.40s 2026-02-08 04:45:17.221471 | orchestrator | osism.services.netdata : Opt out from anonymous statistics -------------- 1.31s 2026-02-08 04:45:17.221482 | orchestrator | osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status --- 1.30s 2026-02-08 04:45:17.221494 | orchestrator | osism.services.netdata : Add netdata user to docker group --------------- 1.20s 2026-02-08 04:45:17.221505 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.91s 2026-02-08 04:45:19.766866 | orchestrator | 2026-02-08 04:45:19 | INFO  | Task b1862f16-db43-4d42-8b36-705d37e854ca (prometheus) was prepared for execution. 2026-02-08 04:45:19.766959 | orchestrator | 2026-02-08 04:45:19 | INFO  | It takes a moment until task b1862f16-db43-4d42-8b36-705d37e854ca (prometheus) has been started and output is visible here. 2026-02-08 04:45:29.853556 | orchestrator | 2026-02-08 04:45:29.853725 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-08 04:45:29.853745 | orchestrator | 2026-02-08 04:45:29.853758 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-08 04:45:29.853770 | orchestrator | Sunday 08 February 2026 04:45:24 +0000 (0:00:00.310) 0:00:00.310 ******* 2026-02-08 04:45:29.853782 | orchestrator | ok: [testbed-manager] 2026-02-08 04:45:29.853794 | orchestrator | ok: [testbed-node-0] 2026-02-08 04:45:29.853805 | orchestrator | ok: [testbed-node-1] 2026-02-08 04:45:29.853816 | orchestrator | ok: [testbed-node-2] 2026-02-08 04:45:29.853827 | orchestrator | ok: [testbed-node-3] 2026-02-08 04:45:29.853837 | orchestrator | ok: [testbed-node-4] 2026-02-08 04:45:29.853848 | orchestrator | ok: [testbed-node-5] 2026-02-08 04:45:29.853859 | orchestrator | 2026-02-08 04:45:29.853870 | orchestrator | 
TASK [Group hosts based on enabled services] ***********************************
2026-02-08 04:45:29.853909 | orchestrator | Sunday 08 February 2026 04:45:25 +0000 (0:00:00.920) 0:00:01.231 *******
2026-02-08 04:45:29.853922 | orchestrator | ok: [testbed-manager] => (item=enable_prometheus_True)
2026-02-08 04:45:29.853934 | orchestrator | ok: [testbed-node-0] => (item=enable_prometheus_True)
2026-02-08 04:45:29.853945 | orchestrator | ok: [testbed-node-1] => (item=enable_prometheus_True)
2026-02-08 04:45:29.853970 | orchestrator | ok: [testbed-node-2] => (item=enable_prometheus_True)
2026-02-08 04:45:29.853981 | orchestrator | ok: [testbed-node-3] => (item=enable_prometheus_True)
2026-02-08 04:45:29.853992 | orchestrator | ok: [testbed-node-4] => (item=enable_prometheus_True)
2026-02-08 04:45:29.854003 | orchestrator | ok: [testbed-node-5] => (item=enable_prometheus_True)
2026-02-08 04:45:29.854014 | orchestrator |
2026-02-08 04:45:29.854102 | orchestrator | PLAY [Apply role prometheus] ***************************************************
2026-02-08 04:45:29.854118 | orchestrator |
2026-02-08 04:45:29.854131 | orchestrator | TASK [prometheus : include_tasks] **********************************************
2026-02-08 04:45:29.854143 | orchestrator | Sunday 08 February 2026 04:45:26 +0000 (0:00:01.002) 0:00:02.234 *******
2026-02-08 04:45:29.854156 | orchestrator | included: /ansible/roles/prometheus/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-02-08 04:45:29.854171 | orchestrator |
2026-02-08 04:45:29.854184 | orchestrator | TASK [prometheus : Ensuring config directories exist] **************************
2026-02-08 04:45:29.854195 | orchestrator | Sunday 08 February 2026 04:45:27 +0000 (0:00:01.499) 0:00:03.733 *******
2026-02-08 04:45:29.854214 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-02-08 04:45:29.854234 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-02-08 04:45:29.854255 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2026-02-08 04:45:29.854269 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-02-08 04:45:29.854318 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-02-08 04:45:29.854342 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-08 04:45:29.854356 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-08 04:45:29.854367 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-02-08 04:45:29.854378 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-02-08 04:45:29.854390 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-08 04:45:29.854401 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-02-08 04:45:29.854424 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-02-08 04:45:30.692169 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-08 04:45:30.692265 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-08 04:45:30.692273 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-02-08 04:45:30.692279 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-02-08 04:45:30.692284 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-08 04:45:30.692289 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-02-08 04:45:30.692292 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-02-08 04:45:30.692323 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2026-02-08 04:45:30.692332 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-02-08 04:45:30.692336 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-02-08 04:45:30.692340 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-02-08 04:45:30.692345 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130',
'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-08 04:45:30.692349 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-08 04:45:30.692357 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-08 04:45:30.692366 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-08 
04:45:36.080714 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-02-08 04:45:36.080796 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-08 04:45:36.080807 | orchestrator | 2026-02-08 04:45:36.080816 | orchestrator | TASK [prometheus : include_tasks] ********************************************** 2026-02-08 04:45:36.080824 | orchestrator | Sunday 08 February 2026 04:45:30 +0000 (0:00:03.064) 0:00:06.797 ******* 2026-02-08 04:45:36.080830 | orchestrator | included: /ansible/roles/prometheus/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-02-08 04:45:36.080841 | orchestrator | 2026-02-08 04:45:36.080847 | orchestrator | TASK [service-cert-copy : prometheus | Copying over extra CA certificates] ***** 2026-02-08 04:45:36.080854 | orchestrator | Sunday 08 February 2026 04:45:32 +0000 (0:00:01.781) 0:00:08.579 ******* 2026-02-08 04:45:36.080860 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': 
{'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-08 04:45:36.080868 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-08 04:45:36.080876 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-08 04:45:36.080906 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-02-08 04:45:36.080923 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-08 04:45:36.080935 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-08 04:45:36.080941 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-08 04:45:36.080948 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-08 04:45:36.080955 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-08 04:45:36.080961 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-08 04:45:36.080975 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 
'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-08 04:45:36.080982 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-08 04:45:36.080995 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-08 04:45:37.867040 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', 
'/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-08 04:45:37.867111 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-08 04:45:37.867118 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-08 04:45:37.867123 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-08 04:45:37.867148 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-08 04:45:37.867154 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-02-08 04:45:37.867159 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-02-08 04:45:37.867177 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 
2026-02-08 04:45:37.867183 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-02-08 04:45:37.867188 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-08 04:45:37.867196 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': 
['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-08 04:45:37.867200 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-08 04:45:37.867204 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-08 04:45:37.867208 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-08 04:45:37.867219 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-08 04:45:38.914155 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-08 04:45:38.914227 | orchestrator | 2026-02-08 04:45:38.914234 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS certificate] *** 2026-02-08 04:45:38.914239 | orchestrator | Sunday 08 February 2026 04:45:37 +0000 (0:00:05.380) 0:00:13.959 ******* 2026-02-08 04:45:38.914247 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': 
{'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2026-02-08 04:45:38.914269 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-08 04:45:38.914274 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-08 04:45:38.914282 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2026-02-08 04:45:38.914306 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-08 04:45:38.914310 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-08 04:45:38.914316 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': 
['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-08 04:45:38.914324 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-08 04:45:38.914329 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-08 04:45:38.914333 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-08 04:45:38.914337 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-08 04:45:38.914341 | orchestrator | skipping: [testbed-manager] 2026-02-08 04:45:38.914346 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-08 04:45:38.914354 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-08 04:45:39.625988 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': 
['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-08 04:45:39.626227 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-08 04:45:39.626253 | orchestrator | skipping: [testbed-node-0] 2026-02-08 04:45:39.626268 | orchestrator | skipping: [testbed-node-1] 2026-02-08 04:45:39.626282 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-08 04:45:39.626297 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-08 04:45:39.626311 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-08 04:45:39.626326 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-08 04:45:39.626346 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-08 04:45:39.626361 | orchestrator | skipping: [testbed-node-2] 2026-02-08 04:45:39.626397 | orchestrator | skipping: [testbed-node-3] => 
(item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-08 04:45:39.626423 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-08 04:45:39.626439 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-02-08 04:45:39.626453 | orchestrator | skipping: [testbed-node-3] 2026-02-08 04:45:39.626469 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-08 04:45:39.626484 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-08 04:45:39.626498 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-02-08 04:45:39.626512 | orchestrator | skipping: [testbed-node-4] 2026-02-08 04:45:39.626527 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-08 04:45:39.626559 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-08 04:45:40.718343 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-02-08 04:45:40.718443 | orchestrator | skipping: [testbed-node-5] 2026-02-08 04:45:40.718459 | orchestrator | 2026-02-08 04:45:40.718470 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS key] *** 2026-02-08 04:45:40.718480 | orchestrator | Sunday 08 February 2026 04:45:39 +0000 (0:00:01.765) 0:00:15.725 ******* 2026-02-08 04:45:40.718491 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2026-02-08 04:45:40.718502 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-08 04:45:40.718512 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-08 04:45:40.718627 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': 
['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2026-02-08 04:45:40.718692 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-08 04:45:40.718704 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-08 04:45:40.718713 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 
'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-08 04:45:40.718722 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-08 04:45:40.718731 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-08 04:45:40.718740 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-08 04:45:40.718749 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-08 04:45:40.718764 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-08 04:45:40.718786 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-08 04:45:42.388031 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-08 04:45:42.388124 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-08 04:45:42.388137 | orchestrator | skipping: [testbed-manager] 2026-02-08 04:45:42.388148 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-08 04:45:42.388158 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-08 04:45:42.388167 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-08 04:45:42.388178 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-08 04:45:42.388229 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-08 04:45:42.388239 | orchestrator | skipping: [testbed-node-0] 2026-02-08 04:45:42.388247 | orchestrator | skipping: [testbed-node-1] 2026-02-08 
04:45:42.388254 | orchestrator | skipping: [testbed-node-2] 2026-02-08 04:45:42.388273 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-08 04:45:42.388281 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-08 04:45:42.388311 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-02-08 04:45:42.388320 | orchestrator | skipping: [testbed-node-3] 2026-02-08 04:45:42.388329 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 
'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-08 04:45:42.388337 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-08 04:45:42.388345 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-02-08 04:45:42.388356 | orchestrator | skipping: [testbed-node-4] 2026-02-08 04:45:42.388364 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': 
['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-08 04:45:42.388376 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-08 04:45:46.305777 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-02-08 04:45:46.305909 | orchestrator | skipping: [testbed-node-5] 2026-02-08 04:45:46.305929 | orchestrator | 2026-02-08 04:45:46.305942 | orchestrator | TASK [prometheus : Copying over config.json files] ***************************** 2026-02-08 04:45:46.306684 | orchestrator | Sunday 08 February 2026 04:45:42 +0000 (0:00:02.756) 0:00:18.482 ******* 2026-02-08 04:45:46.306717 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-02-08 04:45:46.306732 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-08 04:45:46.306744 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-08 04:45:46.306791 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-08 04:45:46.306834 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-08 04:45:46.306883 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-08 04:45:46.306903 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 
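Each `(item=...)` in the task output above is one entry of a service map that the role iterates over. Rendered as YAML for readability (a sketch only: the variable name `prometheus_services` is an assumption based on kolla-ansible conventions; the values are copied from the log records above):

```yaml
# Hedged reconstruction of one loop item from the log; the top-level
# variable name is an assumption, the field values are from the log.
prometheus_services:
  prometheus-node-exporter:
    container_name: prometheus_node_exporter
    group: prometheus-node-exporter
    enabled: true
    image: "registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130"
    pid_mode: host
    volumes:
      - "/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro"
      - "/etc/localtime:/etc/localtime:ro"
      - "/etc/timezone:/etc/timezone:ro"
      - "kolla_logs:/var/log/kolla/"
      - "/:/host:ro,rslave"
    dimensions: {}
```

The other items in the loop (`prometheus-cadvisor`, `prometheus-mysqld-exporter`, `prometheus-libvirt-exporter`, ...) follow the same shape, differing only in image, volumes, and an optional `haproxy` sub-map for load-balanced services.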
2026-02-08 04:45:46.306921 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-08 04:45:46.306939 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-08 04:45:46.306956 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-08 04:45:46.306976 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-08 04:45:46.307012 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-08 04:45:46.307042 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-08 04:45:46.307077 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 
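The per-host "skipping" vs. "changed" pattern in the tasks above comes from per-item conditionals: an item is only acted on where the service is enabled and the host belongs to the service's group. A hedged sketch of such a task (the template paths and exact conditions are illustrative, not taken verbatim from the kolla-ansible role):

```yaml
# Illustrative sketch: iterate the service map and act only on hosts
# in the matching inventory group; everything else shows "skipping".
- name: prometheus | Copying over config.json files
  template:
    src: "{{ item.key }}.json.j2"
    dest: "/etc/kolla/{{ item.key }}/config.json"
  with_dict: "{{ prometheus_services }}"
  when:
    - item.value.enabled | bool
    - item.value.group in group_names
```

This explains, for example, why `testbed-node-3` through `testbed-node-5` (compute nodes) skip the mysqld and memcached exporter items but process the libvirt exporter, while the manager skips the backend-TLS copy tasks entirely when internal TLS is not enabled.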
2026-02-08 04:45:48.656113 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-08 04:45:48.656244 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-08 04:45:48.656270 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-08 04:45:48.656290 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-08 04:45:48.656340 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-02-08 04:45:48.656361 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-02-08 04:45:48.656395 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-02-08 04:45:48.656436 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-08 04:45:48.656458 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-02-08 04:45:48.656511 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-08 04:45:48.656543 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-08 04:45:48.656589 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-08 04:45:48.656615 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-08 04:45:48.656635 | orchestrator | changed: [testbed-node-1] => 
(item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-08 04:45:48.656665 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-08 04:45:52.871935 | orchestrator | 2026-02-08 04:45:52.872054 | orchestrator | TASK [prometheus : Find custom prometheus alert rules files] ******************* 2026-02-08 04:45:52.872071 | orchestrator | Sunday 08 February 2026 04:45:48 +0000 (0:00:06.271) 0:00:24.754 ******* 2026-02-08 04:45:52.872082 | orchestrator | ok: [testbed-manager -> localhost] 2026-02-08 04:45:52.872095 | orchestrator | 2026-02-08 04:45:52.872107 | orchestrator | TASK [prometheus : Copying over custom prometheus alert rules files] *********** 2026-02-08 04:45:52.872118 | orchestrator | Sunday 08 February 2026 04:45:49 +0000 (0:00:00.912) 0:00:25.666 ******* 2026-02-08 04:45:52.872131 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 
996, 'inode': 1097985, 'dev': 172, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770518938.2350793, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-08 04:45:52.872175 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1097985, 'dev': 172, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770518938.2350793, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-08 04:45:52.872188 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1097985, 'dev': 172, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770518938.2350793, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-08 04:45:52.872200 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1097985, 'dev': 172, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770518938.2350793, 'gr_name': 
'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-08 04:45:52.872226 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1098023, 'dev': 172, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770518938.2450964, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-08 04:45:52.872239 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1097985, 'dev': 172, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770518938.2350793, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-08 04:45:52.872270 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1098023, 'dev': 172, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770518938.2450964, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 
'xoth': False, 'isuid': False, 'isgid': False})  2026-02-08 04:45:52.872282 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1097985, 'dev': 172, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770518938.2350793, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-08 04:45:52.872301 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1098023, 'dev': 172, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770518938.2450964, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-08 04:45:52.872312 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1097985, 'dev': 172, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770518938.2350793, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-08 04:45:52.872323 | orchestrator | skipping: [testbed-node-2] => (item={'path': 
'/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1097975, 'dev': 172, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770518938.2347116, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-08 04:45:52.872340 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1098023, 'dev': 172, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770518938.2450964, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-08 04:45:52.872352 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1097975, 'dev': 172, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770518938.2347116, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-08 04:45:52.872371 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': 
False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1097975, 'dev': 172, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770518938.2347116, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-08 04:45:54.676685 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1098023, 'dev': 172, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770518938.2450964, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-08 04:45:54.676809 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1098023, 'dev': 172, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770518938.2450964, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-08 04:45:54.676829 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1098010, 'dev': 172, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 
1770518938.2435524, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-08 04:45:54.676842 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1097975, 'dev': 172, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770518938.2347116, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-08 04:45:54.676870 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1097975, 'dev': 172, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770518938.2347116, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-08 04:45:54.676884 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1098010, 'dev': 172, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770518938.2435524, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 
'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-08 04:45:54.676896 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1098023, 'dev': 172, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770518938.2450964, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-08 04:45:54.676938 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1098010, 'dev': 172, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770518938.2435524, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-08 04:45:54.676953 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1097975, 'dev': 172, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770518938.2347116, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-08 04:45:54.676966 | orchestrator | skipping: [testbed-node-4] => (item={'path': 
'/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1098010, 'dev': 172, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770518938.2435524, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-08 04:45:54.676978 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1097968, 'dev': 172, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770518938.2326145, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-08 04:45:54.676996 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1098010, 'dev': 172, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770518938.2435524, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-08 04:45:54.677008 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 
'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1097968, 'dev': 172, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770518938.2326145, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-08 04:45:54.677020 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1097968, 'dev': 172, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770518938.2326145, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-08 04:45:54.677061 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1098010, 'dev': 172, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770518938.2435524, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-08 04:45:56.724502 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1097968, 'dev': 172, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 
1770518938.2326145, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-08 04:45:56.724668 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1097988, 'dev': 172, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770518938.2366054, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-08 04:45:56.724683 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1097968, 'dev': 172, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770518938.2326145, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-08 04:45:56.724710 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1097968, 'dev': 172, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770518938.2326145, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 
'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-08 04:45:56.724720 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1097988, 'dev': 172, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770518938.2366054, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-08 04:45:56.724729 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1097988, 'dev': 172, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770518938.2366054, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-08 04:45:56.724756 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1097975, 'dev': 172, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770518938.2347116, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-08 04:45:56.724783 | orchestrator | skipping: [testbed-node-4] => (item={'path': 
'/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1097988, 'dev': 172, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770518938.2366054, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-08 04:45:56.724793 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1097988, 'dev': 172, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770518938.2366054, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-08 04:45:56.724802 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1098005, 'dev': 172, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770518938.2420793, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-08 04:45:56.724816 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 
'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1098005, 'dev': 172, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770518938.2420793, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-08 04:45:56.724825 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1097988, 'dev': 172, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770518938.2366054, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-08 04:45:56.724834 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1098005, 'dev': 172, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770518938.2420793, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-08 04:45:56.724851 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1097992, 'dev': 172, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770518938.2374997, 'gr_name': 
'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-08 04:45:56.724867 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1098005, 'dev': 172, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770518938.2420793, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-08 04:45:58.748512 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1098010, 'dev': 172, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770518938.2435524, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-08 04:45:58.748667 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1098005, 'dev': 172, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770518938.2420793, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 
'isuid': False, 'isgid': False})  2026-02-08 04:45:58.748701 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1097992, 'dev': 172, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770518938.2374997, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-08 04:45:58.748714 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1097992, 'dev': 172, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770518938.2374997, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-08 04:45:58.748748 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1097982, 'dev': 172, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770518938.2350793, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-08 04:45:58.748760 | orchestrator | skipping: [testbed-node-1] => (item={'path': 
'/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1097992, 'dev': 172, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770518938.2374997, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-08 04:45:58.748775 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1098005, 'dev': 172, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770518938.2420793, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-08 04:45:58.748819 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1097992, 'dev': 172, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770518938.2374997, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-08 04:45:58.748839 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 
'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1097982, 'dev': 172, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770518938.2350793, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-08 04:45:58.748867 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1097982, 'dev': 172, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770518938.2350793, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-08 04:45:58.748888 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1097982, 'dev': 172, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770518938.2350793, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-08 04:45:58.748919 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1097992, 'dev': 172, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 
1770518938.2374997, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-08 04:45:58.748931 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1098020, 'dev': 172, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770518938.2446494, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-08 04:45:58.748942 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1097982, 'dev': 172, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770518938.2350793, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-08 04:45:58.748962 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1097982, 'dev': 172, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770518938.2350793, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 
'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-08 04:46:00.281352 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1098020, 'dev': 172, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770518938.2446494, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-08 04:46:00.281427 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1098020, 'dev': 172, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770518938.2446494, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-08 04:46:00.281437 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1098020, 'dev': 172, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770518938.2446494, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-08 04:46:00.281460 | orchestrator | skipping: [testbed-node-2] => 
(item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1098020, 'dev': 172, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770518938.2446494, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-08 04:46:00.281467 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1097968, 'dev': 172, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770518938.2326145, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-08 04:46:00.281473 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1098020, 'dev': 172, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770518938.2446494, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-08 04:46:00.281479 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': 
False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1097959, 'dev': 172, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770518938.2314725, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-08 04:46:00.281495 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1097959, 'dev': 172, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770518938.2314725, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-08 04:46:00.281505 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1097959, 'dev': 172, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770518938.2314725, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-08 04:46:00.281516 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1097959, 'dev': 172, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 
1764530892.0, 'ctime': 1770518938.2314725, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-08 04:46:00.281522 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1097959, 'dev': 172, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770518938.2314725, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-08 04:46:00.281528 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1098044, 'dev': 172, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770518938.2480793, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-08 04:46:00.281579 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1097959, 'dev': 172, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770518938.2314725, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 
'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-08 04:46:00.281592 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1098044, 'dev': 172, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770518938.2480793, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-08 04:46:00.281609 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1098044, 'dev': 172, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770518938.2480793, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-08 04:46:01.900902 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1098018, 'dev': 172, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770518938.2440794, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-08 04:46:01.901002 | orchestrator | skipping: 
[testbed-node-1] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1098044, 'dev': 172, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770518938.2480793, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-08 04:46:01.901021 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1098044, 'dev': 172, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770518938.2480793, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-08 04:46:01.901033 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1097988, 'dev': 172, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770518938.2366054, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-08 04:46:01.901045 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 
'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1098018, 'dev': 172, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770518938.2440794, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-08 04:46:01.901056 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1098044, 'dev': 172, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770518938.2480793, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-08 04:46:01.901068 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1098018, 'dev': 172, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770518938.2440794, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-08 04:46:01.901101 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1098018, 'dev': 172, 'nlink': 1, 'atime': 
1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770518938.2440794, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-08 04:46:01.901123 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1098018, 'dev': 172, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770518938.2440794, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-08 04:46:01.901136 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1097972, 'dev': 172, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770518938.2330792, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-08 04:46:01.901148 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1097972, 'dev': 172, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770518938.2330792, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': 
False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-08 04:46:01.901159 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1097972, 'dev': 172, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770518938.2330792, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-08 04:46:01.901166 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1098018, 'dev': 172, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770518938.2440794, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-08 04:46:01.901174 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1097972, 'dev': 172, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770518938.2330792, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-08 04:46:01.901200 | orchestrator | 
skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1097972, 'dev': 172, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770518938.2330792, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-08 04:46:03.507399 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1097963, 'dev': 172, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770518938.231934, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-08 04:46:03.507481 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1097963, 'dev': 172, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770518938.231934, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-08 04:46:03.507495 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 
'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1097963, 'dev': 172, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770518938.231934, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-08 04:46:03.507520 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1097972, 'dev': 172, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770518938.2330792, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-08 04:46:03.507546 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1098005, 'dev': 172, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770518938.2420793, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-08 04:46:03.507566 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1098004, 'dev': 172, 'nlink': 1, 'atime': 1764530892.0, 
'mtime': 1764530892.0, 'ctime': 1770518938.2415838, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-08 04:46:03.507606 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1097963, 'dev': 172, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770518938.231934, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-08 04:46:03.507633 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1097963, 'dev': 172, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770518938.231934, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-08 04:46:03.507644 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1098004, 'dev': 172, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770518938.2415838, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 
'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-08 04:46:03.507654 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1098004, 'dev': 172, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770518938.2415838, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-08 04:46:03.507664 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1097963, 'dev': 172, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770518938.231934, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-08 04:46:03.507675 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1097995, 'dev': 172, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770518938.2410793, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-08 04:46:03.507685 | orchestrator | skipping: 
[testbed-node-1] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1098004, 'dev': 172, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770518938.2415838, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-08 04:46:03.507706 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1097995, 'dev': 172, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770518938.2410793, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-08 04:46:03.507723 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1098004, 'dev': 172, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770518938.2415838, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-08 04:46:10.845098 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': 
False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1097995, 'dev': 172, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770518938.2410793, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-08 04:46:10.845195 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1098004, 'dev': 172, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770518938.2415838, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-08 04:46:10.845207 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1098041, 'dev': 172, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770518938.2480793, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-08 04:46:10.845216 | orchestrator | skipping: [testbed-node-2] 2026-02-08 04:46:10.845226 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1097995, 
'dev': 172, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770518938.2410793, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-08 04:46:10.845234 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1098041, 'dev': 172, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770518938.2480793, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-08 04:46:10.845261 | orchestrator | skipping: [testbed-node-0] 2026-02-08 04:46:10.845281 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1097995, 'dev': 172, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770518938.2410793, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-08 04:46:10.845303 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1097992, 'dev': 172, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 
1770518938.2374997, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-08 04:46:10.845311 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1097995, 'dev': 172, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770518938.2410793, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-08 04:46:10.845319 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1098041, 'dev': 172, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770518938.2480793, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-08 04:46:10.845326 | orchestrator | skipping: [testbed-node-4] 2026-02-08 04:46:10.845334 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1098041, 'dev': 172, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770518938.2480793, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': 
False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-08 04:46:10.845341 | orchestrator | skipping: [testbed-node-1] 2026-02-08 04:46:10.845349 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1098041, 'dev': 172, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770518938.2480793, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-08 04:46:10.845363 | orchestrator | skipping: [testbed-node-5] 2026-02-08 04:46:10.845371 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1098041, 'dev': 172, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770518938.2480793, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-08 04:46:10.845378 | orchestrator | skipping: [testbed-node-3] 2026-02-08 04:46:10.845389 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1097982, 'dev': 172, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770518938.2350793, 'gr_name': 
'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-08 04:46:10.845403 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1098020, 'dev': 172, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770518938.2446494, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-08 04:46:36.819401 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1097959, 'dev': 172, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770518938.2314725, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-08 04:46:36.819529 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1098044, 'dev': 172, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770518938.2480793, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 
'isuid': False, 'isgid': False}) 2026-02-08 04:46:36.819540 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1098018, 'dev': 172, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770518938.2440794, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-08 04:46:36.819571 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1097972, 'dev': 172, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770518938.2330792, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-08 04:46:36.819578 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1097963, 'dev': 172, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770518938.231934, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-08 04:46:36.819597 | orchestrator | changed: [testbed-manager] => (item={'path': 
'/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1098004, 'dev': 172, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770518938.2415838, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-08 04:46:36.819603 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1097995, 'dev': 172, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770518938.2410793, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-08 04:46:36.819623 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1098041, 'dev': 172, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770518938.2480793, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-08 04:46:36.819629 | orchestrator | 2026-02-08 04:46:36.819637 | orchestrator | TASK [prometheus : Find prometheus common config overrides] ******************** 2026-02-08 04:46:36.819645 | orchestrator | Sunday 08 February 2026 04:46:16 +0000 
(0:00:26.556) 0:00:52.222 ******* 2026-02-08 04:46:36.819651 | orchestrator | ok: [testbed-manager -> localhost] 2026-02-08 04:46:36.819658 | orchestrator | 2026-02-08 04:46:36.819665 | orchestrator | TASK [prometheus : Find prometheus host config overrides] ********************** 2026-02-08 04:46:36.819671 | orchestrator | Sunday 08 February 2026 04:46:16 +0000 (0:00:00.770) 0:00:52.993 ******* 2026-02-08 04:46:36.819677 | orchestrator | [WARNING]: Skipped 2026-02-08 04:46:36.819685 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-02-08 04:46:36.819691 | orchestrator | manager/prometheus.yml.d' path due to this access issue: 2026-02-08 04:46:36.819697 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-02-08 04:46:36.819703 | orchestrator | manager/prometheus.yml.d' is not a directory 2026-02-08 04:46:36.819716 | orchestrator | [WARNING]: Skipped 2026-02-08 04:46:36.819722 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-02-08 04:46:36.819728 | orchestrator | node-1/prometheus.yml.d' path due to this access issue: 2026-02-08 04:46:36.819734 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-02-08 04:46:36.819740 | orchestrator | node-1/prometheus.yml.d' is not a directory 2026-02-08 04:46:36.819745 | orchestrator | [WARNING]: Skipped 2026-02-08 04:46:36.819751 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-02-08 04:46:36.819757 | orchestrator | node-0/prometheus.yml.d' path due to this access issue: 2026-02-08 04:46:36.819763 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-02-08 04:46:36.819768 | orchestrator | node-0/prometheus.yml.d' is not a directory 2026-02-08 04:46:36.819774 | orchestrator | [WARNING]: Skipped 2026-02-08 04:46:36.819780 | orchestrator | 
'/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-02-08 04:46:36.819786 | orchestrator | node-3/prometheus.yml.d' path due to this access issue: 2026-02-08 04:46:36.819792 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-02-08 04:46:36.819798 | orchestrator | node-3/prometheus.yml.d' is not a directory 2026-02-08 04:46:36.819803 | orchestrator | [WARNING]: Skipped 2026-02-08 04:46:36.819809 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-02-08 04:46:36.819815 | orchestrator | node-2/prometheus.yml.d' path due to this access issue: 2026-02-08 04:46:36.819821 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-02-08 04:46:36.819827 | orchestrator | node-2/prometheus.yml.d' is not a directory 2026-02-08 04:46:36.819832 | orchestrator | [WARNING]: Skipped 2026-02-08 04:46:36.819838 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-02-08 04:46:36.819844 | orchestrator | node-4/prometheus.yml.d' path due to this access issue: 2026-02-08 04:46:36.819850 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-02-08 04:46:36.819855 | orchestrator | node-4/prometheus.yml.d' is not a directory 2026-02-08 04:46:36.819861 | orchestrator | [WARNING]: Skipped 2026-02-08 04:46:36.819867 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-02-08 04:46:36.819873 | orchestrator | node-5/prometheus.yml.d' path due to this access issue: 2026-02-08 04:46:36.819878 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-02-08 04:46:36.819884 | orchestrator | node-5/prometheus.yml.d' is not a directory 2026-02-08 04:46:36.819890 | orchestrator | ok: [testbed-manager -> localhost] 2026-02-08 04:46:36.819896 | orchestrator | ok: [testbed-node-1 -> localhost] 
2026-02-08 04:46:36.819902 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-02-08 04:46:36.819911 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-02-08 04:46:36.819918 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-02-08 04:46:36.819924 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-02-08 04:46:36.819929 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-02-08 04:46:36.819935 | orchestrator | 2026-02-08 04:46:36.819941 | orchestrator | TASK [prometheus : Copying over prometheus config file] ************************ 2026-02-08 04:46:36.819947 | orchestrator | Sunday 08 February 2026 04:46:18 +0000 (0:00:01.937) 0:00:54.930 ******* 2026-02-08 04:46:36.819953 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-02-08 04:46:36.819960 | orchestrator | skipping: [testbed-node-0] 2026-02-08 04:46:36.819966 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-02-08 04:46:36.819973 | orchestrator | skipping: [testbed-node-1] 2026-02-08 04:46:36.819979 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-02-08 04:46:36.819990 | orchestrator | skipping: [testbed-node-3] 2026-02-08 04:46:36.820000 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-02-08 04:46:54.395683 | orchestrator | skipping: [testbed-node-2] 2026-02-08 04:46:54.395778 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-02-08 04:46:54.395790 | orchestrator | skipping: [testbed-node-4] 2026-02-08 04:46:54.395799 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-02-08 04:46:54.395806 | orchestrator | skipping: [testbed-node-5] 2026-02-08 04:46:54.395814 | orchestrator | changed: [testbed-manager] => 
(item=/ansible/roles/prometheus/templates/prometheus.yml.j2) 2026-02-08 04:46:54.395821 | orchestrator | 2026-02-08 04:46:54.395830 | orchestrator | TASK [prometheus : Copying over prometheus web config file] ******************** 2026-02-08 04:46:54.395838 | orchestrator | Sunday 08 February 2026 04:46:36 +0000 (0:00:17.987) 0:01:12.918 ******* 2026-02-08 04:46:54.395845 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-02-08 04:46:54.395852 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-02-08 04:46:54.395860 | orchestrator | skipping: [testbed-node-0] 2026-02-08 04:46:54.395867 | orchestrator | skipping: [testbed-node-1] 2026-02-08 04:46:54.395874 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-02-08 04:46:54.395881 | orchestrator | skipping: [testbed-node-2] 2026-02-08 04:46:54.395888 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-02-08 04:46:54.395895 | orchestrator | skipping: [testbed-node-3] 2026-02-08 04:46:54.395902 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-02-08 04:46:54.395910 | orchestrator | skipping: [testbed-node-4] 2026-02-08 04:46:54.395917 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-02-08 04:46:54.395925 | orchestrator | skipping: [testbed-node-5] 2026-02-08 04:46:54.395932 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2) 2026-02-08 04:46:54.395939 | orchestrator | 2026-02-08 04:46:54.395946 | orchestrator | TASK [prometheus : Copying over prometheus alertmanager config file] *********** 2026-02-08 04:46:54.395954 | orchestrator | Sunday 08 February 2026 04:46:39 
+0000 (0:00:02.851) 0:01:15.769 ******* 2026-02-08 04:46:54.395961 | orchestrator | skipping: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-02-08 04:46:54.395970 | orchestrator | skipping: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-02-08 04:46:54.395977 | orchestrator | skipping: [testbed-node-0] 2026-02-08 04:46:54.395984 | orchestrator | skipping: [testbed-node-1] 2026-02-08 04:46:54.395991 | orchestrator | skipping: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-02-08 04:46:54.395999 | orchestrator | skipping: [testbed-node-2] 2026-02-08 04:46:54.396007 | orchestrator | skipping: [testbed-node-4] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-02-08 04:46:54.396017 | orchestrator | skipping: [testbed-node-4] 2026-02-08 04:46:54.396028 | orchestrator | skipping: [testbed-node-3] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-02-08 04:46:54.396040 | orchestrator | skipping: [testbed-node-3] 2026-02-08 04:46:54.396052 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml) 2026-02-08 04:46:54.396065 | orchestrator | skipping: [testbed-node-5] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-02-08 04:46:54.396107 | orchestrator | skipping: [testbed-node-5] 2026-02-08 04:46:54.396120 | orchestrator | 2026-02-08 04:46:54.396133 | orchestrator | TASK [prometheus : Find custom Alertmanager alert notification templates] ****** 2026-02-08 04:46:54.396146 | orchestrator | Sunday 08 February 2026 04:46:41 +0000 (0:00:01.933) 0:01:17.703 ******* 2026-02-08 04:46:54.396159 | 
orchestrator | ok: [testbed-manager -> localhost] 2026-02-08 04:46:54.396171 | orchestrator | 2026-02-08 04:46:54.396184 | orchestrator | TASK [prometheus : Copying over custom Alertmanager alert notification templates] *** 2026-02-08 04:46:54.396213 | orchestrator | Sunday 08 February 2026 04:46:42 +0000 (0:00:00.774) 0:01:18.477 ******* 2026-02-08 04:46:54.396227 | orchestrator | skipping: [testbed-manager] 2026-02-08 04:46:54.396243 | orchestrator | skipping: [testbed-node-0] 2026-02-08 04:46:54.396257 | orchestrator | skipping: [testbed-node-1] 2026-02-08 04:46:54.396270 | orchestrator | skipping: [testbed-node-2] 2026-02-08 04:46:54.396284 | orchestrator | skipping: [testbed-node-3] 2026-02-08 04:46:54.396297 | orchestrator | skipping: [testbed-node-4] 2026-02-08 04:46:54.396309 | orchestrator | skipping: [testbed-node-5] 2026-02-08 04:46:54.396322 | orchestrator | 2026-02-08 04:46:54.396337 | orchestrator | TASK [prometheus : Copying over my.cnf for mysqld_exporter] ******************** 2026-02-08 04:46:54.396351 | orchestrator | Sunday 08 February 2026 04:46:43 +0000 (0:00:00.825) 0:01:19.303 ******* 2026-02-08 04:46:54.396363 | orchestrator | skipping: [testbed-manager] 2026-02-08 04:46:54.396370 | orchestrator | skipping: [testbed-node-3] 2026-02-08 04:46:54.396378 | orchestrator | skipping: [testbed-node-5] 2026-02-08 04:46:54.396385 | orchestrator | skipping: [testbed-node-4] 2026-02-08 04:46:54.396392 | orchestrator | changed: [testbed-node-0] 2026-02-08 04:46:54.396399 | orchestrator | changed: [testbed-node-1] 2026-02-08 04:46:54.396406 | orchestrator | changed: [testbed-node-2] 2026-02-08 04:46:54.396413 | orchestrator | 2026-02-08 04:46:54.396421 | orchestrator | TASK [prometheus : Copying cloud config file for openstack exporter] *********** 2026-02-08 04:46:54.396444 | orchestrator | Sunday 08 February 2026 04:46:45 +0000 (0:00:02.435) 0:01:21.739 ******* 2026-02-08 04:46:54.396452 | orchestrator | skipping: [testbed-node-0] => 
(item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-02-08 04:46:54.396487 | orchestrator | skipping: [testbed-node-0] 2026-02-08 04:46:54.396495 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-02-08 04:46:54.396502 | orchestrator | skipping: [testbed-manager] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-02-08 04:46:54.396509 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-02-08 04:46:54.396517 | orchestrator | skipping: [testbed-node-1] 2026-02-08 04:46:54.396524 | orchestrator | skipping: [testbed-manager] 2026-02-08 04:46:54.396531 | orchestrator | skipping: [testbed-node-2] 2026-02-08 04:46:54.396538 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-02-08 04:46:54.396545 | orchestrator | skipping: [testbed-node-3] 2026-02-08 04:46:54.396552 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-02-08 04:46:54.396560 | orchestrator | skipping: [testbed-node-4] 2026-02-08 04:46:54.396567 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-02-08 04:46:54.396574 | orchestrator | skipping: [testbed-node-5] 2026-02-08 04:46:54.396581 | orchestrator | 2026-02-08 04:46:54.396589 | orchestrator | TASK [prometheus : Copying config file for blackbox exporter] ****************** 2026-02-08 04:46:54.396596 | orchestrator | Sunday 08 February 2026 04:46:47 +0000 (0:00:01.571) 0:01:23.311 ******* 2026-02-08 04:46:54.396603 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-02-08 04:46:54.396611 | orchestrator | skipping: [testbed-node-0] 2026-02-08 04:46:54.396619 | orchestrator | skipping: [testbed-node-1] => 
(item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-02-08 04:46:54.396635 | orchestrator | skipping: [testbed-node-1] 2026-02-08 04:46:54.396642 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-02-08 04:46:54.396650 | orchestrator | skipping: [testbed-node-3] 2026-02-08 04:46:54.396657 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-02-08 04:46:54.396664 | orchestrator | skipping: [testbed-node-2] 2026-02-08 04:46:54.396671 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-02-08 04:46:54.396679 | orchestrator | skipping: [testbed-node-5] 2026-02-08 04:46:54.396686 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-02-08 04:46:54.396693 | orchestrator | skipping: [testbed-node-4] 2026-02-08 04:46:54.396700 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2) 2026-02-08 04:46:54.396708 | orchestrator | 2026-02-08 04:46:54.396715 | orchestrator | TASK [prometheus : Find extra prometheus server config files] ****************** 2026-02-08 04:46:54.396722 | orchestrator | Sunday 08 February 2026 04:46:48 +0000 (0:00:01.568) 0:01:24.879 ******* 2026-02-08 04:46:54.396730 | orchestrator | [WARNING]: Skipped 2026-02-08 04:46:54.396739 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' path 2026-02-08 04:46:54.396746 | orchestrator | due to this access issue: 2026-02-08 04:46:54.396753 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' is 2026-02-08 04:46:54.396761 | orchestrator | not a directory 2026-02-08 04:46:54.396768 | orchestrator | ok: [testbed-manager -> 
localhost] 2026-02-08 04:46:54.396775 | orchestrator | 2026-02-08 04:46:54.396782 | orchestrator | TASK [prometheus : Create subdirectories for extra config files] *************** 2026-02-08 04:46:54.396789 | orchestrator | Sunday 08 February 2026 04:46:49 +0000 (0:00:01.205) 0:01:26.084 ******* 2026-02-08 04:46:54.396796 | orchestrator | skipping: [testbed-manager] 2026-02-08 04:46:54.396804 | orchestrator | skipping: [testbed-node-0] 2026-02-08 04:46:54.396811 | orchestrator | skipping: [testbed-node-1] 2026-02-08 04:46:54.396818 | orchestrator | skipping: [testbed-node-2] 2026-02-08 04:46:54.396825 | orchestrator | skipping: [testbed-node-3] 2026-02-08 04:46:54.396832 | orchestrator | skipping: [testbed-node-4] 2026-02-08 04:46:54.396839 | orchestrator | skipping: [testbed-node-5] 2026-02-08 04:46:54.396846 | orchestrator | 2026-02-08 04:46:54.396859 | orchestrator | TASK [prometheus : Template extra prometheus server config files] ************** 2026-02-08 04:46:54.396867 | orchestrator | Sunday 08 February 2026 04:46:50 +0000 (0:00:01.006) 0:01:27.091 ******* 2026-02-08 04:46:54.396874 | orchestrator | skipping: [testbed-manager] 2026-02-08 04:46:54.396881 | orchestrator | skipping: [testbed-node-0] 2026-02-08 04:46:54.396888 | orchestrator | skipping: [testbed-node-1] 2026-02-08 04:46:54.396895 | orchestrator | skipping: [testbed-node-2] 2026-02-08 04:46:54.396903 | orchestrator | skipping: [testbed-node-3] 2026-02-08 04:46:54.396910 | orchestrator | skipping: [testbed-node-4] 2026-02-08 04:46:54.396917 | orchestrator | skipping: [testbed-node-5] 2026-02-08 04:46:54.396924 | orchestrator | 2026-02-08 04:46:54.396931 | orchestrator | TASK [prometheus : Check prometheus containers] ******************************** 2026-02-08 04:46:54.396938 | orchestrator | Sunday 08 February 2026 04:46:51 +0000 (0:00:00.981) 0:01:28.073 ******* 2026-02-08 04:46:54.396954 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': 
{'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-08 04:46:56.167528 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-08 04:46:56.167598 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-02-08 04:46:56.167605 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 
'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-08 04:46:56.167609 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-08 04:46:56.167613 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-08 04:46:56.167630 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-08 04:46:56.167634 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-08 04:46:56.167665 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-08 04:46:56.167669 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-08 04:46:56.167674 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': 
['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-08 04:46:56.167679 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-08 04:46:56.167683 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-08 04:46:56.167691 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', 
'/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-08 04:46:56.167696 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-08 04:46:56.167701 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-08 04:46:56.167714 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-08 04:46:58.224748 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 
'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-02-08 04:46:58.224843 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-02-08 04:46:58.224858 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-08 04:46:58.224888 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': 
{'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-02-08 04:46:58.224901 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-02-08 04:46:58.224931 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-08 04:46:58.224958 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-08 04:46:58.224969 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-08 04:46:58.224980 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-08 04:46:58.225004 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-08 04:46:58.225025 | orchestrator | 
changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-08 04:46:58.225040 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-08 04:46:58.225057 | orchestrator | 2026-02-08 04:46:58.225069 | orchestrator | TASK [prometheus : Creating prometheus database user and setting permissions] *** 2026-02-08 04:46:58.225080 | orchestrator | Sunday 08 February 2026 04:46:56 +0000 (0:00:04.201) 0:01:32.274 ******* 2026-02-08 04:46:58.225090 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2026-02-08 04:46:58.225100 | orchestrator | skipping: [testbed-manager] 2026-02-08 04:46:58.225110 | orchestrator | 2026-02-08 04:46:58.225120 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-02-08 04:46:58.225130 | orchestrator | Sunday 08 February 2026 04:46:57 +0000 (0:00:01.306) 0:01:33.580 ******* 2026-02-08 04:46:58.225139 | orchestrator | 2026-02-08 04:46:58.225149 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 
2026-02-08 04:46:58.225158 | orchestrator | Sunday 08 February 2026 04:46:57 +0000 (0:00:00.259) 0:01:33.840 ******* 2026-02-08 04:46:58.225168 | orchestrator | 2026-02-08 04:46:58.225178 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-02-08 04:46:58.225188 | orchestrator | Sunday 08 February 2026 04:46:57 +0000 (0:00:00.080) 0:01:33.921 ******* 2026-02-08 04:46:58.225197 | orchestrator | 2026-02-08 04:46:58.225207 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-02-08 04:46:58.225216 | orchestrator | Sunday 08 February 2026 04:46:57 +0000 (0:00:00.088) 0:01:34.009 ******* 2026-02-08 04:46:58.225226 | orchestrator | 2026-02-08 04:46:58.225235 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-02-08 04:46:58.225245 | orchestrator | Sunday 08 February 2026 04:46:57 +0000 (0:00:00.068) 0:01:34.078 ******* 2026-02-08 04:46:58.225254 | orchestrator | 2026-02-08 04:46:58.225266 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-02-08 04:46:58.225277 | orchestrator | Sunday 08 February 2026 04:46:58 +0000 (0:00:00.069) 0:01:34.148 ******* 2026-02-08 04:46:58.225287 | orchestrator | 2026-02-08 04:46:58.225299 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-02-08 04:46:58.225316 | orchestrator | Sunday 08 February 2026 04:46:58 +0000 (0:00:00.074) 0:01:34.222 ******* 2026-02-08 04:48:50.235843 | orchestrator | 2026-02-08 04:48:50.235966 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-server container] ************* 2026-02-08 04:48:50.235987 | orchestrator | Sunday 08 February 2026 04:46:58 +0000 (0:00:00.098) 0:01:34.321 ******* 2026-02-08 04:48:50.235999 | orchestrator | changed: [testbed-manager] 2026-02-08 04:48:50.236012 | orchestrator | 2026-02-08 04:48:50.236025 | orchestrator | RUNNING 
HANDLER [prometheus : Restart prometheus-node-exporter container] ****** 2026-02-08 04:48:50.236037 | orchestrator | Sunday 08 February 2026 04:47:19 +0000 (0:00:20.848) 0:01:55.170 ******* 2026-02-08 04:48:50.236049 | orchestrator | changed: [testbed-manager] 2026-02-08 04:48:50.236062 | orchestrator | changed: [testbed-node-4] 2026-02-08 04:48:50.236074 | orchestrator | changed: [testbed-node-5] 2026-02-08 04:48:50.236087 | orchestrator | changed: [testbed-node-0] 2026-02-08 04:48:50.236100 | orchestrator | changed: [testbed-node-3] 2026-02-08 04:48:50.236112 | orchestrator | changed: [testbed-node-1] 2026-02-08 04:48:50.236125 | orchestrator | changed: [testbed-node-2] 2026-02-08 04:48:50.236136 | orchestrator | 2026-02-08 04:48:50.236148 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-mysqld-exporter container] **** 2026-02-08 04:48:50.236159 | orchestrator | Sunday 08 February 2026 04:47:32 +0000 (0:00:13.940) 0:02:09.110 ******* 2026-02-08 04:48:50.236171 | orchestrator | changed: [testbed-node-0] 2026-02-08 04:48:50.236183 | orchestrator | changed: [testbed-node-2] 2026-02-08 04:48:50.236196 | orchestrator | changed: [testbed-node-1] 2026-02-08 04:48:50.236209 | orchestrator | 2026-02-08 04:48:50.236221 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-memcached-exporter container] *** 2026-02-08 04:48:50.236235 | orchestrator | Sunday 08 February 2026 04:47:43 +0000 (0:00:10.537) 0:02:19.647 ******* 2026-02-08 04:48:50.236247 | orchestrator | changed: [testbed-node-1] 2026-02-08 04:48:50.236259 | orchestrator | changed: [testbed-node-0] 2026-02-08 04:48:50.236298 | orchestrator | changed: [testbed-node-2] 2026-02-08 04:48:50.236337 | orchestrator | 2026-02-08 04:48:50.236349 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-cadvisor container] *********** 2026-02-08 04:48:50.236362 | orchestrator | Sunday 08 February 2026 04:47:54 +0000 (0:00:11.171) 0:02:30.819 ******* 2026-02-08 04:48:50.236374 | 
orchestrator | changed: [testbed-node-1] 2026-02-08 04:48:50.236387 | orchestrator | changed: [testbed-manager] 2026-02-08 04:48:50.236399 | orchestrator | changed: [testbed-node-3] 2026-02-08 04:48:50.236411 | orchestrator | changed: [testbed-node-2] 2026-02-08 04:48:50.236423 | orchestrator | changed: [testbed-node-0] 2026-02-08 04:48:50.236435 | orchestrator | changed: [testbed-node-4] 2026-02-08 04:48:50.236446 | orchestrator | changed: [testbed-node-5] 2026-02-08 04:48:50.236456 | orchestrator | 2026-02-08 04:48:50.236468 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-alertmanager container] ******* 2026-02-08 04:48:50.236480 | orchestrator | Sunday 08 February 2026 04:48:09 +0000 (0:00:14.594) 0:02:45.414 ******* 2026-02-08 04:48:50.236491 | orchestrator | changed: [testbed-manager] 2026-02-08 04:48:50.236503 | orchestrator | 2026-02-08 04:48:50.236514 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-elasticsearch-exporter container] *** 2026-02-08 04:48:50.236526 | orchestrator | Sunday 08 February 2026 04:48:17 +0000 (0:00:08.152) 0:02:53.567 ******* 2026-02-08 04:48:50.236538 | orchestrator | changed: [testbed-node-1] 2026-02-08 04:48:50.236550 | orchestrator | changed: [testbed-node-2] 2026-02-08 04:48:50.236561 | orchestrator | changed: [testbed-node-0] 2026-02-08 04:48:50.236572 | orchestrator | 2026-02-08 04:48:50.236584 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-blackbox-exporter container] *** 2026-02-08 04:48:50.236595 | orchestrator | Sunday 08 February 2026 04:48:28 +0000 (0:00:10.743) 0:03:04.310 ******* 2026-02-08 04:48:50.236607 | orchestrator | changed: [testbed-manager] 2026-02-08 04:48:50.236618 | orchestrator | 2026-02-08 04:48:50.236629 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-libvirt-exporter container] *** 2026-02-08 04:48:50.236657 | orchestrator | Sunday 08 February 2026 04:48:38 +0000 (0:00:10.755) 0:03:15.066 ******* 2026-02-08 04:48:50.236668 | 
orchestrator | changed: [testbed-node-3] 2026-02-08 04:48:50.236680 | orchestrator | changed: [testbed-node-4] 2026-02-08 04:48:50.236690 | orchestrator | changed: [testbed-node-5] 2026-02-08 04:48:50.236702 | orchestrator | 2026-02-08 04:48:50.236713 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-08 04:48:50.236727 | orchestrator | testbed-manager : ok=23  changed=14  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2026-02-08 04:48:50.236741 | orchestrator | testbed-node-0 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2026-02-08 04:48:50.236752 | orchestrator | testbed-node-1 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2026-02-08 04:48:50.236766 | orchestrator | testbed-node-2 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2026-02-08 04:48:50.236777 | orchestrator | testbed-node-3 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-02-08 04:48:50.236788 | orchestrator | testbed-node-4 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-02-08 04:48:50.236800 | orchestrator | testbed-node-5 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-02-08 04:48:50.236811 | orchestrator | 2026-02-08 04:48:50.236822 | orchestrator | 2026-02-08 04:48:50.236834 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-08 04:48:50.236846 | orchestrator | Sunday 08 February 2026 04:48:49 +0000 (0:00:10.658) 0:03:25.725 ******* 2026-02-08 04:48:50.236857 | orchestrator | =============================================================================== 2026-02-08 04:48:50.236880 | orchestrator | prometheus : Copying over custom prometheus alert rules files ---------- 26.56s 2026-02-08 04:48:50.236913 | orchestrator | prometheus : Restart prometheus-server container ----------------------- 20.85s 
2026-02-08 04:48:50.236925 | orchestrator | prometheus : Copying over prometheus config file ----------------------- 17.99s 2026-02-08 04:48:50.236937 | orchestrator | prometheus : Restart prometheus-cadvisor container --------------------- 14.59s 2026-02-08 04:48:50.236948 | orchestrator | prometheus : Restart prometheus-node-exporter container ---------------- 13.94s 2026-02-08 04:48:50.236959 | orchestrator | prometheus : Restart prometheus-memcached-exporter container ----------- 11.17s 2026-02-08 04:48:50.236970 | orchestrator | prometheus : Restart prometheus-blackbox-exporter container ------------ 10.76s 2026-02-08 04:48:50.236982 | orchestrator | prometheus : Restart prometheus-elasticsearch-exporter container ------- 10.74s 2026-02-08 04:48:50.236994 | orchestrator | prometheus : Restart prometheus-libvirt-exporter container ------------- 10.66s 2026-02-08 04:48:50.237005 | orchestrator | prometheus : Restart prometheus-mysqld-exporter container -------------- 10.54s 2026-02-08 04:48:50.237016 | orchestrator | prometheus : Restart prometheus-alertmanager container ------------------ 8.15s 2026-02-08 04:48:50.237027 | orchestrator | prometheus : Copying over config.json files ----------------------------- 6.27s 2026-02-08 04:48:50.237038 | orchestrator | service-cert-copy : prometheus | Copying over extra CA certificates ----- 5.38s 2026-02-08 04:48:50.237049 | orchestrator | prometheus : Check prometheus containers -------------------------------- 4.20s 2026-02-08 04:48:50.237061 | orchestrator | prometheus : Ensuring config directories exist -------------------------- 3.06s 2026-02-08 04:48:50.237073 | orchestrator | prometheus : Copying over prometheus web config file -------------------- 2.85s 2026-02-08 04:48:50.237085 | orchestrator | service-cert-copy : prometheus | Copying over backend internal TLS key --- 2.76s 2026-02-08 04:48:50.237096 | orchestrator | prometheus : Copying over my.cnf for mysqld_exporter -------------------- 2.44s 2026-02-08 
04:48:50.237107 | orchestrator | prometheus : Find prometheus host config overrides ---------------------- 1.94s 2026-02-08 04:48:50.237118 | orchestrator | prometheus : Copying over prometheus alertmanager config file ----------- 1.93s 2026-02-08 04:48:53.024781 | orchestrator | 2026-02-08 04:48:53 | INFO  | Task 24d3955b-2f73-4817-810b-52eb168546ff (grafana) was prepared for execution. 2026-02-08 04:48:53.024851 | orchestrator | 2026-02-08 04:48:53 | INFO  | It takes a moment until task 24d3955b-2f73-4817-810b-52eb168546ff (grafana) has been started and output is visible here. 2026-02-08 04:49:04.201886 | orchestrator | 2026-02-08 04:49:04.201972 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-08 04:49:04.201982 | orchestrator | 2026-02-08 04:49:04.201988 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-08 04:49:04.201994 | orchestrator | Sunday 08 February 2026 04:48:58 +0000 (0:00:00.290) 0:00:00.290 ******* 2026-02-08 04:49:04.202001 | orchestrator | ok: [testbed-node-0] 2026-02-08 04:49:04.202007 | orchestrator | ok: [testbed-node-1] 2026-02-08 04:49:04.202012 | orchestrator | ok: [testbed-node-2] 2026-02-08 04:49:04.202061 | orchestrator | 2026-02-08 04:49:04.202068 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-08 04:49:04.202074 | orchestrator | Sunday 08 February 2026 04:48:58 +0000 (0:00:00.344) 0:00:00.634 ******* 2026-02-08 04:49:04.202079 | orchestrator | ok: [testbed-node-0] => (item=enable_grafana_True) 2026-02-08 04:49:04.202097 | orchestrator | ok: [testbed-node-1] => (item=enable_grafana_True) 2026-02-08 04:49:04.202103 | orchestrator | ok: [testbed-node-2] => (item=enable_grafana_True) 2026-02-08 04:49:04.202108 | orchestrator | 2026-02-08 04:49:04.202113 | orchestrator | PLAY [Apply role grafana] ****************************************************** 2026-02-08 
04:49:04.202119 | orchestrator | 2026-02-08 04:49:04.202124 | orchestrator | TASK [grafana : include_tasks] ************************************************* 2026-02-08 04:49:04.202129 | orchestrator | Sunday 08 February 2026 04:48:58 +0000 (0:00:00.511) 0:00:01.146 ******* 2026-02-08 04:49:04.202153 | orchestrator | included: /ansible/roles/grafana/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-08 04:49:04.202159 | orchestrator | 2026-02-08 04:49:04.202165 | orchestrator | TASK [grafana : Ensuring config directories exist] ***************************** 2026-02-08 04:49:04.202170 | orchestrator | Sunday 08 February 2026 04:48:59 +0000 (0:00:00.659) 0:00:01.805 ******* 2026-02-08 04:49:04.202177 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-02-08 04:49:04.202187 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 
'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-02-08 04:49:04.202192 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-02-08 04:49:04.202198 | orchestrator | 2026-02-08 04:49:04.202203 | orchestrator | TASK [grafana : Check if extra configuration file exists] ********************** 2026-02-08 04:49:04.202208 | orchestrator | Sunday 08 February 2026 04:49:00 +0000 (0:00:01.083) 0:00:02.889 ******* 2026-02-08 04:49:04.202214 | orchestrator | [WARNING]: Skipped '/operations/prometheus/grafana' path due to this access 2026-02-08 04:49:04.202220 | orchestrator | issue: '/operations/prometheus/grafana' is not a directory 2026-02-08 04:49:04.202225 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-02-08 04:49:04.202230 | orchestrator | 2026-02-08 04:49:04.202235 | orchestrator | TASK [grafana : include_tasks] ************************************************* 2026-02-08 04:49:04.202240 | orchestrator | Sunday 08 February 2026 04:49:01 +0000 (0:00:00.941) 0:00:03.831 ******* 2026-02-08 04:49:04.202246 | orchestrator | included: /ansible/roles/grafana/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-08 04:49:04.202251 | orchestrator | 2026-02-08 04:49:04.202256 | orchestrator | TASK 
[service-cert-copy : grafana | Copying over extra CA certificates] ******** 2026-02-08 04:49:04.202261 | orchestrator | Sunday 08 February 2026 04:49:02 +0000 (0:00:00.698) 0:00:04.529 ******* 2026-02-08 04:49:04.202282 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-02-08 04:49:04.202337 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-02-08 04:49:04.202344 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-02-08 04:49:04.202350 | orchestrator | 2026-02-08 04:49:04.202355 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS certificate] *** 2026-02-08 04:49:04.202360 | orchestrator | Sunday 08 February 2026 04:49:03 +0000 (0:00:01.289) 0:00:05.818 ******* 2026-02-08 04:49:04.202365 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-02-08 04:49:04.202371 | orchestrator | skipping: [testbed-node-0] 2026-02-08 04:49:04.202376 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': 
'3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-02-08 04:49:04.202382 | orchestrator | skipping: [testbed-node-1] 2026-02-08 04:49:04.202394 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-02-08 04:49:11.390386 | orchestrator | skipping: [testbed-node-2] 2026-02-08 04:49:11.390485 | orchestrator | 2026-02-08 04:49:11.390499 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS key] ***** 2026-02-08 04:49:11.390510 | orchestrator | Sunday 08 February 2026 04:49:04 +0000 (0:00:00.640) 0:00:06.459 ******* 2026-02-08 04:49:11.390536 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'port': '3000', 'listen_port': '3000'}}}})  2026-02-08 04:49:11.390549 | orchestrator | skipping: [testbed-node-0] 2026-02-08 04:49:11.390559 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-02-08 04:49:11.390568 | orchestrator | skipping: [testbed-node-1] 2026-02-08 04:49:11.390577 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-02-08 04:49:11.390586 | orchestrator | skipping: [testbed-node-2] 2026-02-08 04:49:11.390595 | orchestrator | 2026-02-08 04:49:11.390605 | orchestrator | TASK [grafana : Copying over config.json files] ******************************** 2026-02-08 04:49:11.390614 | orchestrator | Sunday 08 February 2026 04:49:04 +0000 (0:00:00.734) 0:00:07.193 ******* 2026-02-08 
04:49:11.390623 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-02-08 04:49:11.390633 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-02-08 04:49:11.390681 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 
'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-02-08 04:49:11.390693 | orchestrator | 2026-02-08 04:49:11.390701 | orchestrator | TASK [grafana : Copying over grafana.ini] ************************************** 2026-02-08 04:49:11.390710 | orchestrator | Sunday 08 February 2026 04:49:06 +0000 (0:00:01.384) 0:00:08.578 ******* 2026-02-08 04:49:11.390719 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-02-08 04:49:11.390728 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-02-08 04:49:11.390737 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': 
{'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-02-08 04:49:11.390746 | orchestrator | 2026-02-08 04:49:11.390755 | orchestrator | TASK [grafana : Copying over extra configuration file] ************************* 2026-02-08 04:49:11.390764 | orchestrator | Sunday 08 February 2026 04:49:07 +0000 (0:00:01.671) 0:00:10.250 ******* 2026-02-08 04:49:11.390773 | orchestrator | skipping: [testbed-node-0] 2026-02-08 04:49:11.390781 | orchestrator | skipping: [testbed-node-1] 2026-02-08 04:49:11.390790 | orchestrator | skipping: [testbed-node-2] 2026-02-08 04:49:11.390799 | orchestrator | 2026-02-08 04:49:11.390807 | orchestrator | TASK [grafana : Configuring Prometheus as data source for Grafana] ************* 2026-02-08 04:49:11.390824 | orchestrator | Sunday 08 February 2026 04:49:08 +0000 (0:00:00.347) 0:00:10.598 ******* 2026-02-08 04:49:11.390832 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2) 2026-02-08 04:49:11.390842 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2) 2026-02-08 04:49:11.390851 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2) 2026-02-08 04:49:11.390859 | orchestrator | 2026-02-08 04:49:11.390869 | orchestrator | TASK [grafana : Configuring dashboards provisioning] *************************** 2026-02-08 04:49:11.390880 | 
orchestrator | Sunday 08 February 2026 04:49:09 +0000 (0:00:01.266) 0:00:11.865 ******* 2026-02-08 04:49:11.390890 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml) 2026-02-08 04:49:11.390901 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml) 2026-02-08 04:49:11.390911 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml) 2026-02-08 04:49:11.390921 | orchestrator | 2026-02-08 04:49:11.390931 | orchestrator | TASK [grafana : Find custom grafana dashboards] ******************************** 2026-02-08 04:49:11.390946 | orchestrator | Sunday 08 February 2026 04:49:11 +0000 (0:00:01.775) 0:00:13.640 ******* 2026-02-08 04:49:18.142975 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-02-08 04:49:18.143055 | orchestrator | 2026-02-08 04:49:18.143063 | orchestrator | TASK [grafana : Find templated grafana dashboards] ***************************** 2026-02-08 04:49:18.143070 | orchestrator | Sunday 08 February 2026 04:49:12 +0000 (0:00:00.831) 0:00:14.471 ******* 2026-02-08 04:49:18.143090 | orchestrator | [WARNING]: Skipped '/etc/kolla/grafana/dashboards' path due to this access 2026-02-08 04:49:18.143097 | orchestrator | issue: '/etc/kolla/grafana/dashboards' is not a directory 2026-02-08 04:49:18.143103 | orchestrator | ok: [testbed-node-0] 2026-02-08 04:49:18.143110 | orchestrator | ok: [testbed-node-1] 2026-02-08 04:49:18.143116 | orchestrator | ok: [testbed-node-2] 2026-02-08 04:49:18.143121 | orchestrator | 2026-02-08 04:49:18.143127 | orchestrator | TASK [grafana : Prune templated Grafana dashboards] **************************** 2026-02-08 04:49:18.143133 | orchestrator | Sunday 08 February 2026 04:49:12 +0000 (0:00:00.746) 0:00:15.217 ******* 2026-02-08 04:49:18.143139 | orchestrator | skipping: [testbed-node-0] 2026-02-08 
04:49:18.143145 | orchestrator | skipping: [testbed-node-1] 2026-02-08 04:49:18.143151 | orchestrator | skipping: [testbed-node-2] 2026-02-08 04:49:18.143157 | orchestrator | 2026-02-08 04:49:18.143163 | orchestrator | TASK [grafana : Copying over custom dashboards] ******************************** 2026-02-08 04:49:18.143169 | orchestrator | Sunday 08 February 2026 04:49:13 +0000 (0:00:00.410) 0:00:15.628 ******* 2026-02-08 04:49:18.143177 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1097685, 'dev': 172, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770518938.1520777, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-08 04:49:18.143186 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1097685, 'dev': 172, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770518938.1520777, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-08 04:49:18.143213 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': 
'/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1097685, 'dev': 172, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770518938.1520777, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-08 04:49:18.143220 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1097746, 'dev': 172, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770518938.1664503, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-08 04:49:18.143239 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1097746, 'dev': 172, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770518938.1664503, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-08 04:49:18.143250 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1097746, 'dev': 172, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770518938.1664503, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-08 04:49:18.143256 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1097707, 'dev': 172, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770518938.1553216, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-08 04:49:18.143262 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1097707, 'dev': 172, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770518938.1553216, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-08 04:49:18.143312 | orchestrator | changed: [testbed-node-2] => 
(item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1097707, 'dev': 172, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770518938.1553216, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-08 04:49:18.143319 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1097747, 'dev': 172, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770518938.169078, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-08 04:49:18.143325 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1097747, 'dev': 172, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770518938.169078, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-08 04:49:18.143341 | orchestrator | 
changed: [testbed-node-2] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1097747, 'dev': 172, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770518938.169078, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-08 04:49:21.816574 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1097722, 'dev': 172, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770518938.160365, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-08 04:49:21.816668 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1097722, 'dev': 172, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770518938.160365, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': 
False}}) 2026-02-08 04:49:21.816700 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1097722, 'dev': 172, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770518938.160365, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-08 04:49:21.816709 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39556, 'inode': 1097740, 'dev': 172, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770518938.165078, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-08 04:49:21.816717 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39556, 'inode': 1097740, 'dev': 172, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770518938.165078, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': 
True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-08 04:49:21.816724 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39556, 'inode': 1097740, 'dev': 172, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770518938.165078, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-08 04:49:21.816760 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1097683, 'dev': 172, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770518938.1505666, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-08 04:49:21.816768 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1097683, 'dev': 172, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770518938.1505666, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': 
False, 'isuid': False, 'isgid': False}}) 2026-02-08 04:49:21.816781 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1097683, 'dev': 172, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770518938.1505666, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-08 04:49:21.816788 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1097691, 'dev': 172, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770518938.1530778, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-08 04:49:21.816796 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1097691, 'dev': 172, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770518938.1530778, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': 
False, 'isuid': False, 'isgid': False}}) 2026-02-08 04:49:21.816803 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1097691, 'dev': 172, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770518938.1530778, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-08 04:49:21.816818 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1097708, 'dev': 172, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770518938.1560807, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-08 04:49:25.609835 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1097708, 'dev': 172, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770518938.1560807, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': 
False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-08 04:49:25.609943 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1097708, 'dev': 172, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770518938.1560807, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-08 04:49:25.609985 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1097729, 'dev': 172, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770518938.162078, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-08 04:49:25.609998 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1097729, 'dev': 172, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770518938.162078, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': 
False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-08 04:49:25.610010 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1097729, 'dev': 172, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770518938.162078, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-08 04:49:25.610099 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1097745, 'dev': 172, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770518938.166078, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-08 04:49:25.610162 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1097745, 'dev': 172, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770518938.166078, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 
'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-08 04:49:25.610176 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1097745, 'dev': 172, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770518938.166078, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-08 04:49:25.610198 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1097696, 'dev': 172, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770518938.1553216, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-08 04:49:25.610210 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1097696, 'dev': 172, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770518938.1553216, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': 
False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-08 04:49:25.610222 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1097696, 'dev': 172, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770518938.1553216, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-08 04:49:25.610233 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1097736, 'dev': 172, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770518938.164078, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-08 04:49:25.610259 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1097736, 'dev': 172, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770518938.164078, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': 
True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-08 04:49:29.876882 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1097736, 'dev': 172, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770518938.164078, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-08 04:49:29.877001 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1097725, 'dev': 172, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770518938.161679, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-08 04:49:29.877010 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1097725, 'dev': 172, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770518938.161679, 'gr_name': 'root', 'pw_name': 'root', 
'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-08 04:49:29.877017 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1097725, 'dev': 172, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770518938.161679, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-08 04:49:29.877024 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62676, 'inode': 1097714, 'dev': 172, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770518938.159078, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-08 04:49:29.877047 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62676, 'inode': 1097714, 'dev': 172, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 
1770518938.159078, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-08 04:49:29.877069 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62676, 'inode': 1097714, 'dev': 172, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770518938.159078, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-08 04:49:29.877082 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1097713, 'dev': 172, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770518938.158078, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-08 04:49:29.877089 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1097713, 'dev': 172, 'nlink': 1, 'atime': 
1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770518938.158078, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-08 04:49:29.877096 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1097713, 'dev': 172, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770518938.158078, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-08 04:49:29.877103 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1097732, 'dev': 172, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770518938.1637294, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-08 04:49:29.877109 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1097732, 'dev': 
172, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770518938.1637294, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-08 04:49:29.877133 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1097732, 'dev': 172, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770518938.1637294, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-08 04:49:33.759874 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1097710, 'dev': 172, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770518938.1563332, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-08 04:49:33.759969 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 
'inode': 1097710, 'dev': 172, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770518938.1563332, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-08 04:49:33.759981 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1097710, 'dev': 172, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770518938.1563332, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-08 04:49:33.759992 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1097744, 'dev': 172, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770518938.1657617, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-08 04:49:33.760002 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 
'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1097744, 'dev': 172, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770518938.1657617, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-08 04:49:33.760026 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1097744, 'dev': 172, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770518938.1657617, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-08 04:49:33.760072 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1097939, 'dev': 172, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770518938.229079, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-08 04:49:33.760083 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 
'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1097939, 'dev': 172, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770518938.229079, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-08 04:49:33.760092 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1097939, 'dev': 172, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770518938.229079, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-08 04:49:33.760101 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1097782, 'dev': 172, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770518938.1810782, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-08 04:49:33.760110 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': 
False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1097782, 'dev': 172, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770518938.1810782, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-08 04:49:33.760124 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1097782, 'dev': 172, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770518938.1810782, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-08 04:49:33.760148 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1097760, 'dev': 172, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770518938.1720781, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-08 04:49:37.368843 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/database.json', 'value': {'path': 
'/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1097760, 'dev': 172, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770518938.1720781, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-08 04:49:37.368940 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1097760, 'dev': 172, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770518938.1720781, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-08 04:49:37.368956 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15725, 'inode': 1097869, 'dev': 172, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770518938.2103922, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-08 04:49:37.368968 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15725, 'inode': 1097869, 'dev': 172, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770518938.2103922, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-08 04:49:37.368995 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15725, 'inode': 1097869, 'dev': 172, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770518938.2103922, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-08 04:49:37.369028 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1097753, 'dev': 172, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770518938.169679, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-08 04:49:37.369058 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1097753, 'dev': 172, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770518938.169679, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-08 04:49:37.369070 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1097753, 'dev': 172, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770518938.169679, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-08 04:49:37.369081 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1097912, 'dev': 172, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770518938.219398, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-08 04:49:37.369090 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1097912, 'dev': 172, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770518938.219398, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-08 04:49:37.369097 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1097912, 'dev': 172, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770518938.219398, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-08 04:49:37.369115 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1097875, 'dev': 172, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770518938.2175004, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-08 04:49:37.369137 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1097875, 'dev': 172, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770518938.2175004, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-08 04:49:41.343990 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1097875, 'dev': 172, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770518938.2175004, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-08 04:49:41.344069 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22317, 'inode': 1097915, 'dev': 172, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770518938.220079, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-08 04:49:41.344079 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22317, 'inode': 1097915, 'dev': 172, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770518938.220079, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-08 04:49:41.344085 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22317, 'inode': 1097915, 'dev': 172, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770518938.220079, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-08 04:49:41.344120 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1097933, 'dev': 172, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770518938.2270792, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-08 04:49:41.344127 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1097933, 'dev': 172, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770518938.2270792, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-08 04:49:41.344144 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1097933, 'dev': 172, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770518938.2270792, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-08 04:49:41.344151 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21109, 'inode': 1097908, 'dev': 172, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770518938.2184708, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-08 04:49:41.344156 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21109, 'inode': 1097908, 'dev': 172, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770518938.2184708, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-08 04:49:41.344161 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21109, 'inode': 1097908, 'dev': 172, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770518938.2184708, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-08 04:49:41.344174 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1097794, 'dev': 172, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770518938.1830783, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-08 04:49:41.344179 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1097794, 'dev': 172, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770518938.1830783, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-08 04:49:41.344191 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1097794, 'dev': 172, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770518938.1830783, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-08 04:49:45.454631 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1097776, 'dev': 172, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770518938.1770782, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-08 04:49:45.454741 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1097776, 'dev': 172, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770518938.1770782, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-08 04:49:45.454759 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1097776, 'dev': 172, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770518938.1770782, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-08 04:49:45.454813 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1097786, 'dev': 172, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770518938.1818447, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-08 04:49:45.454827 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1097786, 'dev': 172, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770518938.1818447, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-08 04:49:45.454838 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1097786, 'dev': 172, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770518938.1818447, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-08 04:49:45.454870 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1097765, 'dev': 172, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770518938.1750782, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-08 04:49:45.454884 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1097765, 'dev': 172, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770518938.1750782, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-08 04:49:45.454895 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1097765, 'dev': 172, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770518938.1750782, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-08 04:49:45.454917 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16098, 'inode': 1097795, 'dev': 172, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770518938.1840782, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-08 04:49:45.454935 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16098, 'inode': 1097795, 'dev': 172, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770518938.1840782, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-08 04:49:45.454947 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16098, 'inode': 1097795, 'dev': 172, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770518938.1840782, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-08 04:49:45.454967 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1097926, 'dev': 172, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770518938.225079, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-08 04:49:49.114288 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1097926, 'dev': 172, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770518938.225079, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-08 04:49:49.114406 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1097926, 'dev': 172, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770518938.225079, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-08 04:49:49.114439 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1097922, 'dev': 172, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770518938.223079, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-08 04:49:49.114458 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1097922, 'dev': 172, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770518938.223079, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-08 04:49:49.114465 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1097922, 'dev': 172, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770518938.223079, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-08 04:49:49.114471 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1097754, 'dev': 172, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770518938.170078, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-08 04:49:49.114492 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1097754, 'dev': 172, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770518938.170078, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-08 04:49:49.114498 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1097754, 'dev': 172, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770518938.170078, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-08 04:49:49.114509 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1097757, 'dev': 172, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770518938.1710782, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-08 04:49:49.114519 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1097757, 'dev': 172, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770518938.1710782, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-08 04:49:49.114525 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1097757, 'dev': 172, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770518938.1710782, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-08 04:49:49.114531 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1097906, 'dev': 172, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770518938.2180789, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-08 04:49:49.114543 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1097906, 'dev': 172, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770518938.2180789, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-08 04:51:19.983009 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1097906, 'dev': 172, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770518938.2180789, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-08 04:51:19.983317 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21898, 'inode': 1097919, 'dev': 172, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770518938.2210789, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-08 04:51:19.983372 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21898, 'inode': 1097919, 'dev': 172, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770518938.2210789, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-08 04:51:19.983395 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21898, 'inode': 1097919, 'dev': 172, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770518938.2210789, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-08 04:51:19.983413 | orchestrator |
2026-02-08 04:51:19.983433 | orchestrator | TASK [grafana : Check grafana containers] **************************************
2026-02-08 04:51:19.983479 | orchestrator | Sunday 08 February 2026 04:49:50 +0000 (0:00:36.990) 0:00:52.618 *******
2026-02-08 04:51:19.983499 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-02-08 04:51:19.983544 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-02-08 04:51:19.983575 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-02-08 04:51:19.983587 | orchestrator |
2026-02-08 04:51:19.983599 | orchestrator | TASK [grafana : Creating grafana database] *************************************
2026-02-08 04:51:19.983609 | orchestrator | Sunday 08 February 2026 04:49:51 +0000 (0:00:01.106) 0:00:53.725 *******
2026-02-08 04:51:19.983619 | orchestrator | changed: [testbed-node-0]
2026-02-08 04:51:19.983630 | orchestrator |
2026-02-08 04:51:19.983639 | orchestrator | TASK [grafana : Creating grafana database user and setting permissions] ********
2026-02-08 04:51:19.983649 | orchestrator | Sunday 08 February 2026 04:49:53 +0000 (0:00:02.270) 0:00:55.995
******* 2026-02-08 04:51:19.983658 | orchestrator | changed: [testbed-node-0] 2026-02-08 04:51:19.983668 | orchestrator | 2026-02-08 04:51:19.983677 | orchestrator | TASK [grafana : Flush handlers] ************************************************ 2026-02-08 04:51:19.983686 | orchestrator | Sunday 08 February 2026 04:49:56 +0000 (0:00:02.317) 0:00:58.313 ******* 2026-02-08 04:51:19.983696 | orchestrator | 2026-02-08 04:51:19.983704 | orchestrator | TASK [grafana : Flush handlers] ************************************************ 2026-02-08 04:51:19.983712 | orchestrator | Sunday 08 February 2026 04:49:56 +0000 (0:00:00.086) 0:00:58.399 ******* 2026-02-08 04:51:19.983720 | orchestrator | 2026-02-08 04:51:19.983727 | orchestrator | TASK [grafana : Flush handlers] ************************************************ 2026-02-08 04:51:19.983735 | orchestrator | Sunday 08 February 2026 04:49:56 +0000 (0:00:00.083) 0:00:58.482 ******* 2026-02-08 04:51:19.983742 | orchestrator | 2026-02-08 04:51:19.983757 | orchestrator | RUNNING HANDLER [grafana : Restart first grafana container] ******************** 2026-02-08 04:51:19.983771 | orchestrator | Sunday 08 February 2026 04:49:56 +0000 (0:00:00.125) 0:00:58.608 ******* 2026-02-08 04:51:19.983783 | orchestrator | skipping: [testbed-node-1] 2026-02-08 04:51:19.983796 | orchestrator | skipping: [testbed-node-2] 2026-02-08 04:51:19.983809 | orchestrator | changed: [testbed-node-0] 2026-02-08 04:51:19.983821 | orchestrator | 2026-02-08 04:51:19.983833 | orchestrator | RUNNING HANDLER [grafana : Waiting for grafana to start on first node] ********* 2026-02-08 04:51:19.983845 | orchestrator | Sunday 08 February 2026 04:50:03 +0000 (0:00:07.139) 0:01:05.747 ******* 2026-02-08 04:51:19.983858 | orchestrator | skipping: [testbed-node-1] 2026-02-08 04:51:19.983872 | orchestrator | skipping: [testbed-node-2] 2026-02-08 04:51:19.983885 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (12 retries 
left). 2026-02-08 04:51:19.983901 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (11 retries left). 2026-02-08 04:51:19.983914 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (10 retries left). 2026-02-08 04:51:19.983928 | orchestrator | ok: [testbed-node-0] 2026-02-08 04:51:19.983939 | orchestrator | 2026-02-08 04:51:19.983947 | orchestrator | RUNNING HANDLER [grafana : Restart remaining grafana containers] *************** 2026-02-08 04:51:19.983955 | orchestrator | Sunday 08 February 2026 04:50:41 +0000 (0:00:38.491) 0:01:44.239 ******* 2026-02-08 04:51:19.983963 | orchestrator | skipping: [testbed-node-0] 2026-02-08 04:51:19.983970 | orchestrator | changed: [testbed-node-1] 2026-02-08 04:51:19.983978 | orchestrator | changed: [testbed-node-2] 2026-02-08 04:51:19.983993 | orchestrator | 2026-02-08 04:51:19.984001 | orchestrator | TASK [grafana : Wait for grafana application ready] **************************** 2026-02-08 04:51:19.984008 | orchestrator | Sunday 08 February 2026 04:51:14 +0000 (0:00:32.676) 0:02:16.916 ******* 2026-02-08 04:51:19.984016 | orchestrator | ok: [testbed-node-0] 2026-02-08 04:51:19.984024 | orchestrator | 2026-02-08 04:51:19.984032 | orchestrator | TASK [grafana : Remove old grafana docker volume] ****************************** 2026-02-08 04:51:19.984040 | orchestrator | Sunday 08 February 2026 04:51:16 +0000 (0:00:02.304) 0:02:19.220 ******* 2026-02-08 04:51:19.984047 | orchestrator | skipping: [testbed-node-0] 2026-02-08 04:51:19.984055 | orchestrator | skipping: [testbed-node-1] 2026-02-08 04:51:19.984063 | orchestrator | skipping: [testbed-node-2] 2026-02-08 04:51:19.984070 | orchestrator | 2026-02-08 04:51:19.984078 | orchestrator | TASK [grafana : Enable grafana datasources] ************************************ 2026-02-08 04:51:19.984086 | orchestrator | Sunday 08 February 2026 04:51:17 +0000 (0:00:00.326) 0:02:19.547 ******* 
2026-02-08 04:51:19.984095 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'influxdb', 'value': {'enabled': False, 'data': {'isDefault': True, 'database': 'telegraf', 'name': 'telegraf', 'type': 'influxdb', 'url': 'https://api-int.testbed.osism.xyz:8086', 'access': 'proxy', 'basicAuth': False}}})  2026-02-08 04:51:19.984114 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'data': {'name': 'opensearch', 'type': 'grafana-opensearch-datasource', 'access': 'proxy', 'url': 'https://api-int.testbed.osism.xyz:9200', 'jsonData': {'flavor': 'OpenSearch', 'database': 'flog-*', 'version': '2.11.1', 'timeField': '@timestamp', 'logLevelField': 'log_level'}}}}) 2026-02-08 04:51:20.759551 | orchestrator | 2026-02-08 04:51:20.759674 | orchestrator | TASK [grafana : Disable Getting Started panel] ********************************* 2026-02-08 04:51:20.759693 | orchestrator | Sunday 08 February 2026 04:51:19 +0000 (0:00:02.689) 0:02:22.237 ******* 2026-02-08 04:51:20.759703 | orchestrator | skipping: [testbed-node-0] 2026-02-08 04:51:20.759713 | orchestrator | 2026-02-08 04:51:20.759722 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-08 04:51:20.759732 | orchestrator | testbed-node-0 : ok=21  changed=12  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-02-08 04:51:20.759742 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-02-08 04:51:20.759751 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-02-08 04:51:20.759760 | orchestrator | 2026-02-08 04:51:20.759769 | orchestrator | 2026-02-08 04:51:20.759777 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-08 04:51:20.759786 | orchestrator | Sunday 08 February 2026 04:51:20 +0000 (0:00:00.326) 0:02:22.563 ******* 2026-02-08 
04:51:20.759795 | orchestrator | =============================================================================== 2026-02-08 04:51:20.759803 | orchestrator | grafana : Waiting for grafana to start on first node ------------------- 38.49s 2026-02-08 04:51:20.759812 | orchestrator | grafana : Copying over custom dashboards ------------------------------- 36.99s 2026-02-08 04:51:20.759820 | orchestrator | grafana : Restart remaining grafana containers ------------------------- 32.68s 2026-02-08 04:51:20.759829 | orchestrator | grafana : Restart first grafana container ------------------------------- 7.14s 2026-02-08 04:51:20.759838 | orchestrator | grafana : Enable grafana datasources ------------------------------------ 2.69s 2026-02-08 04:51:20.759846 | orchestrator | grafana : Creating grafana database user and setting permissions -------- 2.32s 2026-02-08 04:51:20.759855 | orchestrator | grafana : Wait for grafana application ready ---------------------------- 2.30s 2026-02-08 04:51:20.759863 | orchestrator | grafana : Creating grafana database ------------------------------------- 2.27s 2026-02-08 04:51:20.759890 | orchestrator | grafana : Configuring dashboards provisioning --------------------------- 1.78s 2026-02-08 04:51:20.759921 | orchestrator | grafana : Copying over grafana.ini -------------------------------------- 1.67s 2026-02-08 04:51:20.759930 | orchestrator | grafana : Copying over config.json files -------------------------------- 1.38s 2026-02-08 04:51:20.759939 | orchestrator | service-cert-copy : grafana | Copying over extra CA certificates -------- 1.29s 2026-02-08 04:51:20.759947 | orchestrator | grafana : Configuring Prometheus as data source for Grafana ------------- 1.27s 2026-02-08 04:51:20.759956 | orchestrator | grafana : Check grafana containers -------------------------------------- 1.11s 2026-02-08 04:51:20.759964 | orchestrator | grafana : Ensuring config directories exist ----------------------------- 1.08s 2026-02-08 04:51:20.759973 
| orchestrator | grafana : Check if extra configuration file exists ---------------------- 0.94s 2026-02-08 04:51:20.759982 | orchestrator | grafana : Find custom grafana dashboards -------------------------------- 0.83s 2026-02-08 04:51:20.759990 | orchestrator | grafana : Find templated grafana dashboards ----------------------------- 0.75s 2026-02-08 04:51:20.759998 | orchestrator | service-cert-copy : grafana | Copying over backend internal TLS key ----- 0.73s 2026-02-08 04:51:20.760007 | orchestrator | grafana : include_tasks ------------------------------------------------- 0.70s 2026-02-08 04:51:21.160662 | orchestrator | + sh -c /opt/configuration/scripts/deploy/510-clusterapi.sh 2026-02-08 04:51:21.165941 | orchestrator | + set -e 2026-02-08 04:51:21.166005 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-02-08 04:51:21.166725 | orchestrator | ++ export INTERACTIVE=false 2026-02-08 04:51:21.166740 | orchestrator | ++ INTERACTIVE=false 2026-02-08 04:51:21.166747 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-02-08 04:51:21.166814 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-02-08 04:51:21.166824 | orchestrator | + source /opt/manager-vars.sh 2026-02-08 04:51:21.167875 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-02-08 04:51:21.167905 | orchestrator | ++ NUMBER_OF_NODES=6 2026-02-08 04:51:21.167912 | orchestrator | ++ export CEPH_VERSION=reef 2026-02-08 04:51:21.167919 | orchestrator | ++ CEPH_VERSION=reef 2026-02-08 04:51:21.167925 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-02-08 04:51:21.167933 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-02-08 04:51:21.167940 | orchestrator | ++ export MANAGER_VERSION=9.5.0 2026-02-08 04:51:21.167952 | orchestrator | ++ MANAGER_VERSION=9.5.0 2026-02-08 04:51:21.168379 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-02-08 04:51:21.168398 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-02-08 04:51:21.168406 | orchestrator | ++ export ARA=false 2026-02-08 
04:51:21.168412 | orchestrator | ++ ARA=false 2026-02-08 04:51:21.168419 | orchestrator | ++ export DEPLOY_MODE=manager 2026-02-08 04:51:21.168426 | orchestrator | ++ DEPLOY_MODE=manager 2026-02-08 04:51:21.168433 | orchestrator | ++ export TEMPEST=false 2026-02-08 04:51:21.168439 | orchestrator | ++ TEMPEST=false 2026-02-08 04:51:21.168446 | orchestrator | ++ export IS_ZUUL=true 2026-02-08 04:51:21.168452 | orchestrator | ++ IS_ZUUL=true 2026-02-08 04:51:21.168459 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.37 2026-02-08 04:51:21.168465 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.37 2026-02-08 04:51:21.168471 | orchestrator | ++ export EXTERNAL_API=false 2026-02-08 04:51:21.168478 | orchestrator | ++ EXTERNAL_API=false 2026-02-08 04:51:21.168485 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-02-08 04:51:21.168491 | orchestrator | ++ IMAGE_USER=ubuntu 2026-02-08 04:51:21.168497 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-02-08 04:51:21.168504 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-02-08 04:51:21.168510 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-02-08 04:51:21.168517 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-02-08 04:51:21.168751 | orchestrator | ++ semver 9.5.0 8.0.0 2026-02-08 04:51:21.229448 | orchestrator | + [[ 1 -ge 0 ]] 2026-02-08 04:51:21.229546 | orchestrator | + osism apply clusterapi 2026-02-08 04:51:23.367297 | orchestrator | 2026-02-08 04:51:23 | INFO  | Task f35eb51d-07fd-4dc1-bc61-850edc0b6756 (clusterapi) was prepared for execution. 2026-02-08 04:51:23.367427 | orchestrator | 2026-02-08 04:51:23 | INFO  | It takes a moment until task f35eb51d-07fd-4dc1-bc61-850edc0b6756 (clusterapi) has been started and output is visible here. 
2026-02-08 04:52:23.368569 | orchestrator | 2026-02-08 04:52:23.368686 | orchestrator | PLAY [Apply cert_manager role] ************************************************* 2026-02-08 04:52:23.368696 | orchestrator | 2026-02-08 04:52:23.368701 | orchestrator | TASK [Include cert_manager role] *********************************************** 2026-02-08 04:52:23.368725 | orchestrator | Sunday 08 February 2026 04:51:27 +0000 (0:00:00.192) 0:00:00.192 ******* 2026-02-08 04:52:23.368730 | orchestrator | included: cert_manager for testbed-manager 2026-02-08 04:52:23.368735 | orchestrator | 2026-02-08 04:52:23.368739 | orchestrator | TASK [cert_manager : Deploy cert-manager crds] ********************************* 2026-02-08 04:52:23.368743 | orchestrator | Sunday 08 February 2026 04:51:28 +0000 (0:00:00.309) 0:00:00.502 ******* 2026-02-08 04:52:23.368748 | orchestrator | changed: [testbed-manager] 2026-02-08 04:52:23.368753 | orchestrator | 2026-02-08 04:52:23.368757 | orchestrator | TASK [cert_manager : Deploy cert-manager] ************************************** 2026-02-08 04:52:23.368761 | orchestrator | Sunday 08 February 2026 04:51:33 +0000 (0:00:05.624) 0:00:06.126 ******* 2026-02-08 04:52:23.368775 | orchestrator | changed: [testbed-manager] 2026-02-08 04:52:23.368779 | orchestrator | 2026-02-08 04:52:23.368790 | orchestrator | PLAY [Initialize or upgrade the CAPI management cluster] *********************** 2026-02-08 04:52:23.368794 | orchestrator | 2026-02-08 04:52:23.368798 | orchestrator | TASK [Get capi-system namespace phase] ***************************************** 2026-02-08 04:52:23.368802 | orchestrator | Sunday 08 February 2026 04:52:01 +0000 (0:00:28.000) 0:00:34.127 ******* 2026-02-08 04:52:23.368806 | orchestrator | ok: [testbed-manager] 2026-02-08 04:52:23.368811 | orchestrator | 2026-02-08 04:52:23.368815 | orchestrator | TASK [Set capi-system-phase fact] ********************************************** 2026-02-08 04:52:23.368819 | orchestrator | Sunday 08 
February 2026 04:52:03 +0000 (0:00:01.160) 0:00:35.288 ******* 2026-02-08 04:52:23.368823 | orchestrator | ok: [testbed-manager] 2026-02-08 04:52:23.368827 | orchestrator | 2026-02-08 04:52:23.368832 | orchestrator | TASK [Initialize the CAPI management cluster] ********************************** 2026-02-08 04:52:23.368836 | orchestrator | Sunday 08 February 2026 04:52:03 +0000 (0:00:00.163) 0:00:35.451 ******* 2026-02-08 04:52:23.368840 | orchestrator | ok: [testbed-manager] 2026-02-08 04:52:23.368844 | orchestrator | 2026-02-08 04:52:23.368848 | orchestrator | TASK [Upgrade the CAPI management cluster] ************************************* 2026-02-08 04:52:23.368852 | orchestrator | Sunday 08 February 2026 04:52:20 +0000 (0:00:17.256) 0:00:52.708 ******* 2026-02-08 04:52:23.368856 | orchestrator | skipping: [testbed-manager] 2026-02-08 04:52:23.368861 | orchestrator | 2026-02-08 04:52:23.368865 | orchestrator | TASK [Install openstack-resource-controller] *********************************** 2026-02-08 04:52:23.368869 | orchestrator | Sunday 08 February 2026 04:52:20 +0000 (0:00:00.163) 0:00:52.871 ******* 2026-02-08 04:52:23.368883 | orchestrator | changed: [testbed-manager] 2026-02-08 04:52:23.368887 | orchestrator | 2026-02-08 04:52:23.368892 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-08 04:52:23.368897 | orchestrator | testbed-manager : ok=7  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-08 04:52:23.368901 | orchestrator | 2026-02-08 04:52:23.368906 | orchestrator | 2026-02-08 04:52:23.368910 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-08 04:52:23.368914 | orchestrator | Sunday 08 February 2026 04:52:22 +0000 (0:00:02.299) 0:00:55.171 ******* 2026-02-08 04:52:23.368918 | orchestrator | =============================================================================== 2026-02-08 04:52:23.368922 | orchestrator | 
cert_manager : Deploy cert-manager ------------------------------------- 28.00s 2026-02-08 04:52:23.368926 | orchestrator | Initialize the CAPI management cluster --------------------------------- 17.26s 2026-02-08 04:52:23.368930 | orchestrator | cert_manager : Deploy cert-manager crds --------------------------------- 5.62s 2026-02-08 04:52:23.368934 | orchestrator | Install openstack-resource-controller ----------------------------------- 2.30s 2026-02-08 04:52:23.368939 | orchestrator | Get capi-system namespace phase ----------------------------------------- 1.16s 2026-02-08 04:52:23.368943 | orchestrator | Include cert_manager role ----------------------------------------------- 0.31s 2026-02-08 04:52:23.368947 | orchestrator | Set capi-system-phase fact ---------------------------------------------- 0.16s 2026-02-08 04:52:23.368951 | orchestrator | Upgrade the CAPI management cluster ------------------------------------- 0.16s 2026-02-08 04:52:23.769466 | orchestrator | + osism apply magnum 2026-02-08 04:52:26.007100 | orchestrator | 2026-02-08 04:52:26 | INFO  | Task e4715cbe-065d-41d8-ab3a-75f0520dc9de (magnum) was prepared for execution. 2026-02-08 04:52:26.007192 | orchestrator | 2026-02-08 04:52:26 | INFO  | It takes a moment until task e4715cbe-065d-41d8-ab3a-75f0520dc9de (magnum) has been started and output is visible here. 
2026-02-08 04:53:07.956704 | orchestrator | 2026-02-08 04:53:07.956805 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-08 04:53:07.956824 | orchestrator | 2026-02-08 04:53:07.956839 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-08 04:53:07.956854 | orchestrator | Sunday 08 February 2026 04:52:30 +0000 (0:00:00.319) 0:00:00.319 ******* 2026-02-08 04:53:07.956870 | orchestrator | ok: [testbed-node-0] 2026-02-08 04:53:07.956885 | orchestrator | ok: [testbed-node-1] 2026-02-08 04:53:07.956894 | orchestrator | ok: [testbed-node-2] 2026-02-08 04:53:07.956902 | orchestrator | 2026-02-08 04:53:07.956911 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-08 04:53:07.956920 | orchestrator | Sunday 08 February 2026 04:52:30 +0000 (0:00:00.326) 0:00:00.645 ******* 2026-02-08 04:53:07.956928 | orchestrator | ok: [testbed-node-0] => (item=enable_magnum_True) 2026-02-08 04:53:07.956936 | orchestrator | ok: [testbed-node-1] => (item=enable_magnum_True) 2026-02-08 04:53:07.956944 | orchestrator | ok: [testbed-node-2] => (item=enable_magnum_True) 2026-02-08 04:53:07.956952 | orchestrator | 2026-02-08 04:53:07.956960 | orchestrator | PLAY [Apply role magnum] ******************************************************* 2026-02-08 04:53:07.956968 | orchestrator | 2026-02-08 04:53:07.956976 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2026-02-08 04:53:07.956984 | orchestrator | Sunday 08 February 2026 04:52:31 +0000 (0:00:00.484) 0:00:01.130 ******* 2026-02-08 04:53:07.956992 | orchestrator | included: /ansible/roles/magnum/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-08 04:53:07.957002 | orchestrator | 2026-02-08 04:53:07.957010 | orchestrator | TASK [service-ks-register : magnum | Creating services] ************************ 2026-02-08 
04:53:07.957018 | orchestrator | Sunday 08 February 2026 04:52:31 +0000 (0:00:00.608) 0:00:01.739 ******* 2026-02-08 04:53:07.957026 | orchestrator | changed: [testbed-node-0] => (item=magnum (container-infra)) 2026-02-08 04:53:07.957085 | orchestrator | 2026-02-08 04:53:07.957094 | orchestrator | TASK [service-ks-register : magnum | Creating endpoints] *********************** 2026-02-08 04:53:07.957103 | orchestrator | Sunday 08 February 2026 04:52:35 +0000 (0:00:03.445) 0:00:05.184 ******* 2026-02-08 04:53:07.957111 | orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api-int.testbed.osism.xyz:9511/v1 -> internal) 2026-02-08 04:53:07.957119 | orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api.testbed.osism.xyz:9511/v1 -> public) 2026-02-08 04:53:07.957127 | orchestrator | 2026-02-08 04:53:07.957135 | orchestrator | TASK [service-ks-register : magnum | Creating projects] ************************ 2026-02-08 04:53:07.957143 | orchestrator | Sunday 08 February 2026 04:52:41 +0000 (0:00:06.284) 0:00:11.469 ******* 2026-02-08 04:53:07.957151 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-02-08 04:53:07.957160 | orchestrator | 2026-02-08 04:53:07.957168 | orchestrator | TASK [service-ks-register : magnum | Creating users] *************************** 2026-02-08 04:53:07.957176 | orchestrator | Sunday 08 February 2026 04:52:45 +0000 (0:00:03.338) 0:00:14.807 ******* 2026-02-08 04:53:07.957184 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-02-08 04:53:07.957192 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service) 2026-02-08 04:53:07.957200 | orchestrator | 2026-02-08 04:53:07.957208 | orchestrator | TASK [service-ks-register : magnum | Creating roles] *************************** 2026-02-08 04:53:07.957227 | orchestrator | Sunday 08 February 2026 04:52:48 +0000 (0:00:03.900) 0:00:18.708 ******* 2026-02-08 04:53:07.957237 | orchestrator | ok: [testbed-node-0] => (item=admin) 
2026-02-08 04:53:07.957250 | orchestrator | 2026-02-08 04:53:07.957292 | orchestrator | TASK [service-ks-register : magnum | Granting user roles] ********************** 2026-02-08 04:53:07.957308 | orchestrator | Sunday 08 February 2026 04:52:52 +0000 (0:00:03.143) 0:00:21.852 ******* 2026-02-08 04:53:07.957322 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service -> admin) 2026-02-08 04:53:07.957333 | orchestrator | 2026-02-08 04:53:07.957364 | orchestrator | TASK [magnum : Creating Magnum trustee domain] ********************************* 2026-02-08 04:53:07.957379 | orchestrator | Sunday 08 February 2026 04:52:55 +0000 (0:00:03.781) 0:00:25.634 ******* 2026-02-08 04:53:07.957394 | orchestrator | changed: [testbed-node-0] 2026-02-08 04:53:07.957408 | orchestrator | 2026-02-08 04:53:07.957422 | orchestrator | TASK [magnum : Creating Magnum trustee user] *********************************** 2026-02-08 04:53:07.957435 | orchestrator | Sunday 08 February 2026 04:52:59 +0000 (0:00:03.222) 0:00:28.857 ******* 2026-02-08 04:53:07.957448 | orchestrator | changed: [testbed-node-0] 2026-02-08 04:53:07.957461 | orchestrator | 2026-02-08 04:53:07.957474 | orchestrator | TASK [magnum : Creating Magnum trustee user role] ****************************** 2026-02-08 04:53:07.957487 | orchestrator | Sunday 08 February 2026 04:53:02 +0000 (0:00:03.837) 0:00:32.694 ******* 2026-02-08 04:53:07.957500 | orchestrator | changed: [testbed-node-0] 2026-02-08 04:53:07.957513 | orchestrator | 2026-02-08 04:53:07.957525 | orchestrator | TASK [magnum : Ensuring config directories exist] ****************************** 2026-02-08 04:53:07.957538 | orchestrator | Sunday 08 February 2026 04:53:06 +0000 (0:00:03.404) 0:00:36.098 ******* 2026-02-08 04:53:07.957578 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-08 04:53:07.957596 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-08 04:53:07.957615 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': 
{'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-08 04:53:07.957654 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-02-08 04:53:07.957694 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-02-08 04:53:07.957719 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-02-08 04:53:15.404472 | orchestrator | 2026-02-08 04:53:15.404572 | orchestrator | TASK [magnum : Check if policies shall be overwritten] ************************* 2026-02-08 04:53:15.404587 | orchestrator | Sunday 08 February 2026 04:53:07 +0000 (0:00:01.612) 0:00:37.711 ******* 2026-02-08 04:53:15.404596 | orchestrator | skipping: [testbed-node-0] 2026-02-08 04:53:15.404604 | orchestrator | 2026-02-08 04:53:15.404612 | orchestrator | TASK [magnum : Set magnum policy file] ***************************************** 2026-02-08 04:53:15.404619 | orchestrator | Sunday 08 February 2026 04:53:08 +0000 (0:00:00.159) 0:00:37.871 ******* 2026-02-08 04:53:15.404627 | orchestrator | skipping: [testbed-node-0] 2026-02-08 04:53:15.404635 | orchestrator | skipping: [testbed-node-1] 2026-02-08 04:53:15.404642 | orchestrator | skipping: [testbed-node-2] 2026-02-08 04:53:15.404649 | orchestrator | 2026-02-08 04:53:15.404657 | orchestrator | 
TASK [magnum : Check if kubeconfig file is supplied] *************************** 2026-02-08 04:53:15.404665 | orchestrator | Sunday 08 February 2026 04:53:08 +0000 (0:00:00.320) 0:00:38.192 ******* 2026-02-08 04:53:15.404673 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-02-08 04:53:15.404681 | orchestrator | 2026-02-08 04:53:15.404689 | orchestrator | TASK [magnum : Copying over kubeconfig file] *********************************** 2026-02-08 04:53:15.404697 | orchestrator | Sunday 08 February 2026 04:53:09 +0000 (0:00:00.940) 0:00:39.132 ******* 2026-02-08 04:53:15.404708 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-08 04:53:15.404755 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-08 04:53:15.404762 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-08 04:53:15.404784 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-02-08 04:53:15.404793 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-02-08 04:53:15.404806 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-02-08 04:53:15.404813 | orchestrator | 2026-02-08 04:53:15.404821 | orchestrator | TASK [magnum : Set magnum kubeconfig file's path] ****************************** 2026-02-08 04:53:15.404829 
| orchestrator | Sunday 08 February 2026 04:53:11 +0000 (0:00:02.414) 0:00:41.546 ******* 2026-02-08 04:53:15.404837 | orchestrator | ok: [testbed-node-0] 2026-02-08 04:53:15.404846 | orchestrator | ok: [testbed-node-1] 2026-02-08 04:53:15.404853 | orchestrator | ok: [testbed-node-2] 2026-02-08 04:53:15.404861 | orchestrator | 2026-02-08 04:53:15.404868 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2026-02-08 04:53:15.404876 | orchestrator | Sunday 08 February 2026 04:53:12 +0000 (0:00:00.602) 0:00:42.148 ******* 2026-02-08 04:53:15.404884 | orchestrator | included: /ansible/roles/magnum/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-08 04:53:15.404892 | orchestrator | 2026-02-08 04:53:15.404899 | orchestrator | TASK [service-cert-copy : magnum | Copying over extra CA certificates] ********* 2026-02-08 04:53:15.404911 | orchestrator | Sunday 08 February 2026 04:53:13 +0000 (0:00:00.643) 0:00:42.792 ******* 2026-02-08 04:53:15.404919 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-08 04:53:15.404933 | orchestrator | 
changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-08 04:53:16.383762 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-08 04:53:16.383913 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 
'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-02-08 04:53:16.383955 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-02-08 04:53:16.383974 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', 
'', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-02-08 04:53:16.383990 | orchestrator | 2026-02-08 04:53:16.384008 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS certificate] *** 2026-02-08 04:53:16.384083 | orchestrator | Sunday 08 February 2026 04:53:15 +0000 (0:00:02.378) 0:00:45.171 ******* 2026-02-08 04:53:16.384127 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-02-08 04:53:16.384158 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-08 04:53:16.384173 | orchestrator | skipping: [testbed-node-0] 2026-02-08 04:53:16.384191 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-02-08 04:53:16.384216 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-08 04:53:16.384234 | orchestrator | skipping: [testbed-node-1] 2026-02-08 04:53:16.384251 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-02-08 04:53:16.384279 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-08 04:53:19.855085 | orchestrator | skipping: [testbed-node-2] 2026-02-08 04:53:19.855198 | orchestrator | 2026-02-08 
04:53:19.855213 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS key] ****** 2026-02-08 04:53:19.855224 | orchestrator | Sunday 08 February 2026 04:53:16 +0000 (0:00:00.976) 0:00:46.147 ******* 2026-02-08 04:53:19.855234 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-02-08 04:53:19.855246 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-08 04:53:19.855255 | 
orchestrator | skipping: [testbed-node-0] 2026-02-08 04:53:19.855279 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-02-08 04:53:19.855288 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-08 04:53:19.855313 | orchestrator | skipping: [testbed-node-1] 2026-02-08 04:53:19.855339 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 
'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-02-08 04:53:19.855348 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-08 04:53:19.855356 | orchestrator | skipping: [testbed-node-2] 2026-02-08 04:53:19.855365 | orchestrator | 2026-02-08 04:53:19.855373 | orchestrator | TASK [magnum : Copying over config.json files for services] ******************** 2026-02-08 04:53:19.855381 | orchestrator | Sunday 08 February 2026 04:53:17 +0000 (0:00:00.925) 0:00:47.073 ******* 2026-02-08 04:53:19.855394 | orchestrator | 
changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-08 04:53:19.855403 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-08 04:53:19.855424 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 
'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-08 04:53:26.042311 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-02-08 04:53:26.042455 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 
'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-02-08 04:53:26.042506 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-02-08 04:53:26.042529 | orchestrator | 2026-02-08 04:53:26.042550 | orchestrator | TASK [magnum : Copying over magnum.conf] *************************************** 2026-02-08 04:53:26.042571 | orchestrator | Sunday 08 February 2026 04:53:19 +0000 (0:00:02.545) 0:00:49.619 ******* 2026-02-08 04:53:26.042591 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-08 04:53:26.042670 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-08 04:53:26.042695 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-08 04:53:26.042717 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-02-08 04:53:26.042737 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-02-08 04:53:26.042748 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 
'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-02-08 04:53:26.042767 | orchestrator | 2026-02-08 04:53:26.042779 | orchestrator | TASK [magnum : Copying over existing policy file] ****************************** 2026-02-08 04:53:26.042790 | orchestrator | Sunday 08 February 2026 04:53:25 +0000 (0:00:05.423) 0:00:55.043 ******* 2026-02-08 04:53:26.042810 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-02-08 04:53:27.830300 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-08 04:53:27.830426 | orchestrator | skipping: [testbed-node-0] 2026-02-08 04:53:27.830481 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-02-08 04:53:27.830499 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-08 04:53:27.830539 | orchestrator | skipping: [testbed-node-1] 2026-02-08 04:53:27.830552 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-02-08 04:53:27.830594 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 
'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-08 04:53:27.830613 | orchestrator | skipping: [testbed-node-2] 2026-02-08 04:53:27.830632 | orchestrator | 2026-02-08 04:53:27.830653 | orchestrator | TASK [magnum : Check magnum containers] **************************************** 2026-02-08 04:53:27.830674 | orchestrator | Sunday 08 February 2026 04:53:26 +0000 (0:00:00.771) 0:00:55.814 ******* 2026-02-08 04:53:27.830694 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-08 04:53:27.830724 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 
'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-08 04:53:27.830748 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-08 04:53:27.830759 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 
'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-02-08 04:53:27.830781 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-02-08 04:54:17.955225 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': 
'30'}}}) 2026-02-08 04:54:17.955302 | orchestrator | 2026-02-08 04:54:17.955309 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2026-02-08 04:54:17.955314 | orchestrator | Sunday 08 February 2026 04:53:27 +0000 (0:00:01.779) 0:00:57.594 ******* 2026-02-08 04:54:17.955318 | orchestrator | skipping: [testbed-node-0] 2026-02-08 04:54:17.955326 | orchestrator | skipping: [testbed-node-1] 2026-02-08 04:54:17.955332 | orchestrator | skipping: [testbed-node-2] 2026-02-08 04:54:17.955338 | orchestrator | 2026-02-08 04:54:17.955344 | orchestrator | TASK [magnum : Creating Magnum database] *************************************** 2026-02-08 04:54:17.955374 | orchestrator | Sunday 08 February 2026 04:53:28 +0000 (0:00:00.560) 0:00:58.155 ******* 2026-02-08 04:54:17.955379 | orchestrator | changed: [testbed-node-0] 2026-02-08 04:54:17.955392 | orchestrator | 2026-02-08 04:54:17.955397 | orchestrator | TASK [magnum : Creating Magnum database user and setting permissions] ********** 2026-02-08 04:54:17.955400 | orchestrator | Sunday 08 February 2026 04:53:30 +0000 (0:00:01.848) 0:01:00.004 ******* 2026-02-08 04:54:17.955404 | orchestrator | changed: [testbed-node-0] 2026-02-08 04:54:17.955408 | orchestrator | 2026-02-08 04:54:17.955411 | orchestrator | TASK [magnum : Running Magnum bootstrap container] ***************************** 2026-02-08 04:54:17.955415 | orchestrator | Sunday 08 February 2026 04:53:32 +0000 (0:00:01.956) 0:01:01.960 ******* 2026-02-08 04:54:17.955419 | orchestrator | changed: [testbed-node-0] 2026-02-08 04:54:17.955423 | orchestrator | 2026-02-08 04:54:17.955426 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2026-02-08 04:54:17.955430 | orchestrator | Sunday 08 February 2026 04:53:47 +0000 (0:00:15.503) 0:01:17.464 ******* 2026-02-08 04:54:17.955434 | orchestrator | 2026-02-08 04:54:17.955437 | orchestrator | TASK [magnum : Flush handlers] 
************************************************* 2026-02-08 04:54:17.955441 | orchestrator | Sunday 08 February 2026 04:53:47 +0000 (0:00:00.076) 0:01:17.540 ******* 2026-02-08 04:54:17.955445 | orchestrator | 2026-02-08 04:54:17.955448 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2026-02-08 04:54:17.955452 | orchestrator | Sunday 08 February 2026 04:53:47 +0000 (0:00:00.075) 0:01:17.616 ******* 2026-02-08 04:54:17.955456 | orchestrator | 2026-02-08 04:54:17.955459 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-api container] ************************ 2026-02-08 04:54:17.955463 | orchestrator | Sunday 08 February 2026 04:53:47 +0000 (0:00:00.074) 0:01:17.691 ******* 2026-02-08 04:54:17.955467 | orchestrator | changed: [testbed-node-0] 2026-02-08 04:54:17.955471 | orchestrator | changed: [testbed-node-2] 2026-02-08 04:54:17.955474 | orchestrator | changed: [testbed-node-1] 2026-02-08 04:54:17.955478 | orchestrator | 2026-02-08 04:54:17.955482 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-conductor container] ****************** 2026-02-08 04:54:17.955486 | orchestrator | Sunday 08 February 2026 04:54:06 +0000 (0:00:18.926) 0:01:36.617 ******* 2026-02-08 04:54:17.955489 | orchestrator | changed: [testbed-node-0] 2026-02-08 04:54:17.955493 | orchestrator | changed: [testbed-node-1] 2026-02-08 04:54:17.955497 | orchestrator | changed: [testbed-node-2] 2026-02-08 04:54:17.955500 | orchestrator | 2026-02-08 04:54:17.955504 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-08 04:54:17.955521 | orchestrator | testbed-node-0 : ok=26  changed=18  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-02-08 04:54:17.955526 | orchestrator | testbed-node-1 : ok=13  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-02-08 04:54:17.955530 | orchestrator | testbed-node-2 : ok=13  changed=8  unreachable=0 failed=0 skipped=5  
rescued=0 ignored=0 2026-02-08 04:54:17.955534 | orchestrator | 2026-02-08 04:54:17.955538 | orchestrator | 2026-02-08 04:54:17.955541 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-08 04:54:17.955545 | orchestrator | Sunday 08 February 2026 04:54:17 +0000 (0:00:10.703) 0:01:47.321 ******* 2026-02-08 04:54:17.955549 | orchestrator | =============================================================================== 2026-02-08 04:54:17.955553 | orchestrator | magnum : Restart magnum-api container ---------------------------------- 18.93s 2026-02-08 04:54:17.955557 | orchestrator | magnum : Running Magnum bootstrap container ---------------------------- 15.50s 2026-02-08 04:54:17.955560 | orchestrator | magnum : Restart magnum-conductor container ---------------------------- 10.70s 2026-02-08 04:54:17.955564 | orchestrator | service-ks-register : magnum | Creating endpoints ----------------------- 6.28s 2026-02-08 04:54:17.955568 | orchestrator | magnum : Copying over magnum.conf --------------------------------------- 5.42s 2026-02-08 04:54:17.955575 | orchestrator | service-ks-register : magnum | Creating users --------------------------- 3.90s 2026-02-08 04:54:17.955579 | orchestrator | magnum : Creating Magnum trustee user ----------------------------------- 3.84s 2026-02-08 04:54:17.955594 | orchestrator | service-ks-register : magnum | Granting user roles ---------------------- 3.78s 2026-02-08 04:54:17.955599 | orchestrator | service-ks-register : magnum | Creating services ------------------------ 3.45s 2026-02-08 04:54:17.955603 | orchestrator | magnum : Creating Magnum trustee user role ------------------------------ 3.40s 2026-02-08 04:54:17.955606 | orchestrator | service-ks-register : magnum | Creating projects ------------------------ 3.34s 2026-02-08 04:54:17.955610 | orchestrator | magnum : Creating Magnum trustee domain --------------------------------- 3.22s 2026-02-08 04:54:17.955614 | 
orchestrator | service-ks-register : magnum | Creating roles --------------------------- 3.14s 2026-02-08 04:54:17.955618 | orchestrator | magnum : Copying over config.json files for services -------------------- 2.55s 2026-02-08 04:54:17.955621 | orchestrator | magnum : Copying over kubeconfig file ----------------------------------- 2.41s 2026-02-08 04:54:17.955625 | orchestrator | service-cert-copy : magnum | Copying over extra CA certificates --------- 2.38s 2026-02-08 04:54:17.955629 | orchestrator | magnum : Creating Magnum database user and setting permissions ---------- 1.96s 2026-02-08 04:54:17.955632 | orchestrator | magnum : Creating Magnum database --------------------------------------- 1.85s 2026-02-08 04:54:17.955636 | orchestrator | magnum : Check magnum containers ---------------------------------------- 1.78s 2026-02-08 04:54:17.955640 | orchestrator | magnum : Ensuring config directories exist ------------------------------ 1.61s 2026-02-08 04:54:18.725029 | orchestrator | ok: Runtime: 1:42:21.990638 2026-02-08 04:54:18.982156 | 2026-02-08 04:54:18.982383 | TASK [Deploy in a nutshell] 2026-02-08 04:54:19.518945 | orchestrator | skipping: Conditional result was False 2026-02-08 04:54:19.544493 | 2026-02-08 04:54:19.544668 | TASK [Bootstrap services] 2026-02-08 04:54:20.253691 | orchestrator | 2026-02-08 04:54:20.253853 | orchestrator | # BOOTSTRAP 2026-02-08 04:54:20.253874 | orchestrator | 2026-02-08 04:54:20.253884 | orchestrator | + set -e 2026-02-08 04:54:20.253895 | orchestrator | + echo 2026-02-08 04:54:20.253907 | orchestrator | + echo '# BOOTSTRAP' 2026-02-08 04:54:20.253922 | orchestrator | + echo 2026-02-08 04:54:20.253954 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap-services.sh 2026-02-08 04:54:20.263915 | orchestrator | + set -e 2026-02-08 04:54:20.264015 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap/300-openstack.sh 2026-02-08 04:54:22.561196 | orchestrator | 2026-02-08 04:54:22 | INFO  | It takes a 
moment until task dbebb0d4-1ede-47a5-a66d-bd60516dab4b (flavor-manager) has been started and output is visible here. 2026-02-08 04:54:30.524059 | orchestrator | 2026-02-08 04:54:26 | INFO  | Flavor SCS-1L-1 created 2026-02-08 04:54:30.524150 | orchestrator | 2026-02-08 04:54:26 | INFO  | Flavor SCS-1L-1-5 created 2026-02-08 04:54:30.524158 | orchestrator | 2026-02-08 04:54:26 | INFO  | Flavor SCS-1V-2 created 2026-02-08 04:54:30.524163 | orchestrator | 2026-02-08 04:54:27 | INFO  | Flavor SCS-1V-2-5 created 2026-02-08 04:54:30.524167 | orchestrator | 2026-02-08 04:54:27 | INFO  | Flavor SCS-1V-4 created 2026-02-08 04:54:30.524172 | orchestrator | 2026-02-08 04:54:27 | INFO  | Flavor SCS-1V-4-10 created 2026-02-08 04:54:30.524176 | orchestrator | 2026-02-08 04:54:27 | INFO  | Flavor SCS-1V-8 created 2026-02-08 04:54:30.524183 | orchestrator | 2026-02-08 04:54:27 | INFO  | Flavor SCS-1V-8-20 created 2026-02-08 04:54:30.524197 | orchestrator | 2026-02-08 04:54:27 | INFO  | Flavor SCS-2V-4 created 2026-02-08 04:54:30.524208 | orchestrator | 2026-02-08 04:54:27 | INFO  | Flavor SCS-2V-4-10 created 2026-02-08 04:54:30.524216 | orchestrator | 2026-02-08 04:54:27 | INFO  | Flavor SCS-2V-8 created 2026-02-08 04:54:30.524222 | orchestrator | 2026-02-08 04:54:28 | INFO  | Flavor SCS-2V-8-20 created 2026-02-08 04:54:30.524228 | orchestrator | 2026-02-08 04:54:28 | INFO  | Flavor SCS-2V-16 created 2026-02-08 04:54:30.524234 | orchestrator | 2026-02-08 04:54:28 | INFO  | Flavor SCS-2V-16-50 created 2026-02-08 04:54:30.524240 | orchestrator | 2026-02-08 04:54:28 | INFO  | Flavor SCS-4V-8 created 2026-02-08 04:54:30.524247 | orchestrator | 2026-02-08 04:54:28 | INFO  | Flavor SCS-4V-8-20 created 2026-02-08 04:54:30.524254 | orchestrator | 2026-02-08 04:54:28 | INFO  | Flavor SCS-4V-16 created 2026-02-08 04:54:30.524261 | orchestrator | 2026-02-08 04:54:28 | INFO  | Flavor SCS-4V-16-50 created 2026-02-08 04:54:30.524267 | orchestrator | 2026-02-08 04:54:29 | INFO  | Flavor 
SCS-4V-32 created 2026-02-08 04:54:30.524275 | orchestrator | 2026-02-08 04:54:29 | INFO  | Flavor SCS-4V-32-100 created 2026-02-08 04:54:30.524279 | orchestrator | 2026-02-08 04:54:29 | INFO  | Flavor SCS-8V-16 created 2026-02-08 04:54:30.524283 | orchestrator | 2026-02-08 04:54:29 | INFO  | Flavor SCS-8V-16-50 created 2026-02-08 04:54:30.524287 | orchestrator | 2026-02-08 04:54:29 | INFO  | Flavor SCS-8V-32 created 2026-02-08 04:54:30.524291 | orchestrator | 2026-02-08 04:54:29 | INFO  | Flavor SCS-8V-32-100 created 2026-02-08 04:54:30.524295 | orchestrator | 2026-02-08 04:54:29 | INFO  | Flavor SCS-16V-32 created 2026-02-08 04:54:30.524299 | orchestrator | 2026-02-08 04:54:29 | INFO  | Flavor SCS-16V-32-100 created 2026-02-08 04:54:30.524303 | orchestrator | 2026-02-08 04:54:30 | INFO  | Flavor SCS-2V-4-20s created 2026-02-08 04:54:30.524307 | orchestrator | 2026-02-08 04:54:30 | INFO  | Flavor SCS-4V-8-50s created 2026-02-08 04:54:30.524310 | orchestrator | 2026-02-08 04:54:30 | INFO  | Flavor SCS-8V-32-100s created 2026-02-08 04:54:33.104609 | orchestrator | 2026-02-08 04:54:33 | INFO  | Trying to run play bootstrap-basic in environment openstack 2026-02-08 04:54:43.201649 | orchestrator | 2026-02-08 04:54:43 | INFO  | Task feacc91b-187a-420f-9558-899cb4538772 (bootstrap-basic) was prepared for execution. 2026-02-08 04:54:43.201782 | orchestrator | 2026-02-08 04:54:43 | INFO  | It takes a moment until task feacc91b-187a-420f-9558-899cb4538772 (bootstrap-basic) has been started and output is visible here. 
2026-02-08 04:55:29.518690 | orchestrator | 2026-02-08 04:55:29.518807 | orchestrator | PLAY [Bootstrap basic OpenStack services] ************************************** 2026-02-08 04:55:29.518825 | orchestrator | 2026-02-08 04:55:29.518837 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-02-08 04:55:29.518848 | orchestrator | Sunday 08 February 2026 04:54:48 +0000 (0:00:00.079) 0:00:00.079 ******* 2026-02-08 04:55:29.518861 | orchestrator | ok: [localhost] 2026-02-08 04:55:29.518874 | orchestrator | 2026-02-08 04:55:29.518884 | orchestrator | TASK [Get volume type LUKS] **************************************************** 2026-02-08 04:55:29.518896 | orchestrator | Sunday 08 February 2026 04:54:50 +0000 (0:00:02.083) 0:00:02.163 ******* 2026-02-08 04:55:29.518986 | orchestrator | ok: [localhost] 2026-02-08 04:55:29.518999 | orchestrator | 2026-02-08 04:55:29.519011 | orchestrator | TASK [Create volume type LUKS] ************************************************* 2026-02-08 04:55:29.519022 | orchestrator | Sunday 08 February 2026 04:54:58 +0000 (0:00:07.693) 0:00:09.857 ******* 2026-02-08 04:55:29.519034 | orchestrator | changed: [localhost] 2026-02-08 04:55:29.519047 | orchestrator | 2026-02-08 04:55:29.519058 | orchestrator | TASK [Create public network] *************************************************** 2026-02-08 04:55:29.519070 | orchestrator | Sunday 08 February 2026 04:55:04 +0000 (0:00:06.714) 0:00:16.571 ******* 2026-02-08 04:55:29.519079 | orchestrator | changed: [localhost] 2026-02-08 04:55:29.519086 | orchestrator | 2026-02-08 04:55:29.519093 | orchestrator | TASK [Set public network to default] ******************************************* 2026-02-08 04:55:29.519100 | orchestrator | Sunday 08 February 2026 04:55:10 +0000 (0:00:05.470) 0:00:22.041 ******* 2026-02-08 04:55:29.519111 | orchestrator | changed: [localhost] 2026-02-08 04:55:29.519118 | orchestrator | 2026-02-08 04:55:29.519125 | orchestrator 
| TASK [Create public subnet] **************************************************** 2026-02-08 04:55:29.519132 | orchestrator | Sunday 08 February 2026 04:55:16 +0000 (0:00:06.769) 0:00:28.811 ******* 2026-02-08 04:55:29.519139 | orchestrator | changed: [localhost] 2026-02-08 04:55:29.519145 | orchestrator | 2026-02-08 04:55:29.519152 | orchestrator | TASK [Create default IPv4 subnet pool] ***************************************** 2026-02-08 04:55:29.519159 | orchestrator | Sunday 08 February 2026 04:55:21 +0000 (0:00:04.436) 0:00:33.248 ******* 2026-02-08 04:55:29.519166 | orchestrator | changed: [localhost] 2026-02-08 04:55:29.519172 | orchestrator | 2026-02-08 04:55:29.519179 | orchestrator | TASK [Create manager role] ***************************************************** 2026-02-08 04:55:29.519196 | orchestrator | Sunday 08 February 2026 04:55:25 +0000 (0:00:04.130) 0:00:37.378 ******* 2026-02-08 04:55:29.519203 | orchestrator | ok: [localhost] 2026-02-08 04:55:29.519210 | orchestrator | 2026-02-08 04:55:29.519218 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-08 04:55:29.519226 | orchestrator | localhost : ok=8  changed=5  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-08 04:55:29.519235 | orchestrator | 2026-02-08 04:55:29.519243 | orchestrator | 2026-02-08 04:55:29.519251 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-08 04:55:29.519259 | orchestrator | Sunday 08 February 2026 04:55:29 +0000 (0:00:03.653) 0:00:41.031 ******* 2026-02-08 04:55:29.519267 | orchestrator | =============================================================================== 2026-02-08 04:55:29.519275 | orchestrator | Get volume type LUKS ---------------------------------------------------- 7.69s 2026-02-08 04:55:29.519283 | orchestrator | Set public network to default ------------------------------------------- 6.77s 2026-02-08 04:55:29.519292 | 
orchestrator | Create volume type LUKS ------------------------------------------------- 6.71s 2026-02-08 04:55:29.519299 | orchestrator | Create public network --------------------------------------------------- 5.47s 2026-02-08 04:55:29.519329 | orchestrator | Create public subnet ---------------------------------------------------- 4.44s 2026-02-08 04:55:29.519337 | orchestrator | Create default IPv4 subnet pool ----------------------------------------- 4.13s 2026-02-08 04:55:29.519345 | orchestrator | Create manager role ----------------------------------------------------- 3.65s 2026-02-08 04:55:29.519353 | orchestrator | Gathering Facts --------------------------------------------------------- 2.08s 2026-02-08 04:55:32.227740 | orchestrator | 2026-02-08 04:55:32 | INFO  | It takes a moment until task 8dedd954-e831-4aad-b77f-4f6141cc7c00 (image-manager) has been started and output is visible here. 2026-02-08 04:56:16.978772 | orchestrator | 2026-02-08 04:55:34 | INFO  | Processing image 'Cirros 0.6.2' 2026-02-08 04:56:16.978963 | orchestrator | 2026-02-08 04:55:35 | INFO  | Tested URL https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img: 302 2026-02-08 04:56:16.978978 | orchestrator | 2026-02-08 04:55:35 | INFO  | Importing image Cirros 0.6.2 2026-02-08 04:56:16.978986 | orchestrator | 2026-02-08 04:55:35 | INFO  | Importing from URL https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img 2026-02-08 04:56:16.978994 | orchestrator | 2026-02-08 04:55:37 | INFO  | Waiting for image to leave queued state... 2026-02-08 04:56:16.979003 | orchestrator | 2026-02-08 04:55:41 | INFO  | Waiting for import to complete... 
2026-02-08 04:56:16.979010 | orchestrator | 2026-02-08 04:55:51 | INFO  | Import of 'Cirros 0.6.2' successfully completed, reloading images 2026-02-08 04:56:16.979018 | orchestrator | 2026-02-08 04:55:51 | INFO  | Checking parameters of 'Cirros 0.6.2' 2026-02-08 04:56:16.979025 | orchestrator | 2026-02-08 04:55:51 | INFO  | Setting internal_version = 0.6.2 2026-02-08 04:56:16.979033 | orchestrator | 2026-02-08 04:55:51 | INFO  | Setting image_original_user = cirros 2026-02-08 04:56:16.979041 | orchestrator | 2026-02-08 04:55:51 | INFO  | Adding tag os:cirros 2026-02-08 04:56:16.979048 | orchestrator | 2026-02-08 04:55:51 | INFO  | Setting property architecture: x86_64 2026-02-08 04:56:16.979056 | orchestrator | 2026-02-08 04:55:52 | INFO  | Setting property hw_disk_bus: scsi 2026-02-08 04:56:16.979063 | orchestrator | 2026-02-08 04:55:52 | INFO  | Setting property hw_rng_model: virtio 2026-02-08 04:56:16.979070 | orchestrator | 2026-02-08 04:55:52 | INFO  | Setting property hw_scsi_model: virtio-scsi 2026-02-08 04:56:16.979077 | orchestrator | 2026-02-08 04:55:53 | INFO  | Setting property hw_watchdog_action: reset 2026-02-08 04:56:16.979084 | orchestrator | 2026-02-08 04:55:53 | INFO  | Setting property hypervisor_type: qemu 2026-02-08 04:56:16.979091 | orchestrator | 2026-02-08 04:55:53 | INFO  | Setting property os_distro: cirros 2026-02-08 04:56:16.979097 | orchestrator | 2026-02-08 04:55:53 | INFO  | Setting property os_purpose: minimal 2026-02-08 04:56:16.979104 | orchestrator | 2026-02-08 04:55:54 | INFO  | Setting property replace_frequency: never 2026-02-08 04:56:16.979111 | orchestrator | 2026-02-08 04:55:54 | INFO  | Setting property uuid_validity: none 2026-02-08 04:56:16.979118 | orchestrator | 2026-02-08 04:55:54 | INFO  | Setting property provided_until: none 2026-02-08 04:56:16.979124 | orchestrator | 2026-02-08 04:55:54 | INFO  | Setting property image_description: Cirros 2026-02-08 04:56:16.979131 | orchestrator | 2026-02-08 04:55:55 | INFO  | 
Setting property image_name: Cirros 2026-02-08 04:56:16.979138 | orchestrator | 2026-02-08 04:55:55 | INFO  | Setting property internal_version: 0.6.2 2026-02-08 04:56:16.979145 | orchestrator | 2026-02-08 04:55:55 | INFO  | Setting property image_original_user: cirros 2026-02-08 04:56:16.979178 | orchestrator | 2026-02-08 04:55:55 | INFO  | Setting property os_version: 0.6.2 2026-02-08 04:56:16.979196 | orchestrator | 2026-02-08 04:55:56 | INFO  | Setting property image_source: https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img 2026-02-08 04:56:16.979205 | orchestrator | 2026-02-08 04:55:56 | INFO  | Setting property image_build_date: 2023-05-30 2026-02-08 04:56:16.979211 | orchestrator | 2026-02-08 04:55:56 | INFO  | Checking status of 'Cirros 0.6.2' 2026-02-08 04:56:16.979218 | orchestrator | 2026-02-08 04:55:56 | INFO  | Checking visibility of 'Cirros 0.6.2' 2026-02-08 04:56:16.979224 | orchestrator | 2026-02-08 04:55:56 | INFO  | Setting visibility of 'Cirros 0.6.2' to 'public' 2026-02-08 04:56:16.979231 | orchestrator | 2026-02-08 04:55:56 | INFO  | Processing image 'Cirros 0.6.3' 2026-02-08 04:56:16.979242 | orchestrator | 2026-02-08 04:55:57 | INFO  | Tested URL https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img: 302 2026-02-08 04:56:16.979249 | orchestrator | 2026-02-08 04:55:57 | INFO  | Importing image Cirros 0.6.3 2026-02-08 04:56:16.979256 | orchestrator | 2026-02-08 04:55:57 | INFO  | Importing from URL https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img 2026-02-08 04:56:16.979262 | orchestrator | 2026-02-08 04:55:58 | INFO  | Waiting for image to leave queued state... 2026-02-08 04:56:16.979268 | orchestrator | 2026-02-08 04:56:00 | INFO  | Waiting for import to complete... 
2026-02-08 04:56:16.979298 | orchestrator | 2026-02-08 04:56:10 | INFO  | Import of 'Cirros 0.6.3' successfully completed, reloading images
2026-02-08 04:56:16.979306 | orchestrator | 2026-02-08 04:56:11 | INFO  | Checking parameters of 'Cirros 0.6.3'
2026-02-08 04:56:16.979314 | orchestrator | 2026-02-08 04:56:11 | INFO  | Setting internal_version = 0.6.3
2026-02-08 04:56:16.979320 | orchestrator | 2026-02-08 04:56:11 | INFO  | Setting image_original_user = cirros
2026-02-08 04:56:16.979326 | orchestrator | 2026-02-08 04:56:11 | INFO  | Adding tag os:cirros
2026-02-08 04:56:16.979332 | orchestrator | 2026-02-08 04:56:11 | INFO  | Setting property architecture: x86_64
2026-02-08 04:56:16.979338 | orchestrator | 2026-02-08 04:56:11 | INFO  | Setting property hw_disk_bus: scsi
2026-02-08 04:56:16.979344 | orchestrator | 2026-02-08 04:56:12 | INFO  | Setting property hw_rng_model: virtio
2026-02-08 04:56:16.979350 | orchestrator | 2026-02-08 04:56:12 | INFO  | Setting property hw_scsi_model: virtio-scsi
2026-02-08 04:56:16.979356 | orchestrator | 2026-02-08 04:56:12 | INFO  | Setting property hw_watchdog_action: reset
2026-02-08 04:56:16.979362 | orchestrator | 2026-02-08 04:56:12 | INFO  | Setting property hypervisor_type: qemu
2026-02-08 04:56:16.979369 | orchestrator | 2026-02-08 04:56:13 | INFO  | Setting property os_distro: cirros
2026-02-08 04:56:16.979375 | orchestrator | 2026-02-08 04:56:13 | INFO  | Setting property os_purpose: minimal
2026-02-08 04:56:16.979381 | orchestrator | 2026-02-08 04:56:13 | INFO  | Setting property replace_frequency: never
2026-02-08 04:56:16.979387 | orchestrator | 2026-02-08 04:56:13 | INFO  | Setting property uuid_validity: none
2026-02-08 04:56:16.979392 | orchestrator | 2026-02-08 04:56:14 | INFO  | Setting property provided_until: none
2026-02-08 04:56:16.979398 | orchestrator | 2026-02-08 04:56:14 | INFO  | Setting property image_description: Cirros
2026-02-08 04:56:16.979404 | orchestrator | 2026-02-08 04:56:14 | INFO  | Setting property image_name: Cirros
2026-02-08 04:56:16.979411 | orchestrator | 2026-02-08 04:56:14 | INFO  | Setting property internal_version: 0.6.3
2026-02-08 04:56:16.979424 | orchestrator | 2026-02-08 04:56:15 | INFO  | Setting property image_original_user: cirros
2026-02-08 04:56:16.979430 | orchestrator | 2026-02-08 04:56:15 | INFO  | Setting property os_version: 0.6.3
2026-02-08 04:56:16.979436 | orchestrator | 2026-02-08 04:56:15 | INFO  | Setting property image_source: https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img
2026-02-08 04:56:16.979442 | orchestrator | 2026-02-08 04:56:15 | INFO  | Setting property image_build_date: 2024-09-26
2026-02-08 04:56:16.979448 | orchestrator | 2026-02-08 04:56:16 | INFO  | Checking status of 'Cirros 0.6.3'
2026-02-08 04:56:16.979454 | orchestrator | 2026-02-08 04:56:16 | INFO  | Checking visibility of 'Cirros 0.6.3'
2026-02-08 04:56:16.979461 | orchestrator | 2026-02-08 04:56:16 | INFO  | Setting visibility of 'Cirros 0.6.3' to 'public'
2026-02-08 04:56:17.417271 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap/301-openstack-octavia-amhpora-image.sh
2026-02-08 04:56:20.150588 | orchestrator | 2026-02-08 04:56:20 | INFO  | date: 2026-02-08
2026-02-08 04:56:20.150710 | orchestrator | 2026-02-08 04:56:20 | INFO  | image: octavia-amphora-haproxy-2024.2.20260208.qcow2
2026-02-08 04:56:20.150755 | orchestrator | 2026-02-08 04:56:20 | INFO  | url: https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20260208.qcow2
2026-02-08 04:56:20.150770 | orchestrator | 2026-02-08 04:56:20 | INFO  | checksum_url: https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20260208.qcow2.CHECKSUM
2026-02-08 04:56:20.676914 | orchestrator | 2026-02-08 04:56:20 | INFO  | checksum: c54f97feec6815d93c05e25f32663766b4c5aac199ddf832e2ae81966289f839
2026-02-08 04:56:20.766127 | orchestrator | 2026-02-08 04:56:20 | INFO  | It takes a moment until task ff2216da-8d49-461a-9a9f-57a6e278ef02 (image-manager) has been started and output is visible here.
2026-02-08 04:57:42.861455 | orchestrator | 2026-02-08 04:56:23 | INFO  | Processing image 'OpenStack Octavia Amphora 2026-02-08'
2026-02-08 04:57:42.861545 | orchestrator | 2026-02-08 04:56:23 | INFO  | Tested URL https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20260208.qcow2: 200
2026-02-08 04:57:42.861555 | orchestrator | 2026-02-08 04:56:23 | INFO  | Importing image OpenStack Octavia Amphora 2026-02-08
2026-02-08 04:57:42.861561 | orchestrator | 2026-02-08 04:56:23 | INFO  | Importing from URL https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20260208.qcow2
2026-02-08 04:57:42.861566 | orchestrator | 2026-02-08 04:56:24 | INFO  | Waiting for image to leave queued state...
2026-02-08 04:57:42.861571 | orchestrator | 2026-02-08 04:56:27 | INFO  | Waiting for import to complete...
2026-02-08 04:57:42.861576 | orchestrator | 2026-02-08 04:56:37 | INFO  | Waiting for import to complete...
2026-02-08 04:57:42.861581 | orchestrator | 2026-02-08 04:56:47 | INFO  | Waiting for import to complete...
2026-02-08 04:57:42.861585 | orchestrator | 2026-02-08 04:56:57 | INFO  | Waiting for import to complete...
2026-02-08 04:57:42.861591 | orchestrator | 2026-02-08 04:57:07 | INFO  | Waiting for import to complete...
2026-02-08 04:57:42.861596 | orchestrator | 2026-02-08 04:57:17 | INFO  | Waiting for import to complete...
2026-02-08 04:57:42.861600 | orchestrator | 2026-02-08 04:57:27 | INFO  | Waiting for import to complete...
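The amphora bootstrap step above logs a `checksum_url` (a `.CHECKSUM` file) alongside the image URL and then reports a single sha256 value. A minimal sketch of that verification step, using local temp files in place of the real download (the file contents and names here are illustrative, not taken from the job):

```shell
#!/usr/bin/env bash
# Sketch of checksum verification as suggested by the log above:
# fetch image + .CHECKSUM file, then compare sha256 digests.
set -e

# Stand-in for the downloaded qcow2 image (illustrative content).
img=$(mktemp)
printf 'fake image payload' > "$img"

# Stand-in for the fetched .CHECKSUM file; the real script would
# download it from checksum_url. sha256sum output format: "<hash>  <file>".
chk=$(mktemp)
sha256sum "$img" > "$chk"

expected=$(awk '{ print $1 }' "$chk")
actual=$(sha256sum "$img" | awk '{ print $1 }')

if [ "$actual" = "$expected" ]; then
    echo "checksum OK: $actual"
else
    echo "checksum mismatch: expected $expected, got $actual" >&2
    exit 1
fi
```

This is only an assumption about how the bootstrap script uses the `.CHECKSUM` file; the actual implementation lives in `/opt/configuration/scripts/bootstrap/` in the testbed repository.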
2026-02-08 04:57:42.861605 | orchestrator | 2026-02-08 04:57:37 | INFO  | Import of 'OpenStack Octavia Amphora 2026-02-08' successfully completed, reloading images
2026-02-08 04:57:42.861610 | orchestrator | 2026-02-08 04:57:38 | INFO  | Checking parameters of 'OpenStack Octavia Amphora 2026-02-08'
2026-02-08 04:57:42.861632 | orchestrator | 2026-02-08 04:57:38 | INFO  | Setting internal_version = 2026-02-08
2026-02-08 04:57:42.861637 | orchestrator | 2026-02-08 04:57:38 | INFO  | Setting image_original_user = ubuntu
2026-02-08 04:57:42.861641 | orchestrator | 2026-02-08 04:57:38 | INFO  | Adding tag amphora
2026-02-08 04:57:42.861646 | orchestrator | 2026-02-08 04:57:38 | INFO  | Adding tag os:ubuntu
2026-02-08 04:57:42.861650 | orchestrator | 2026-02-08 04:57:38 | INFO  | Setting property architecture: x86_64
2026-02-08 04:57:42.861654 | orchestrator | 2026-02-08 04:57:38 | INFO  | Setting property hw_disk_bus: scsi
2026-02-08 04:57:42.861659 | orchestrator | 2026-02-08 04:57:39 | INFO  | Setting property hw_rng_model: virtio
2026-02-08 04:57:42.861663 | orchestrator | 2026-02-08 04:57:39 | INFO  | Setting property hw_scsi_model: virtio-scsi
2026-02-08 04:57:42.861667 | orchestrator | 2026-02-08 04:57:39 | INFO  | Setting property hw_watchdog_action: reset
2026-02-08 04:57:42.861671 | orchestrator | 2026-02-08 04:57:39 | INFO  | Setting property hypervisor_type: qemu
2026-02-08 04:57:42.861677 | orchestrator | 2026-02-08 04:57:39 | INFO  | Setting property os_distro: ubuntu
2026-02-08 04:57:42.861684 | orchestrator | 2026-02-08 04:57:40 | INFO  | Setting property replace_frequency: quarterly
2026-02-08 04:57:42.861691 | orchestrator | 2026-02-08 04:57:40 | INFO  | Setting property uuid_validity: last-1
2026-02-08 04:57:42.861698 | orchestrator | 2026-02-08 04:57:40 | INFO  | Setting property provided_until: none
2026-02-08 04:57:42.861705 | orchestrator | 2026-02-08 04:57:40 | INFO  | Setting property os_purpose: network
2026-02-08 04:57:42.861724 | orchestrator | 2026-02-08 04:57:41 | INFO  | Setting property image_description: OpenStack Octavia Amphora
2026-02-08 04:57:42.861731 | orchestrator | 2026-02-08 04:57:41 | INFO  | Setting property image_name: OpenStack Octavia Amphora
2026-02-08 04:57:42.861737 | orchestrator | 2026-02-08 04:57:41 | INFO  | Setting property internal_version: 2026-02-08
2026-02-08 04:57:42.861743 | orchestrator | 2026-02-08 04:57:41 | INFO  | Setting property image_original_user: ubuntu
2026-02-08 04:57:42.861750 | orchestrator | 2026-02-08 04:57:41 | INFO  | Setting property os_version: 2026-02-08
2026-02-08 04:57:42.861756 | orchestrator | 2026-02-08 04:57:42 | INFO  | Setting property image_source: https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20260208.qcow2
2026-02-08 04:57:42.861763 | orchestrator | 2026-02-08 04:57:42 | INFO  | Setting property image_build_date: 2026-02-08
2026-02-08 04:57:42.861769 | orchestrator | 2026-02-08 04:57:42 | INFO  | Checking status of 'OpenStack Octavia Amphora 2026-02-08'
2026-02-08 04:57:42.861857 | orchestrator | 2026-02-08 04:57:42 | INFO  | Checking visibility of 'OpenStack Octavia Amphora 2026-02-08'
2026-02-08 04:57:42.861868 | orchestrator | 2026-02-08 04:57:42 | INFO  | Processing image 'Cirros 0.6.3' (removal candidate)
2026-02-08 04:57:42.861874 | orchestrator | 2026-02-08 04:57:42 | WARNING  | No image definition found for 'Cirros 0.6.3', image will be ignored
2026-02-08 04:57:42.861882 | orchestrator | 2026-02-08 04:57:42 | INFO  | Processing image 'Cirros 0.6.2' (removal candidate)
2026-02-08 04:57:42.861889 | orchestrator | 2026-02-08 04:57:42 | WARNING  | No image definition found for 'Cirros 0.6.2', image will be ignored
2026-02-08 04:57:43.736673 | orchestrator | ok: Runtime: 0:03:23.421507
2026-02-08 04:57:43.760394 |
2026-02-08 04:57:43.760630 | TASK [Run checks]
2026-02-08 04:57:44.507140 | orchestrator | + set -e
2026-02-08 04:57:44.507391 | orchestrator | + source /opt/configuration/scripts/include.sh
2026-02-08 04:57:44.507411 | orchestrator | ++ export INTERACTIVE=false
2026-02-08 04:57:44.507424 | orchestrator | ++ INTERACTIVE=false
2026-02-08 04:57:44.507433 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2026-02-08 04:57:44.507441 | orchestrator | ++ OSISM_APPLY_RETRY=1
2026-02-08 04:57:44.507450 | orchestrator | + source /opt/configuration/scripts/manager-version.sh
2026-02-08 04:57:44.508335 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml
2026-02-08 04:57:44.512473 | orchestrator |
2026-02-08 04:57:44.512537 | orchestrator | # CHECK
2026-02-08 04:57:44.512548 | orchestrator |
2026-02-08 04:57:44.512556 | orchestrator | ++ export MANAGER_VERSION=9.5.0
2026-02-08 04:57:44.512567 | orchestrator | ++ MANAGER_VERSION=9.5.0
2026-02-08 04:57:44.512574 | orchestrator | + echo
2026-02-08 04:57:44.512581 | orchestrator | + echo '# CHECK'
2026-02-08 04:57:44.512588 | orchestrator | + echo
2026-02-08 04:57:44.512601 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2
2026-02-08 04:57:44.512615 | orchestrator | ++ semver 9.5.0 5.0.0
2026-02-08 04:57:44.558279 | orchestrator |
2026-02-08 04:57:44.558380 | orchestrator | ## Containers @ testbed-manager
2026-02-08 04:57:44.558391 | orchestrator |
2026-02-08 04:57:44.558400 | orchestrator | + [[ 1 -eq -1 ]]
2026-02-08 04:57:44.558406 | orchestrator | + echo
2026-02-08 04:57:44.558413 | orchestrator | + echo '## Containers @ testbed-manager'
2026-02-08 04:57:44.558420 | orchestrator | + echo
2026-02-08 04:57:44.558427 | orchestrator | + osism container testbed-manager ps
2026-02-08 04:57:46.815765 | orchestrator | 2026-02-08 04:57:46 | INFO  | Creating empty known_hosts file: /share/known_hosts
2026-02-08 04:57:47.219100 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
2026-02-08 04:57:47.219214 | orchestrator | 001845aab149 registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130 "dumb-init --single-…" 9 minutes ago Up 9 minutes prometheus_blackbox_exporter
2026-02-08 04:57:47.219231 | orchestrator | 62a92fb8c51a registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130 "dumb-init --single-…" 9 minutes ago Up 9 minutes prometheus_alertmanager
2026-02-08 04:57:47.219240 | orchestrator | e2d07b416629 registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130 "dumb-init --single-…" 9 minutes ago Up 9 minutes prometheus_cadvisor
2026-02-08 04:57:47.219247 | orchestrator | a18eb9321248 registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes prometheus_node_exporter
2026-02-08 04:57:47.219253 | orchestrator | de925332d1a5 registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes prometheus_server
2026-02-08 04:57:47.219264 | orchestrator | 8685bcc0d433 registry.osism.tech/osism/cephclient:18.2.7 "/usr/bin/dumb-init …" 58 minutes ago Up 58 minutes cephclient
2026-02-08 04:57:47.219270 | orchestrator | ee6f185be91b registry.osism.tech/kolla/release/cron:3.0.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours cron
2026-02-08 04:57:47.219277 | orchestrator | 70a427d8f6e9 registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours kolla_toolbox
2026-02-08 04:57:47.219318 | orchestrator | 4e65684b8ead registry.osism.tech/kolla/release/fluentd:5.0.8.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours fluentd
2026-02-08 04:57:47.219326 | orchestrator | d7c0ea0a03b1 registry.osism.tech/osism/openstackclient:2024.2 "/usr/bin/dumb-init …" 2 hours ago Up 2 hours openstackclient
2026-02-08 04:57:47.219333 | orchestrator | 5a5314010be4 phpmyadmin/phpmyadmin:5.2 "/docker-entrypoint.…" 2 hours ago Up 2 hours (healthy) 80/tcp phpmyadmin
2026-02-08 04:57:47.219340 | orchestrator | 17f3c2eb7cdd registry.osism.tech/osism/homer:v25.10.1 "/bin/sh /entrypoint…" 2 hours ago Up 2 hours (healthy) 8080/tcp homer
2026-02-08 04:57:47.219347 | orchestrator | 479978cc4b79 registry.osism.tech/osism/cgit:1.2.3 "httpd-foreground" 2 hours ago Up 2 hours 80/tcp cgit
2026-02-08 04:57:47.219353 | orchestrator | 5e688c5e3b44 registry.osism.tech/dockerhub/ubuntu/squid:6.1-23.10_beta "entrypoint.sh -f /e…" 2 hours ago Up 2 hours (healthy) 192.168.16.5:3128->3128/tcp squid
2026-02-08 04:57:47.219360 | orchestrator | 45bcd2077e49 registry.osism.tech/osism/inventory-reconciler:0.20251130.0 "/sbin/tini -- /entr…" 2 hours ago Up 2 hours (healthy) manager-inventory_reconciler-1
2026-02-08 04:57:47.219373 | orchestrator | 2f6577739e72 registry.osism.tech/osism/osism-ansible:0.20251130.0 "/entrypoint.sh osis…" 2 hours ago Up 2 hours (healthy) osism-ansible
2026-02-08 04:57:47.219379 | orchestrator | 389966ad5974 registry.osism.tech/osism/ceph-ansible:0.20251130.0 "/entrypoint.sh osis…" 2 hours ago Up 2 hours (healthy) ceph-ansible
2026-02-08 04:57:47.219386 | orchestrator | 8de6fb87a352 registry.osism.tech/osism/kolla-ansible:0.20251130.0 "/entrypoint.sh osis…" 2 hours ago Up 2 hours (healthy) kolla-ansible
2026-02-08 04:57:47.219392 | orchestrator | a56137f1f5d5 registry.osism.tech/osism/osism-kubernetes:0.20251130.0 "/entrypoint.sh osis…" 2 hours ago Up 2 hours (healthy) osism-kubernetes
2026-02-08 04:57:47.219399 | orchestrator | 5ec60116876d registry.osism.tech/osism/ara-server:1.7.3 "sh -c '/wait && /ru…" 2 hours ago Up 2 hours (healthy) 8000/tcp manager-ara-server-1
2026-02-08 04:57:47.219406 | orchestrator | 62160d3dd5a1 registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" 2 hours ago Up 2 hours (healthy) 192.168.16.5:8000->8000/tcp manager-api-1
2026-02-08 04:57:47.219412 | orchestrator | b729dd70845f registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" 2 hours ago Up 2 hours (healthy) manager-listener-1
2026-02-08 04:57:47.219424 | orchestrator | addf47e18ee9 registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" 2 hours ago Up 2 hours (healthy) manager-openstack-1
2026-02-08 04:57:47.219439 | orchestrator | fe8d9a5453d8 registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- sleep…" 2 hours ago Up 2 hours (healthy) osismclient
2026-02-08 04:57:47.219445 | orchestrator | 7d368828d1fb registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" 2 hours ago Up 2 hours (healthy) manager-beat-1
2026-02-08 04:57:47.219452 | orchestrator | bd64e850d36e registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" 2 hours ago Up 2 hours (healthy) manager-flower-1
2026-02-08 04:57:47.219459 | orchestrator | cb93dc8ae0cd registry.osism.tech/osism/osism-frontend:0.20251130.1 "docker-entrypoint.s…" 2 hours ago Up 2 hours 192.168.16.5:3000->3000/tcp osism-frontend
2026-02-08 04:57:47.219465 | orchestrator | cd8251429804 registry.osism.tech/dockerhub/library/redis:7.4.7-alpine "docker-entrypoint.s…" 2 hours ago Up 2 hours (healthy) 6379/tcp manager-redis-1
2026-02-08 04:57:47.219472 | orchestrator | 14b076be739c registry.osism.tech/dockerhub/library/mariadb:11.8.4 "docker-entrypoint.s…" 2 hours ago Up 2 hours (healthy) 3306/tcp manager-mariadb-1
2026-02-08 04:57:47.219483 | orchestrator | 4d7207333a27 registry.osism.tech/dockerhub/library/traefik:v3.5.0 "/entrypoint.sh trae…" 2 hours ago Up 2 hours (healthy) 192.168.16.5:80->80/tcp, 192.168.16.5:443->443/tcp, 192.168.16.5:8122->8080/tcp traefik
2026-02-08 04:57:47.687173 | orchestrator |
2026-02-08 04:57:47.687306 | orchestrator | ## Images @ testbed-manager
2026-02-08 04:57:47.687333 | orchestrator |
2026-02-08 04:57:47.687353 | orchestrator | + echo
2026-02-08 04:57:47.687372 | orchestrator | + echo '## Images @ testbed-manager'
2026-02-08 04:57:47.687392 | orchestrator | + echo
2026-02-08 04:57:47.687415 | orchestrator | + osism container testbed-manager images
2026-02-08 04:57:50.404019 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE
2026-02-08 04:57:50.404114 | orchestrator | registry.osism.tech/osism/openstackclient 2024.2 054f4f3700ba 25 hours ago 238MB
2026-02-08 04:57:50.404121 | orchestrator | registry.osism.tech/dockerhub/library/redis 7.4.7-alpine e08bd8d5a677 11 days ago 41.4MB
2026-02-08 04:57:50.404125 | orchestrator | registry.osism.tech/osism/homer v25.10.1 ea34b371c716 2 months ago 11.5MB
2026-02-08 04:57:50.404130 | orchestrator | registry.osism.tech/osism/kolla-ansible 0.20251130.0 0f140ec71e5f 2 months ago 608MB
2026-02-08 04:57:50.404134 | orchestrator | registry.osism.tech/kolla/release/kolla-toolbox 19.7.1.20251130 314d22193a72 2 months ago 669MB
2026-02-08 04:57:50.405428 | orchestrator | registry.osism.tech/kolla/release/cron 3.0.20251130 e1e0428a330f 2 months ago 265MB
2026-02-08 04:57:50.405488 | orchestrator | registry.osism.tech/kolla/release/fluentd 5.0.8.20251130 fb3c98fc8cae 2 months ago 578MB
2026-02-08 04:57:50.405497 | orchestrator | registry.osism.tech/kolla/release/prometheus-blackbox-exporter 0.25.0.20251130 7bbb4f6f4831 2 months ago 308MB
2026-02-08 04:57:50.405503 | orchestrator | registry.osism.tech/kolla/release/prometheus-cadvisor 0.49.2.20251130 591cbce746c1 2 months ago 357MB
2026-02-08 04:57:50.405530 | orchestrator | registry.osism.tech/kolla/release/prometheus-alertmanager 0.28.0.20251130 ba994ea4acda 2 months ago 404MB
2026-02-08 04:57:50.405537 | orchestrator | registry.osism.tech/kolla/release/prometheus-v2-server 2.55.1.20251130 56b43d5c716a 2 months ago 839MB
2026-02-08 04:57:50.405544 | orchestrator | registry.osism.tech/kolla/release/prometheus-node-exporter 1.8.2.20251130 c1ab1d07f7ef 2 months ago 305MB
2026-02-08 04:57:50.405550 | orchestrator | registry.osism.tech/osism/inventory-reconciler 0.20251130.0 1bfc1dadeee1 2 months ago 330MB
2026-02-08 04:57:50.405556 | orchestrator | registry.osism.tech/osism/osism-ansible 0.20251130.0 42988b2d229c 2 months ago 613MB
2026-02-08 04:57:50.405562 | orchestrator | registry.osism.tech/osism/ceph-ansible 0.20251130.0 a212d8ca4a50 2 months ago 560MB
2026-02-08 04:57:50.405573 | orchestrator | registry.osism.tech/osism/osism-kubernetes 0.20251130.0 9beff03cb77b 2 months ago 1.23GB
2026-02-08 04:57:50.405582 | orchestrator | registry.osism.tech/osism/osism 0.20251130.1 95213af683ec 2 months ago 383MB
2026-02-08 04:57:50.405614 | orchestrator | registry.osism.tech/osism/osism-frontend 0.20251130.1 2cb6e7609620 2 months ago 238MB
2026-02-08 04:57:50.405622 | orchestrator | registry.osism.tech/dockerhub/library/mariadb 11.8.4 70745dd8f1d0 2 months ago 334MB
2026-02-08 04:57:50.405628 | orchestrator | phpmyadmin/phpmyadmin 5.2 e66b1f5a8c58 4 months ago 742MB
2026-02-08 04:57:50.405635 | orchestrator | registry.osism.tech/osism/ara-server 1.7.3 d1b687333f2f 5 months ago 275MB
2026-02-08 04:57:50.405641 | orchestrator | registry.osism.tech/dockerhub/library/traefik v3.5.0 11cc59587f6a 6 months ago 226MB
2026-02-08 04:57:50.405649 | orchestrator | registry.osism.tech/osism/cephclient 18.2.7 ae977aa79826 9 months ago 453MB
2026-02-08 04:57:50.405655 | orchestrator | registry.osism.tech/dockerhub/ubuntu/squid 6.1-23.10_beta 34b6bbbcf74b 20 months ago 146MB
2026-02-08 04:57:50.405662 | orchestrator | registry.osism.tech/osism/cgit 1.2.3 16e7285642b1 2 years ago 545MB
2026-02-08 04:57:50.816247 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2
2026-02-08 04:57:50.816743 | orchestrator | ++ semver 9.5.0 5.0.0
2026-02-08 04:57:50.872479 | orchestrator |
2026-02-08 04:57:50.872558 | orchestrator | ## Containers @ testbed-node-0
2026-02-08 04:57:50.872565 | orchestrator |
2026-02-08 04:57:50.872570 | orchestrator | + [[ 1 -eq -1 ]]
2026-02-08 04:57:50.872574 | orchestrator | + echo
2026-02-08 04:57:50.872579 | orchestrator | + echo '## Containers @ testbed-node-0'
2026-02-08 04:57:50.872584 | orchestrator | + echo
2026-02-08 04:57:50.872589 | orchestrator | + osism container testbed-node-0 ps
2026-02-08 04:57:53.375442 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
2026-02-08 04:57:53.375537 | orchestrator | b7898f61780c registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130 "dumb-init --single-…" 3 minutes ago Up 3 minutes (healthy) magnum_conductor
2026-02-08 04:57:53.375565 | orchestrator | 455a1e3d0752 registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) magnum_api
2026-02-08 04:57:53.375576 | orchestrator | 845ca5acaee5 registry.osism.tech/kolla/release/grafana:12.3.0.20251130 "dumb-init --single-…" 7 minutes ago Up 7 minutes grafana
2026-02-08 04:57:53.375586 | orchestrator | ba10836370f9 registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130 "dumb-init --single-…" 9 minutes ago Up 9 minutes prometheus_elasticsearch_exporter
2026-02-08 04:57:53.375616 | orchestrator | 7dbfb8be6944 registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130 "dumb-init --single-…" 9 minutes ago Up 9 minutes prometheus_cadvisor
2026-02-08 04:57:53.375626 | orchestrator | f12e88f03510 registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes prometheus_memcached_exporter
2026-02-08 04:57:53.375640 | orchestrator | 09bf979c179e registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes prometheus_mysqld_exporter
2026-02-08 04:57:53.375649 | orchestrator | 037b4b38a92a registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes prometheus_node_exporter
2026-02-08 04:57:53.375658 | orchestrator | 981ffcb63f9e registry.osism.tech/kolla/release/manila-share:19.1.1.20251130 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) manila_share
2026-02-08 04:57:53.375668 | orchestrator | 6f533d4d83e6 registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) manila_scheduler
2026-02-08 04:57:53.375683 | orchestrator | 60790e99f238 registry.osism.tech/kolla/release/manila-data:19.1.1.20251130 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) manila_data
2026-02-08 04:57:53.375698 | orchestrator | e24968c5778d registry.osism.tech/kolla/release/manila-api:19.1.1.20251130 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) manila_api
2026-02-08 04:57:53.375712 | orchestrator | 8b9fd3fb22cf registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) aodh_notifier
2026-02-08 04:57:53.375726 | orchestrator | ba41fb950899 registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) aodh_listener
2026-02-08 04:57:53.375741 | orchestrator | 61d845462867 registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) aodh_evaluator
2026-02-08 04:57:53.375755 | orchestrator | 09273d579443 registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) aodh_api
2026-02-08 04:57:53.375769 | orchestrator | 81b790007de5 registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130 "dumb-init --single-…" 20 minutes ago Up 20 minutes ceilometer_central
2026-02-08 04:57:53.375835 | orchestrator | 6028286f6498 registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130 "dumb-init --single-…" 20 minutes ago Up 20 minutes (healthy) ceilometer_notification
2026-02-08 04:57:53.375852 | orchestrator | fb1af4824607 registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130 "dumb-init --single-…" 21 minutes ago Up 21 minutes (healthy) octavia_worker
2026-02-08 04:57:53.375895 | orchestrator | cdf98f928386 registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130 "dumb-init --single-…" 22 minutes ago Up 21 minutes (healthy) octavia_housekeeping
2026-02-08 04:57:53.375912 | orchestrator | a3c850ea345b registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) octavia_health_manager
2026-02-08 04:57:53.375926 | orchestrator | b617dc13cf1f registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130 "dumb-init --single-…" 22 minutes ago Up 22 minutes octavia_driver_agent
2026-02-08 04:57:53.375954 | orchestrator | d22f59517a4e registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) octavia_api
2026-02-08 04:57:53.375969 | orchestrator | 0c522aa5225d registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130 "dumb-init --single-…" 26 minutes ago Up 26 minutes (healthy) designate_worker
2026-02-08 04:57:53.375985 | orchestrator | 55bad4074abc registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130 "dumb-init --single-…" 26 minutes ago Up 26 minutes (healthy) designate_mdns
2026-02-08 04:57:53.375999 | orchestrator | 028b7272ec9b registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130 "dumb-init --single-…" 26 minutes ago Up 26 minutes (healthy) designate_producer
2026-02-08 04:57:53.376008 | orchestrator | 4e95ae0f690d registry.osism.tech/kolla/release/designate-central:19.0.1.20251130 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) designate_central
2026-02-08 04:57:53.376017 | orchestrator | c4ff940a9888 registry.osism.tech/kolla/release/designate-api:19.0.1.20251130 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) designate_api
2026-02-08 04:57:53.376026 | orchestrator | 8dfbdda78558 registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) designate_backend_bind9
2026-02-08 04:57:53.376035 | orchestrator | e533cc14cae3 registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) barbican_worker
2026-02-08 04:57:53.376048 | orchestrator | 202327965bf9 registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) barbican_keystone_listener
2026-02-08 04:57:53.376066 | orchestrator | 95e49a569554 registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) barbican_api
2026-02-08 04:57:53.376088 | orchestrator | 35756b88ae41 registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130 "dumb-init --single-…" 31 minutes ago Up 31 minutes (healthy) cinder_backup
2026-02-08 04:57:53.376102 | orchestrator | 1d3d59ba89ce registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130 "dumb-init --single-…" 31 minutes ago Up 31 minutes (healthy) cinder_volume
2026-02-08 04:57:53.376117 | orchestrator | 76639f321d94 registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130 "dumb-init --single-…" 31 minutes ago Up 31 minutes (healthy) cinder_scheduler
2026-02-08 04:57:53.376131 | orchestrator | 5abe066f3558 registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130 "dumb-init --single-…" 32 minutes ago Up 32 minutes (healthy) cinder_api
2026-02-08 04:57:53.376146 | orchestrator | d33aa3f42534 registry.osism.tech/kolla/release/glance-api:29.0.1.20251130 "dumb-init --single-…" 33 minutes ago Up 33 minutes (healthy) glance_api
2026-02-08 04:57:53.376160 | orchestrator | 8317a7d47552 registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130 "dumb-init --single-…" 36 minutes ago Up 36 minutes (healthy) skyline_console
2026-02-08 04:57:53.376172 | orchestrator | 5836076e82c5 registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130 "dumb-init --single-…" 36 minutes ago Up 36 minutes (healthy) skyline_apiserver
2026-02-08 04:57:53.376193 | orchestrator | 3d09c079a570 registry.osism.tech/kolla/release/horizon:25.1.2.20251130 "dumb-init --single-…" 38 minutes ago Up 38 minutes (healthy) horizon
2026-02-08 04:57:53.376220 | orchestrator | 04da4def5cca registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130 "dumb-init --single-…" 42 minutes ago Up 42 minutes (healthy) nova_novncproxy
2026-02-08 04:57:53.376234 | orchestrator | 125cc7470b28 registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130 "dumb-init --single-…" 42 minutes ago Up 42 minutes (healthy) nova_conductor
2026-02-08 04:57:53.376257 | orchestrator | 51e3f348a995 registry.osism.tech/kolla/release/nova-api:30.2.1.20251130 "dumb-init --single-…" 44 minutes ago Up 44 minutes (healthy) nova_api
2026-02-08 04:57:53.376272 | orchestrator | 7a6570bbe30d registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130 "dumb-init --single-…" 44 minutes ago Up 44 minutes (healthy) nova_scheduler
2026-02-08 04:57:53.376287 | orchestrator | d1f430de5b67 registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130 "dumb-init --single-…" 49 minutes ago Up 49 minutes (healthy) neutron_server
2026-02-08 04:57:53.376302 | orchestrator | 99477994f69b registry.osism.tech/kolla/release/placement-api:12.0.1.20251130 "dumb-init --single-…" 52 minutes ago Up 52 minutes (healthy) placement_api
2026-02-08 04:57:53.376316 | orchestrator | dfbd4cd3069f registry.osism.tech/kolla/release/keystone:26.0.1.20251130 "dumb-init --single-…" 54 minutes ago Up 54 minutes (healthy) keystone
2026-02-08 04:57:53.376332 | orchestrator | ffd42bb3847d registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130 "dumb-init --single-…" 54 minutes ago Up 54 minutes (healthy) keystone_fernet
2026-02-08 04:57:53.376341 | orchestrator | 98a69e4b4019 registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130 "dumb-init --single-…" 55 minutes ago Up 55 minutes (healthy) keystone_ssh
2026-02-08 04:57:53.376350 | orchestrator | 53d946e776ca registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mgr -…" 57 minutes ago Up 57 minutes ceph-mgr-testbed-node-0
2026-02-08 04:57:53.376359 | orchestrator | c7394c5c8887 registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-crash" About an hour ago Up About an hour ceph-crash-testbed-node-0
2026-02-08 04:57:53.376368 | orchestrator | 814c3ba0cfa5 registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mon -…" About an hour ago Up About an hour ceph-mon-testbed-node-0
2026-02-08 04:57:53.376377 | orchestrator | 32aa35748e33 registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour ovn_northd
2026-02-08 04:57:53.376386 | orchestrator | a44cf05c890d registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour ovn_sb_db
2026-02-08 04:57:53.376395 | orchestrator | 6a338ea5deb5 registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour ovn_nb_db
2026-02-08 04:57:53.376403 | orchestrator | 455a92682deb registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour ovn_controller
2026-02-08 04:57:53.376417 | orchestrator | bc8e6ebfebe4 registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) openvswitch_vswitchd
2026-02-08 04:57:53.376426 | orchestrator | 85515702e441 registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) openvswitch_db
2026-02-08 04:57:53.376442 | orchestrator | fbd405457c0c registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) rabbitmq
2026-02-08 04:57:53.376457 | orchestrator | 92ba5c59e897 registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130 "dumb-init -- kolla_…" About an hour ago Up About an hour (healthy) mariadb
2026-02-08 04:57:53.376466 | orchestrator | f51ad70868f7 registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) redis_sentinel
2026-02-08 04:57:53.376475 | orchestrator | d83fd90dd881 registry.osism.tech/kolla/release/redis:7.0.15.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) redis
2026-02-08 04:57:53.376484 | orchestrator | 9b002cc227fb registry.osism.tech/kolla/release/memcached:1.6.24.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) memcached
2026-02-08 04:57:53.376493 | orchestrator | 567cea24e3e7 registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) opensearch_dashboards
2026-02-08 04:57:53.376502 | orchestrator | 9501de91ef9c registry.osism.tech/kolla/release/opensearch:2.19.4.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) opensearch
2026-02-08 04:57:53.376511 | orchestrator | f173bd5a3116 registry.osism.tech/kolla/release/keepalived:2.2.8.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours keepalived
2026-02-08 04:57:53.376520 | orchestrator | 9004de482011 registry.osism.tech/kolla/release/proxysql:3.0.3.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours (healthy) proxysql
2026-02-08 04:57:53.376529 | orchestrator | 52f0b1cbdd66 registry.osism.tech/kolla/release/haproxy:2.8.15.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours (healthy) haproxy
2026-02-08 04:57:53.376540 | orchestrator | 5d0a25572bbb registry.osism.tech/kolla/release/cron:3.0.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours cron
2026-02-08 04:57:53.376550 | orchestrator | 16718b2e38f3 registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130 "dumb-init --single-…" 2 hours ago Up 2
hours kolla_toolbox 2026-02-08 04:57:53.376560 | orchestrator | a11235eebc1d registry.osism.tech/kolla/release/fluentd:5.0.8.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours fluentd 2026-02-08 04:57:53.743619 | orchestrator | 2026-02-08 04:57:53.743728 | orchestrator | ## Images @ testbed-node-0 2026-02-08 04:57:53.743745 | orchestrator | 2026-02-08 04:57:53.743758 | orchestrator | + echo 2026-02-08 04:57:53.743770 | orchestrator | + echo '## Images @ testbed-node-0' 2026-02-08 04:57:53.743838 | orchestrator | + echo 2026-02-08 04:57:53.743861 | orchestrator | + osism container testbed-node-0 images 2026-02-08 04:57:56.313968 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE 2026-02-08 04:57:56.314167 | orchestrator | registry.osism.tech/kolla/release/rabbitmq 3.13.7.20251130 618df24dfbf4 2 months ago 322MB 2026-02-08 04:57:56.314191 | orchestrator | registry.osism.tech/kolla/release/memcached 1.6.24.20251130 8a9865997707 2 months ago 266MB 2026-02-08 04:57:56.314206 | orchestrator | registry.osism.tech/kolla/release/opensearch 2.19.4.20251130 dc62f23331d2 2 months ago 1.56GB 2026-02-08 04:57:56.314220 | orchestrator | registry.osism.tech/kolla/release/keepalived 2.2.8.20251130 94862d07fc5a 2 months ago 276MB 2026-02-08 04:57:56.314259 | orchestrator | registry.osism.tech/kolla/release/opensearch-dashboards 2.19.4.20251130 3b3613dd9b1a 2 months ago 1.53GB 2026-02-08 04:57:56.314275 | orchestrator | registry.osism.tech/kolla/release/kolla-toolbox 19.7.1.20251130 314d22193a72 2 months ago 669MB 2026-02-08 04:57:56.314283 | orchestrator | registry.osism.tech/kolla/release/cron 3.0.20251130 e1e0428a330f 2 months ago 265MB 2026-02-08 04:57:56.314292 | orchestrator | registry.osism.tech/kolla/release/grafana 12.3.0.20251130 6eb3b7b1dbf2 2 months ago 1.02GB 2026-02-08 04:57:56.314300 | orchestrator | registry.osism.tech/kolla/release/proxysql 3.0.3.20251130 2c7177938c0e 2 months ago 412MB 2026-02-08 04:57:56.314308 | orchestrator | 
registry.osism.tech/kolla/release/haproxy 2.8.15.20251130 6d4c583df983 2 months ago 274MB 2026-02-08 04:57:56.314316 | orchestrator | registry.osism.tech/kolla/release/fluentd 5.0.8.20251130 fb3c98fc8cae 2 months ago 578MB 2026-02-08 04:57:56.314324 | orchestrator | registry.osism.tech/kolla/release/redis 7.0.15.20251130 5548a8ce5b5c 2 months ago 273MB 2026-02-08 04:57:56.314332 | orchestrator | registry.osism.tech/kolla/release/redis-sentinel 7.0.15.20251130 62d0b016058f 2 months ago 273MB 2026-02-08 04:57:56.314340 | orchestrator | registry.osism.tech/kolla/release/mariadb-server 10.11.15.20251130 77db67eebcc3 2 months ago 452MB 2026-02-08 04:57:56.314348 | orchestrator | registry.osism.tech/kolla/release/horizon 25.1.2.20251130 d7257ed845e9 2 months ago 1.15GB 2026-02-08 04:57:56.314356 | orchestrator | registry.osism.tech/kolla/release/prometheus-mysqld-exporter 0.16.0.20251130 aedc672fb472 2 months ago 301MB 2026-02-08 04:57:56.314364 | orchestrator | registry.osism.tech/kolla/release/prometheus-memcached-exporter 0.15.0.20251130 7b077076926d 2 months ago 298MB 2026-02-08 04:57:56.314372 | orchestrator | registry.osism.tech/kolla/release/prometheus-cadvisor 0.49.2.20251130 591cbce746c1 2 months ago 357MB 2026-02-08 04:57:56.314380 | orchestrator | registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter 1.8.0.20251130 bcaaf5d64345 2 months ago 292MB 2026-02-08 04:57:56.314388 | orchestrator | registry.osism.tech/kolla/release/prometheus-node-exporter 1.8.2.20251130 c1ab1d07f7ef 2 months ago 305MB 2026-02-08 04:57:56.314396 | orchestrator | registry.osism.tech/kolla/release/openvswitch-db-server 3.4.3.20251130 3e6f3fe8823c 2 months ago 279MB 2026-02-08 04:57:56.314404 | orchestrator | registry.osism.tech/kolla/release/openvswitch-vswitchd 3.4.3.20251130 ad8bb4636454 2 months ago 279MB 2026-02-08 04:57:56.314412 | orchestrator | registry.osism.tech/kolla/release/placement-api 12.0.1.20251130 20317ff6dfb9 2 months ago 975MB 2026-02-08 04:57:56.314419 | 
orchestrator | registry.osism.tech/kolla/release/nova-novncproxy 30.2.1.20251130 99323056afa4 2 months ago 1.37GB 2026-02-08 04:57:56.314431 | orchestrator | registry.osism.tech/kolla/release/nova-conductor 30.2.1.20251130 92609e648215 2 months ago 1.21GB 2026-02-08 04:57:56.314449 | orchestrator | registry.osism.tech/kolla/release/nova-api 30.2.1.20251130 2d78e7fdfb9a 2 months ago 1.21GB 2026-02-08 04:57:56.314466 | orchestrator | registry.osism.tech/kolla/release/nova-scheduler 30.2.1.20251130 4c3c59730530 2 months ago 1.21GB 2026-02-08 04:57:56.314487 | orchestrator | registry.osism.tech/kolla/release/ceilometer-central 23.0.2.20251130 37cb6975d4a5 2 months ago 976MB 2026-02-08 04:57:56.314500 | orchestrator | registry.osism.tech/kolla/release/ceilometer-notification 23.0.2.20251130 bb2927b293dc 2 months ago 976MB 2026-02-08 04:57:56.314512 | orchestrator | registry.osism.tech/kolla/release/magnum-api 19.0.1.20251130 a85fdbb4bbba 2 months ago 1.13GB 2026-02-08 04:57:56.314534 | orchestrator | registry.osism.tech/kolla/release/magnum-conductor 19.0.1.20251130 a98ee1099aad 2 months ago 1.24GB 2026-02-08 04:57:56.314568 | orchestrator | registry.osism.tech/kolla/release/manila-share 19.1.1.20251130 df44f491f2c1 2 months ago 1.22GB 2026-02-08 04:57:56.314582 | orchestrator | registry.osism.tech/kolla/release/manila-data 19.1.1.20251130 cd8b74c8a47a 2 months ago 1.06GB 2026-02-08 04:57:56.314596 | orchestrator | registry.osism.tech/kolla/release/manila-api 19.1.1.20251130 654f9bd3c940 2 months ago 1.05GB 2026-02-08 04:57:56.314609 | orchestrator | registry.osism.tech/kolla/release/manila-scheduler 19.1.1.20251130 e0864fa03a78 2 months ago 1.05GB 2026-02-08 04:57:56.314620 | orchestrator | registry.osism.tech/kolla/release/aodh-listener 19.0.0.20251130 1e68c23a9d38 2 months ago 974MB 2026-02-08 04:57:56.314632 | orchestrator | registry.osism.tech/kolla/release/aodh-evaluator 19.0.0.20251130 1726a7592f93 2 months ago 974MB 2026-02-08 04:57:56.314644 | orchestrator | 
registry.osism.tech/kolla/release/aodh-notifier 19.0.0.20251130 abbd6e9f87e2 2 months ago 974MB 2026-02-08 04:57:56.314657 | orchestrator | registry.osism.tech/kolla/release/aodh-api 19.0.0.20251130 82a64f1d056d 2 months ago 973MB 2026-02-08 04:57:56.314669 | orchestrator | registry.osism.tech/kolla/release/barbican-worker 19.0.1.20251130 2cef5d51872b 2 months ago 991MB 2026-02-08 04:57:56.314681 | orchestrator | registry.osism.tech/kolla/release/barbican-keystone-listener 19.0.1.20251130 bfcd8631a126 2 months ago 991MB 2026-02-08 04:57:56.314695 | orchestrator | registry.osism.tech/kolla/release/barbican-api 19.0.1.20251130 9195ddc3e4c5 2 months ago 990MB 2026-02-08 04:57:56.314708 | orchestrator | registry.osism.tech/kolla/release/keystone 26.0.1.20251130 6c1543e94c06 2 months ago 1.09GB 2026-02-08 04:57:56.314722 | orchestrator | registry.osism.tech/kolla/release/keystone-fernet 26.0.1.20251130 36669c355898 2 months ago 1.04GB 2026-02-08 04:57:56.314737 | orchestrator | registry.osism.tech/kolla/release/keystone-ssh 26.0.1.20251130 e002cffc8eb8 2 months ago 1.04GB 2026-02-08 04:57:56.314751 | orchestrator | registry.osism.tech/kolla/release/octavia-health-manager 15.0.2.20251130 059dc6d4a159 2 months ago 1.03GB 2026-02-08 04:57:56.314764 | orchestrator | registry.osism.tech/kolla/release/octavia-housekeeping 15.0.2.20251130 c9059accdc4a 2 months ago 1.03GB 2026-02-08 04:57:56.314804 | orchestrator | registry.osism.tech/kolla/release/octavia-api 15.0.2.20251130 9375641bed7a 2 months ago 1.05GB 2026-02-08 04:57:56.314819 | orchestrator | registry.osism.tech/kolla/release/octavia-worker 15.0.2.20251130 708f50e37fa7 2 months ago 1.03GB 2026-02-08 04:57:56.314832 | orchestrator | registry.osism.tech/kolla/release/octavia-driver-agent 15.0.2.20251130 045f928baedc 2 months ago 1.05GB 2026-02-08 04:57:56.314844 | orchestrator | registry.osism.tech/kolla/release/neutron-server 25.2.2.20251130 fa71fe0a109e 2 months ago 1.16GB 2026-02-08 04:57:56.314857 | orchestrator | 
registry.osism.tech/kolla/release/glance-api 29.0.1.20251130 b1fcfbc49057 2 months ago 1.1GB 2026-02-08 04:57:56.314871 | orchestrator | registry.osism.tech/kolla/release/designate-central 19.0.1.20251130 00b6af03994a 2 months ago 983MB 2026-02-08 04:57:56.314885 | orchestrator | registry.osism.tech/kolla/release/designate-worker 19.0.1.20251130 18bc80370e46 2 months ago 989MB 2026-02-08 04:57:56.314899 | orchestrator | registry.osism.tech/kolla/release/designate-producer 19.0.1.20251130 eac4506bf51f 2 months ago 984MB 2026-02-08 04:57:56.314913 | orchestrator | registry.osism.tech/kolla/release/designate-api 19.0.1.20251130 ad5d5cd1392a 2 months ago 984MB 2026-02-08 04:57:56.314938 | orchestrator | registry.osism.tech/kolla/release/designate-backend-bind9 19.0.1.20251130 4e19a1dc9c8a 2 months ago 989MB 2026-02-08 04:57:56.314947 | orchestrator | registry.osism.tech/kolla/release/designate-mdns 19.0.1.20251130 4ad9e0017d6e 2 months ago 984MB 2026-02-08 04:57:56.315016 | orchestrator | registry.osism.tech/kolla/release/skyline-console 5.0.1.20251130 20430a0acd38 2 months ago 1.05GB 2026-02-08 04:57:56.315027 | orchestrator | registry.osism.tech/kolla/release/skyline-apiserver 5.0.1.20251130 20bbe1600b66 2 months ago 990MB 2026-02-08 04:57:56.315036 | orchestrator | registry.osism.tech/kolla/release/cinder-volume 25.3.1.20251130 ab7ee3c06214 2 months ago 1.72GB 2026-02-08 04:57:56.315044 | orchestrator | registry.osism.tech/kolla/release/cinder-scheduler 25.3.1.20251130 47d31cd2c25d 2 months ago 1.4GB 2026-02-08 04:57:56.315052 | orchestrator | registry.osism.tech/kolla/release/cinder-backup 25.3.1.20251130 c09074b62f18 2 months ago 1.41GB 2026-02-08 04:57:56.315072 | orchestrator | registry.osism.tech/kolla/release/cinder-api 25.3.1.20251130 ceaaac81e8af 2 months ago 1.4GB 2026-02-08 04:57:56.315105 | orchestrator | registry.osism.tech/kolla/release/ovn-nb-db-server 24.9.3.20251130 e52b6499881a 2 months ago 840MB 2026-02-08 04:57:56.315113 | orchestrator | 
registry.osism.tech/kolla/release/ovn-controller 24.9.3.20251130 fcd09e53d925 2 months ago 840MB 2026-02-08 04:57:56.315121 | orchestrator | registry.osism.tech/kolla/release/ovn-sb-db-server 24.9.3.20251130 2fcefdb5b030 2 months ago 840MB 2026-02-08 04:57:56.315129 | orchestrator | registry.osism.tech/kolla/release/ovn-northd 24.9.3.20251130 948e5d22de86 2 months ago 840MB 2026-02-08 04:57:56.315138 | orchestrator | registry.osism.tech/osism/ceph-daemon 18.2.7 5f92363b1f93 9 months ago 1.27GB 2026-02-08 04:57:56.881169 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2 2026-02-08 04:57:56.882153 | orchestrator | ++ semver 9.5.0 5.0.0 2026-02-08 04:57:56.936646 | orchestrator | 2026-02-08 04:57:56.936749 | orchestrator | ## Containers @ testbed-node-1 2026-02-08 04:57:56.936770 | orchestrator | 2026-02-08 04:57:56.936831 | orchestrator | + [[ 1 -eq -1 ]] 2026-02-08 04:57:56.936842 | orchestrator | + echo 2026-02-08 04:57:56.936854 | orchestrator | + echo '## Containers @ testbed-node-1' 2026-02-08 04:57:56.936866 | orchestrator | + echo 2026-02-08 04:57:56.936878 | orchestrator | + osism container testbed-node-1 ps 2026-02-08 04:57:59.474326 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 2026-02-08 04:57:59.474421 | orchestrator | 060d457802ff registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130 "dumb-init --single-…" 3 minutes ago Up 3 minutes (healthy) magnum_conductor 2026-02-08 04:57:59.474434 | orchestrator | 861be786436a registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130 "dumb-init --single-…" 3 minutes ago Up 3 minutes (healthy) magnum_api 2026-02-08 04:57:59.474445 | orchestrator | 8c27d8acc4dc registry.osism.tech/kolla/release/grafana:12.3.0.20251130 "dumb-init --single-…" 6 minutes ago Up 6 minutes grafana 2026-02-08 04:57:59.474454 | orchestrator | 4facad351b9d registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130 "dumb-init --single-…" 9 
minutes ago Up 9 minutes prometheus_elasticsearch_exporter 2026-02-08 04:57:59.474465 | orchestrator | b29db575ac98 registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130 "dumb-init --single-…" 9 minutes ago Up 9 minutes prometheus_cadvisor 2026-02-08 04:57:59.474474 | orchestrator | a221be7b7c83 registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes prometheus_memcached_exporter 2026-02-08 04:57:59.474503 | orchestrator | 583f2fc628e6 registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes prometheus_mysqld_exporter 2026-02-08 04:57:59.474512 | orchestrator | 8dd18379dd64 registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes prometheus_node_exporter 2026-02-08 04:57:59.474521 | orchestrator | a660c3ada9b7 registry.osism.tech/kolla/release/manila-share:19.1.1.20251130 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) manila_share 2026-02-08 04:57:59.474531 | orchestrator | 4ad5e03a9264 registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) manila_scheduler 2026-02-08 04:57:59.474540 | orchestrator | 6df9c3e7235c registry.osism.tech/kolla/release/manila-data:19.1.1.20251130 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) manila_data 2026-02-08 04:57:59.474549 | orchestrator | 6cff1bb1cc48 registry.osism.tech/kolla/release/manila-api:19.1.1.20251130 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) manila_api 2026-02-08 04:57:59.474573 | orchestrator | c08064bc5091 registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) aodh_notifier 2026-02-08 04:57:59.474586 | orchestrator | 381346297e69 registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130 
"dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) aodh_listener 2026-02-08 04:57:59.474600 | orchestrator | 579c01ca5799 registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) aodh_evaluator 2026-02-08 04:57:59.474622 | orchestrator | 235c2b004ef6 registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) aodh_api 2026-02-08 04:57:59.474640 | orchestrator | 4c60ca5c8126 registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130 "dumb-init --single-…" 20 minutes ago Up 20 minutes ceilometer_central 2026-02-08 04:57:59.474653 | orchestrator | 43f1ed510092 registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130 "dumb-init --single-…" 20 minutes ago Up 20 minutes (healthy) ceilometer_notification 2026-02-08 04:57:59.474669 | orchestrator | bda7dccaadcd registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130 "dumb-init --single-…" 21 minutes ago Up 21 minutes (healthy) octavia_worker 2026-02-08 04:57:59.474702 | orchestrator | d953b0682ac9 registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) octavia_housekeeping 2026-02-08 04:57:59.474713 | orchestrator | c4b0276f73f2 registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) octavia_health_manager 2026-02-08 04:57:59.474722 | orchestrator | 7ee97a619e1f registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130 "dumb-init --single-…" 22 minutes ago Up 22 minutes octavia_driver_agent 2026-02-08 04:57:59.474731 | orchestrator | 3a0e772948a3 registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) octavia_api 2026-02-08 04:57:59.474739 | orchestrator | 1345f509eadd 
registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130 "dumb-init --single-…" 26 minutes ago Up 26 minutes (healthy) designate_worker 2026-02-08 04:57:59.474757 | orchestrator | 9daaca5281b6 registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130 "dumb-init --single-…" 26 minutes ago Up 26 minutes (healthy) designate_mdns 2026-02-08 04:57:59.474766 | orchestrator | a23a8ef6845f registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) designate_producer 2026-02-08 04:57:59.474800 | orchestrator | 2335ca6cdefe registry.osism.tech/kolla/release/designate-central:19.0.1.20251130 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) designate_central 2026-02-08 04:57:59.474813 | orchestrator | 68499cb33845 registry.osism.tech/kolla/release/designate-api:19.0.1.20251130 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) designate_api 2026-02-08 04:57:59.474822 | orchestrator | bde8e53397e2 registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) designate_backend_bind9 2026-02-08 04:57:59.474831 | orchestrator | 16529aafba2a registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) barbican_worker 2026-02-08 04:57:59.474840 | orchestrator | b8b8f9331ee3 registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) barbican_keystone_listener 2026-02-08 04:57:59.474849 | orchestrator | ac5fa5aaf399 registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) barbican_api 2026-02-08 04:57:59.474858 | orchestrator | 3fcdadc44c1a registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130 "dumb-init --single-…" 31 minutes ago Up 31 minutes (healthy) cinder_backup 2026-02-08 
04:57:59.474867 | orchestrator | 06150327c972 registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130 "dumb-init --single-…" 31 minutes ago Up 31 minutes (healthy) cinder_volume 2026-02-08 04:57:59.474876 | orchestrator | a28632034868 registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130 "dumb-init --single-…" 31 minutes ago Up 31 minutes (healthy) cinder_scheduler 2026-02-08 04:57:59.474887 | orchestrator | 0dd1c1278124 registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130 "dumb-init --single-…" 31 minutes ago Up 31 minutes (healthy) cinder_api 2026-02-08 04:57:59.474904 | orchestrator | 074231e1e192 registry.osism.tech/kolla/release/glance-api:29.0.1.20251130 "dumb-init --single-…" 33 minutes ago Up 33 minutes (healthy) glance_api 2026-02-08 04:57:59.474914 | orchestrator | c2903f942c61 registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130 "dumb-init --single-…" 36 minutes ago Up 36 minutes (healthy) skyline_console 2026-02-08 04:57:59.474980 | orchestrator | 08dd3bd1090c registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130 "dumb-init --single-…" 36 minutes ago Up 36 minutes (healthy) skyline_apiserver 2026-02-08 04:57:59.475000 | orchestrator | b6ddf1c28a4a registry.osism.tech/kolla/release/horizon:25.1.2.20251130 "dumb-init --single-…" 38 minutes ago Up 38 minutes (healthy) horizon 2026-02-08 04:57:59.475011 | orchestrator | c19c9b464a17 registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130 "dumb-init --single-…" 42 minutes ago Up 42 minutes (healthy) nova_novncproxy 2026-02-08 04:57:59.475028 | orchestrator | f92f856c1fbf registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130 "dumb-init --single-…" 42 minutes ago Up 42 minutes (healthy) nova_conductor 2026-02-08 04:57:59.475039 | orchestrator | c32db8713f1f registry.osism.tech/kolla/release/nova-api:30.2.1.20251130 "dumb-init --single-…" 44 minutes ago Up 44 minutes (healthy) nova_api 2026-02-08 04:57:59.475050 | orchestrator | 
889aac486c15 registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130 "dumb-init --single-…" 44 minutes ago Up 44 minutes (healthy) nova_scheduler 2026-02-08 04:57:59.475060 | orchestrator | aa63d0782b46 registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130 "dumb-init --single-…" 49 minutes ago Up 49 minutes (healthy) neutron_server 2026-02-08 04:57:59.475071 | orchestrator | 8ed99de5cb98 registry.osism.tech/kolla/release/placement-api:12.0.1.20251130 "dumb-init --single-…" 52 minutes ago Up 52 minutes (healthy) placement_api 2026-02-08 04:57:59.475081 | orchestrator | f1815c9e45f7 registry.osism.tech/kolla/release/keystone:26.0.1.20251130 "dumb-init --single-…" 54 minutes ago Up 54 minutes (healthy) keystone 2026-02-08 04:57:59.475091 | orchestrator | b5e993ae44ca registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130 "dumb-init --single-…" 54 minutes ago Up 54 minutes (healthy) keystone_fernet 2026-02-08 04:57:59.475101 | orchestrator | 0c4ba96618dd registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130 "dumb-init --single-…" 55 minutes ago Up 55 minutes (healthy) keystone_ssh 2026-02-08 04:57:59.475111 | orchestrator | a5ef97625219 registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mgr -…" 57 minutes ago Up 57 minutes ceph-mgr-testbed-node-1 2026-02-08 04:57:59.475122 | orchestrator | 2270d2394ae8 registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-crash" About an hour ago Up About an hour ceph-crash-testbed-node-1 2026-02-08 04:57:59.475139 | orchestrator | d108d94fad94 registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mon -…" About an hour ago Up About an hour ceph-mon-testbed-node-1 2026-02-08 04:57:59.475153 | orchestrator | 533c15970f48 registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour ovn_northd 2026-02-08 04:57:59.475206 | orchestrator | df24516c9b94 registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130 
"dumb-init --single-…" About an hour ago Up About an hour ovn_sb_db 2026-02-08 04:57:59.475224 | orchestrator | 058bdc9d3c86 registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour ovn_nb_db 2026-02-08 04:57:59.475239 | orchestrator | 8ddbd8aaaaf8 registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour ovn_controller 2026-02-08 04:57:59.475249 | orchestrator | e1206a24c0ab registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) openvswitch_vswitchd 2026-02-08 04:57:59.475351 | orchestrator | e107f4633cf7 registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) openvswitch_db 2026-02-08 04:57:59.475370 | orchestrator | 4f5a3f3ad14d registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) rabbitmq 2026-02-08 04:57:59.475397 | orchestrator | 34dcc569130f registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130 "dumb-init -- kolla_…" About an hour ago Up About an hour (healthy) mariadb 2026-02-08 04:57:59.475408 | orchestrator | 0e68626f9d89 registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) redis_sentinel 2026-02-08 04:57:59.475416 | orchestrator | edc795540d69 registry.osism.tech/kolla/release/redis:7.0.15.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) redis 2026-02-08 04:57:59.475425 | orchestrator | 591b6bacc523 registry.osism.tech/kolla/release/memcached:1.6.24.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) memcached 2026-02-08 04:57:59.475434 | orchestrator | a24155ad39a1 registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130 
"dumb-init --single-…" About an hour ago Up About an hour (healthy) opensearch_dashboards 2026-02-08 04:57:59.475450 | orchestrator | ddc6c42b7591 registry.osism.tech/kolla/release/opensearch:2.19.4.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) opensearch 2026-02-08 04:57:59.475459 | orchestrator | d1140a807a91 registry.osism.tech/kolla/release/keepalived:2.2.8.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours keepalived 2026-02-08 04:57:59.475468 | orchestrator | 893d88463035 registry.osism.tech/kolla/release/proxysql:3.0.3.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours (healthy) proxysql 2026-02-08 04:57:59.475476 | orchestrator | 13b39cdf85c8 registry.osism.tech/kolla/release/haproxy:2.8.15.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours (healthy) haproxy 2026-02-08 04:57:59.475485 | orchestrator | 7f1011473cfa registry.osism.tech/kolla/release/cron:3.0.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours cron 2026-02-08 04:57:59.475499 | orchestrator | 35b25de8905c registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours kolla_toolbox 2026-02-08 04:57:59.475508 | orchestrator | 3638ec5d0c3c registry.osism.tech/kolla/release/fluentd:5.0.8.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours fluentd 2026-02-08 04:57:59.885188 | orchestrator | 2026-02-08 04:57:59.885288 | orchestrator | ## Images @ testbed-node-1 2026-02-08 04:57:59.885303 | orchestrator | 2026-02-08 04:57:59.885316 | orchestrator | + echo 2026-02-08 04:57:59.885327 | orchestrator | + echo '## Images @ testbed-node-1' 2026-02-08 04:57:59.885340 | orchestrator | + echo 2026-02-08 04:57:59.885351 | orchestrator | + osism container testbed-node-1 images 2026-02-08 04:58:02.562526 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE 2026-02-08 04:58:02.562634 | orchestrator | registry.osism.tech/kolla/release/rabbitmq 3.13.7.20251130 618df24dfbf4 2 months ago 322MB 2026-02-08 04:58:02.562652 | 
orchestrator | registry.osism.tech/kolla/release/memcached 1.6.24.20251130 8a9865997707 2 months ago 266MB 2026-02-08 04:58:02.562661 | orchestrator | registry.osism.tech/kolla/release/opensearch 2.19.4.20251130 dc62f23331d2 2 months ago 1.56GB 2026-02-08 04:58:02.562671 | orchestrator | registry.osism.tech/kolla/release/keepalived 2.2.8.20251130 94862d07fc5a 2 months ago 276MB 2026-02-08 04:58:02.562680 | orchestrator | registry.osism.tech/kolla/release/opensearch-dashboards 2.19.4.20251130 3b3613dd9b1a 2 months ago 1.53GB 2026-02-08 04:58:02.562688 | orchestrator | registry.osism.tech/kolla/release/kolla-toolbox 19.7.1.20251130 314d22193a72 2 months ago 669MB 2026-02-08 04:58:02.562722 | orchestrator | registry.osism.tech/kolla/release/cron 3.0.20251130 e1e0428a330f 2 months ago 265MB 2026-02-08 04:58:02.562731 | orchestrator | registry.osism.tech/kolla/release/grafana 12.3.0.20251130 6eb3b7b1dbf2 2 months ago 1.02GB 2026-02-08 04:58:02.562739 | orchestrator | registry.osism.tech/kolla/release/proxysql 3.0.3.20251130 2c7177938c0e 2 months ago 412MB 2026-02-08 04:58:02.562747 | orchestrator | registry.osism.tech/kolla/release/haproxy 2.8.15.20251130 6d4c583df983 2 months ago 274MB 2026-02-08 04:58:02.562755 | orchestrator | registry.osism.tech/kolla/release/fluentd 5.0.8.20251130 fb3c98fc8cae 2 months ago 578MB 2026-02-08 04:58:02.562763 | orchestrator | registry.osism.tech/kolla/release/redis 7.0.15.20251130 5548a8ce5b5c 2 months ago 273MB 2026-02-08 04:58:02.562771 | orchestrator | registry.osism.tech/kolla/release/redis-sentinel 7.0.15.20251130 62d0b016058f 2 months ago 273MB 2026-02-08 04:58:02.562855 | orchestrator | registry.osism.tech/kolla/release/mariadb-server 10.11.15.20251130 77db67eebcc3 2 months ago 452MB 2026-02-08 04:58:02.562864 | orchestrator | registry.osism.tech/kolla/release/horizon 25.1.2.20251130 d7257ed845e9 2 months ago 1.15GB 2026-02-08 04:58:02.562872 | orchestrator | registry.osism.tech/kolla/release/prometheus-mysqld-exporter 
0.16.0.20251130 aedc672fb472 2 months ago 301MB 2026-02-08 04:58:02.562880 | orchestrator | registry.osism.tech/kolla/release/prometheus-memcached-exporter 0.15.0.20251130 7b077076926d 2 months ago 298MB 2026-02-08 04:58:02.562888 | orchestrator | registry.osism.tech/kolla/release/prometheus-cadvisor 0.49.2.20251130 591cbce746c1 2 months ago 357MB 2026-02-08 04:58:02.562896 | orchestrator | registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter 1.8.0.20251130 bcaaf5d64345 2 months ago 292MB 2026-02-08 04:58:02.562904 | orchestrator | registry.osism.tech/kolla/release/prometheus-node-exporter 1.8.2.20251130 c1ab1d07f7ef 2 months ago 305MB 2026-02-08 04:58:02.562912 | orchestrator | registry.osism.tech/kolla/release/openvswitch-db-server 3.4.3.20251130 3e6f3fe8823c 2 months ago 279MB 2026-02-08 04:58:02.562920 | orchestrator | registry.osism.tech/kolla/release/openvswitch-vswitchd 3.4.3.20251130 ad8bb4636454 2 months ago 279MB 2026-02-08 04:58:02.562928 | orchestrator | registry.osism.tech/kolla/release/placement-api 12.0.1.20251130 20317ff6dfb9 2 months ago 975MB 2026-02-08 04:58:02.562936 | orchestrator | registry.osism.tech/kolla/release/nova-novncproxy 30.2.1.20251130 99323056afa4 2 months ago 1.37GB 2026-02-08 04:58:02.562944 | orchestrator | registry.osism.tech/kolla/release/nova-conductor 30.2.1.20251130 92609e648215 2 months ago 1.21GB 2026-02-08 04:58:02.562952 | orchestrator | registry.osism.tech/kolla/release/nova-api 30.2.1.20251130 2d78e7fdfb9a 2 months ago 1.21GB 2026-02-08 04:58:02.562960 | orchestrator | registry.osism.tech/kolla/release/nova-scheduler 30.2.1.20251130 4c3c59730530 2 months ago 1.21GB 2026-02-08 04:58:02.562968 | orchestrator | registry.osism.tech/kolla/release/ceilometer-central 23.0.2.20251130 37cb6975d4a5 2 months ago 976MB 2026-02-08 04:58:02.562976 | orchestrator | registry.osism.tech/kolla/release/ceilometer-notification 23.0.2.20251130 bb2927b293dc 2 months ago 976MB 2026-02-08 04:58:02.562985 | orchestrator | 
registry.osism.tech/kolla/release/magnum-api 19.0.1.20251130 a85fdbb4bbba 2 months ago 1.13GB 2026-02-08 04:58:02.562993 | orchestrator | registry.osism.tech/kolla/release/magnum-conductor 19.0.1.20251130 a98ee1099aad 2 months ago 1.24GB 2026-02-08 04:58:02.563017 | orchestrator | registry.osism.tech/kolla/release/manila-share 19.1.1.20251130 df44f491f2c1 2 months ago 1.22GB 2026-02-08 04:58:02.563033 | orchestrator | registry.osism.tech/kolla/release/manila-data 19.1.1.20251130 cd8b74c8a47a 2 months ago 1.06GB 2026-02-08 04:58:02.563042 | orchestrator | registry.osism.tech/kolla/release/manila-api 19.1.1.20251130 654f9bd3c940 2 months ago 1.05GB 2026-02-08 04:58:02.563050 | orchestrator | registry.osism.tech/kolla/release/manila-scheduler 19.1.1.20251130 e0864fa03a78 2 months ago 1.05GB 2026-02-08 04:58:02.563060 | orchestrator | registry.osism.tech/kolla/release/aodh-listener 19.0.0.20251130 1e68c23a9d38 2 months ago 974MB 2026-02-08 04:58:02.563069 | orchestrator | registry.osism.tech/kolla/release/aodh-evaluator 19.0.0.20251130 1726a7592f93 2 months ago 974MB 2026-02-08 04:58:02.563095 | orchestrator | registry.osism.tech/kolla/release/aodh-notifier 19.0.0.20251130 abbd6e9f87e2 2 months ago 974MB 2026-02-08 04:58:02.563105 | orchestrator | registry.osism.tech/kolla/release/aodh-api 19.0.0.20251130 82a64f1d056d 2 months ago 973MB 2026-02-08 04:58:02.563115 | orchestrator | registry.osism.tech/kolla/release/barbican-worker 19.0.1.20251130 2cef5d51872b 2 months ago 991MB 2026-02-08 04:58:02.563125 | orchestrator | registry.osism.tech/kolla/release/barbican-keystone-listener 19.0.1.20251130 bfcd8631a126 2 months ago 991MB 2026-02-08 04:58:02.563134 | orchestrator | registry.osism.tech/kolla/release/barbican-api 19.0.1.20251130 9195ddc3e4c5 2 months ago 990MB 2026-02-08 04:58:02.563143 | orchestrator | registry.osism.tech/kolla/release/keystone 26.0.1.20251130 6c1543e94c06 2 months ago 1.09GB 2026-02-08 04:58:02.563153 | orchestrator | 
registry.osism.tech/kolla/release/keystone-fernet 26.0.1.20251130 36669c355898 2 months ago 1.04GB 2026-02-08 04:58:02.563162 | orchestrator | registry.osism.tech/kolla/release/keystone-ssh 26.0.1.20251130 e002cffc8eb8 2 months ago 1.04GB 2026-02-08 04:58:02.563172 | orchestrator | registry.osism.tech/kolla/release/octavia-health-manager 15.0.2.20251130 059dc6d4a159 2 months ago 1.03GB 2026-02-08 04:58:02.563181 | orchestrator | registry.osism.tech/kolla/release/octavia-housekeeping 15.0.2.20251130 c9059accdc4a 2 months ago 1.03GB 2026-02-08 04:58:02.563191 | orchestrator | registry.osism.tech/kolla/release/octavia-api 15.0.2.20251130 9375641bed7a 2 months ago 1.05GB 2026-02-08 04:58:02.563201 | orchestrator | registry.osism.tech/kolla/release/octavia-worker 15.0.2.20251130 708f50e37fa7 2 months ago 1.03GB 2026-02-08 04:58:02.563210 | orchestrator | registry.osism.tech/kolla/release/octavia-driver-agent 15.0.2.20251130 045f928baedc 2 months ago 1.05GB 2026-02-08 04:58:02.563219 | orchestrator | registry.osism.tech/kolla/release/neutron-server 25.2.2.20251130 fa71fe0a109e 2 months ago 1.16GB 2026-02-08 04:58:02.563229 | orchestrator | registry.osism.tech/kolla/release/glance-api 29.0.1.20251130 b1fcfbc49057 2 months ago 1.1GB 2026-02-08 04:58:02.563238 | orchestrator | registry.osism.tech/kolla/release/designate-central 19.0.1.20251130 00b6af03994a 2 months ago 983MB 2026-02-08 04:58:02.563248 | orchestrator | registry.osism.tech/kolla/release/designate-worker 19.0.1.20251130 18bc80370e46 2 months ago 989MB 2026-02-08 04:58:02.563257 | orchestrator | registry.osism.tech/kolla/release/designate-producer 19.0.1.20251130 eac4506bf51f 2 months ago 984MB 2026-02-08 04:58:02.563266 | orchestrator | registry.osism.tech/kolla/release/designate-api 19.0.1.20251130 ad5d5cd1392a 2 months ago 984MB 2026-02-08 04:58:02.563276 | orchestrator | registry.osism.tech/kolla/release/designate-backend-bind9 19.0.1.20251130 4e19a1dc9c8a 2 months ago 989MB 2026-02-08 04:58:02.563285 | 
orchestrator | registry.osism.tech/kolla/release/designate-mdns 19.0.1.20251130 4ad9e0017d6e 2 months ago 984MB 2026-02-08 04:58:02.563295 | orchestrator | registry.osism.tech/kolla/release/skyline-console 5.0.1.20251130 20430a0acd38 2 months ago 1.05GB 2026-02-08 04:58:02.563310 | orchestrator | registry.osism.tech/kolla/release/skyline-apiserver 5.0.1.20251130 20bbe1600b66 2 months ago 990MB 2026-02-08 04:58:02.563319 | orchestrator | registry.osism.tech/kolla/release/cinder-volume 25.3.1.20251130 ab7ee3c06214 2 months ago 1.72GB 2026-02-08 04:58:02.563329 | orchestrator | registry.osism.tech/kolla/release/cinder-scheduler 25.3.1.20251130 47d31cd2c25d 2 months ago 1.4GB 2026-02-08 04:58:02.563339 | orchestrator | registry.osism.tech/kolla/release/cinder-backup 25.3.1.20251130 c09074b62f18 2 months ago 1.41GB 2026-02-08 04:58:02.563355 | orchestrator | registry.osism.tech/kolla/release/cinder-api 25.3.1.20251130 ceaaac81e8af 2 months ago 1.4GB 2026-02-08 04:58:02.563365 | orchestrator | registry.osism.tech/kolla/release/ovn-nb-db-server 24.9.3.20251130 e52b6499881a 2 months ago 840MB 2026-02-08 04:58:02.563375 | orchestrator | registry.osism.tech/kolla/release/ovn-controller 24.9.3.20251130 fcd09e53d925 2 months ago 840MB 2026-02-08 04:58:02.563384 | orchestrator | registry.osism.tech/kolla/release/ovn-sb-db-server 24.9.3.20251130 2fcefdb5b030 2 months ago 840MB 2026-02-08 04:58:02.563394 | orchestrator | registry.osism.tech/kolla/release/ovn-northd 24.9.3.20251130 948e5d22de86 2 months ago 840MB 2026-02-08 04:58:02.563402 | orchestrator | registry.osism.tech/osism/ceph-daemon 18.2.7 5f92363b1f93 9 months ago 1.27GB 2026-02-08 04:58:03.005539 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2 2026-02-08 04:58:03.006123 | orchestrator | ++ semver 9.5.0 5.0.0 2026-02-08 04:58:03.063716 | orchestrator | 2026-02-08 04:58:03.063896 | orchestrator | ## Containers @ testbed-node-2 2026-02-08 04:58:03.063914 | orchestrator | 
2026-02-08 04:58:03.063923 | orchestrator | + [[ 1 -eq -1 ]] 2026-02-08 04:58:03.063931 | orchestrator | + echo 2026-02-08 04:58:03.063939 | orchestrator | + echo '## Containers @ testbed-node-2' 2026-02-08 04:58:03.063949 | orchestrator | + echo 2026-02-08 04:58:03.063961 | orchestrator | + osism container testbed-node-2 ps 2026-02-08 04:58:05.618942 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 2026-02-08 04:58:05.619034 | orchestrator | 966792c248c0 registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130 "dumb-init --single-…" 3 minutes ago Up 3 minutes (healthy) magnum_conductor 2026-02-08 04:58:05.619045 | orchestrator | 836c7b6a6e18 registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) magnum_api 2026-02-08 04:58:05.619052 | orchestrator | cc2d132eb6bf registry.osism.tech/kolla/release/grafana:12.3.0.20251130 "dumb-init --single-…" 6 minutes ago Up 6 minutes grafana 2026-02-08 04:58:05.619092 | orchestrator | 52ceb26c5cc3 registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130 "dumb-init --single-…" 9 minutes ago Up 9 minutes prometheus_elasticsearch_exporter 2026-02-08 04:58:05.619101 | orchestrator | 5e493591a0a9 registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes prometheus_cadvisor 2026-02-08 04:58:05.619108 | orchestrator | ebd58f55f791 registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes prometheus_memcached_exporter 2026-02-08 04:58:05.619115 | orchestrator | d82d538ab928 registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes prometheus_mysqld_exporter 2026-02-08 04:58:05.619123 | orchestrator | 1bbbab60444f registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130 "dumb-init --single-…" 10 
minutes ago Up 10 minutes prometheus_node_exporter 2026-02-08 04:58:05.619148 | orchestrator | a162b1cdbdf7 registry.osism.tech/kolla/release/manila-share:19.1.1.20251130 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) manila_share 2026-02-08 04:58:05.619155 | orchestrator | 662c1a5a1798 registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130 "dumb-init --single-…" 15 minutes ago Up 14 minutes (healthy) manila_scheduler 2026-02-08 04:58:05.619162 | orchestrator | 0b0621a51a51 registry.osism.tech/kolla/release/manila-data:19.1.1.20251130 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) manila_data 2026-02-08 04:58:05.619168 | orchestrator | 267add2c8242 registry.osism.tech/kolla/release/manila-api:19.1.1.20251130 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) manila_api 2026-02-08 04:58:05.619192 | orchestrator | 6aaa6b713b08 registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) aodh_notifier 2026-02-08 04:58:05.619204 | orchestrator | 0d9412c8abb9 registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) aodh_listener 2026-02-08 04:58:05.619214 | orchestrator | 282e4283a338 registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) aodh_evaluator 2026-02-08 04:58:05.619224 | orchestrator | 1b3b1ede0a36 registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) aodh_api 2026-02-08 04:58:05.619233 | orchestrator | c233af91639f registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130 "dumb-init --single-…" 20 minutes ago Up 20 minutes ceilometer_central 2026-02-08 04:58:05.619243 | orchestrator | 928845589a7c registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130 "dumb-init --single-…" 20 minutes ago Up 20 minutes 
(healthy) ceilometer_notification 2026-02-08 04:58:05.619253 | orchestrator | 05251b2f7dc5 registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130 "dumb-init --single-…" 22 minutes ago Up 21 minutes (healthy) octavia_worker 2026-02-08 04:58:05.619310 | orchestrator | 7cb7b9ceead7 registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) octavia_housekeeping 2026-02-08 04:58:05.619323 | orchestrator | ca71fb9c3780 registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) octavia_health_manager 2026-02-08 04:58:05.619334 | orchestrator | ef8838e2c63f registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130 "dumb-init --single-…" 22 minutes ago Up 22 minutes octavia_driver_agent 2026-02-08 04:58:05.619344 | orchestrator | eee4c6f58a1f registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) octavia_api 2026-02-08 04:58:05.619354 | orchestrator | 3a5456688a04 registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130 "dumb-init --single-…" 26 minutes ago Up 26 minutes (healthy) designate_worker 2026-02-08 04:58:05.619364 | orchestrator | 4b7292c16bb1 registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) designate_mdns 2026-02-08 04:58:05.619382 | orchestrator | f6c6c0fb8c85 registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) designate_producer 2026-02-08 04:58:05.619499 | orchestrator | c08c411e33ba registry.osism.tech/kolla/release/designate-central:19.0.1.20251130 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) designate_central 2026-02-08 04:58:05.619516 | orchestrator | a2d446015e23 registry.osism.tech/kolla/release/designate-api:19.0.1.20251130 "dumb-init 
--single-…" 27 minutes ago Up 27 minutes (healthy) designate_api 2026-02-08 04:58:05.619527 | orchestrator | 6ca54d37e157 registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) designate_backend_bind9 2026-02-08 04:58:05.619534 | orchestrator | b862f26ffa79 registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) barbican_worker 2026-02-08 04:58:05.619541 | orchestrator | 26f81a40eb3f registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) barbican_keystone_listener 2026-02-08 04:58:05.619547 | orchestrator | 9e7649616878 registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) barbican_api 2026-02-08 04:58:05.619554 | orchestrator | f9bd34086dea registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130 "dumb-init --single-…" 31 minutes ago Up 31 minutes (healthy) cinder_backup 2026-02-08 04:58:05.619560 | orchestrator | 4aba66e5ebd2 registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130 "dumb-init --single-…" 31 minutes ago Up 31 minutes (healthy) cinder_volume 2026-02-08 04:58:05.619566 | orchestrator | 0633ecf3ce48 registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130 "dumb-init --single-…" 31 minutes ago Up 31 minutes (healthy) cinder_scheduler 2026-02-08 04:58:05.619573 | orchestrator | 92cae280bb36 registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130 "dumb-init --single-…" 31 minutes ago Up 31 minutes (healthy) cinder_api 2026-02-08 04:58:05.619579 | orchestrator | bbd18a6d0043 registry.osism.tech/kolla/release/glance-api:29.0.1.20251130 "dumb-init --single-…" 34 minutes ago Up 34 minutes (healthy) glance_api 2026-02-08 04:58:05.619585 | orchestrator | 0ea5b4c9c3f4 
registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130 "dumb-init --single-…" 36 minutes ago Up 36 minutes (healthy) skyline_console 2026-02-08 04:58:05.619592 | orchestrator | bab925b39819 registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130 "dumb-init --single-…" 37 minutes ago Up 37 minutes (healthy) skyline_apiserver 2026-02-08 04:58:05.619598 | orchestrator | ad3a920fccd8 registry.osism.tech/kolla/release/horizon:25.1.2.20251130 "dumb-init --single-…" 38 minutes ago Up 38 minutes (healthy) horizon 2026-02-08 04:58:05.619605 | orchestrator | ceca96bb6a35 registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130 "dumb-init --single-…" 42 minutes ago Up 42 minutes (healthy) nova_novncproxy 2026-02-08 04:58:05.619611 | orchestrator | 082fcf7ff768 registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130 "dumb-init --single-…" 42 minutes ago Up 42 minutes (healthy) nova_conductor 2026-02-08 04:58:05.619617 | orchestrator | 6d98f71ec0a9 registry.osism.tech/kolla/release/nova-api:30.2.1.20251130 "dumb-init --single-…" 44 minutes ago Up 44 minutes (healthy) nova_api 2026-02-08 04:58:05.619633 | orchestrator | feb278d51e90 registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130 "dumb-init --single-…" 44 minutes ago Up 44 minutes (healthy) nova_scheduler 2026-02-08 04:58:05.619640 | orchestrator | 20702fcc21ea registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130 "dumb-init --single-…" 49 minutes ago Up 49 minutes (healthy) neutron_server 2026-02-08 04:58:05.619646 | orchestrator | 93d09202b942 registry.osism.tech/kolla/release/placement-api:12.0.1.20251130 "dumb-init --single-…" 52 minutes ago Up 52 minutes (healthy) placement_api 2026-02-08 04:58:05.619663 | orchestrator | 1964ef450e9d registry.osism.tech/kolla/release/keystone:26.0.1.20251130 "dumb-init --single-…" 54 minutes ago Up 54 minutes (healthy) keystone 2026-02-08 04:58:05.619670 | orchestrator | d7818e49dbf4 
registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130 "dumb-init --single-…" 55 minutes ago Up 55 minutes (healthy) keystone_fernet 2026-02-08 04:58:05.619677 | orchestrator | 0f2c46504478 registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130 "dumb-init --single-…" 55 minutes ago Up 55 minutes (healthy) keystone_ssh 2026-02-08 04:58:05.619683 | orchestrator | 76231a064b3a registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mgr -…" 57 minutes ago Up 57 minutes ceph-mgr-testbed-node-2 2026-02-08 04:58:05.619692 | orchestrator | 346f43dd260e registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-crash" About an hour ago Up About an hour ceph-crash-testbed-node-2 2026-02-08 04:58:05.619710 | orchestrator | 83b6b87b68f7 registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mon -…" About an hour ago Up About an hour ceph-mon-testbed-node-2 2026-02-08 04:58:05.619721 | orchestrator | 3d9698b5e943 registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour ovn_northd 2026-02-08 04:58:05.619735 | orchestrator | 19a01cfe7324 registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour ovn_sb_db 2026-02-08 04:58:05.619746 | orchestrator | cd42668769f0 registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour ovn_nb_db 2026-02-08 04:58:05.619756 | orchestrator | 7d52c9fa70df registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour ovn_controller 2026-02-08 04:58:05.619768 | orchestrator | a933c0dfb54c registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) openvswitch_vswitchd 2026-02-08 04:58:05.619797 | orchestrator | 00c7a7bc5d43 registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130 
"dumb-init --single-…" About an hour ago Up About an hour (healthy) openvswitch_db 2026-02-08 04:58:05.619808 | orchestrator | 9ee4373b5322 registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) rabbitmq 2026-02-08 04:58:05.619818 | orchestrator | 97fdbf440e98 registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130 "dumb-init -- kolla_…" About an hour ago Up About an hour (healthy) mariadb 2026-02-08 04:58:05.619827 | orchestrator | ba3c87da7cfa registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) redis_sentinel 2026-02-08 04:58:05.619845 | orchestrator | 562db4ce78fd registry.osism.tech/kolla/release/redis:7.0.15.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) redis 2026-02-08 04:58:05.619852 | orchestrator | 45756334c565 registry.osism.tech/kolla/release/memcached:1.6.24.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) memcached 2026-02-08 04:58:05.619859 | orchestrator | 806a47df13dc registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) opensearch_dashboards 2026-02-08 04:58:05.619865 | orchestrator | 9b7a8624e28a registry.osism.tech/kolla/release/opensearch:2.19.4.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) opensearch 2026-02-08 04:58:05.619871 | orchestrator | e17b5c976b36 registry.osism.tech/kolla/release/keepalived:2.2.8.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours keepalived 2026-02-08 04:58:05.619883 | orchestrator | a4cd7030ad04 registry.osism.tech/kolla/release/proxysql:3.0.3.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours (healthy) proxysql 2026-02-08 04:58:05.619889 | orchestrator | 0d294a1bd905 registry.osism.tech/kolla/release/haproxy:2.8.15.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours 
(healthy) haproxy 2026-02-08 04:58:05.619896 | orchestrator | 81d043c4d365 registry.osism.tech/kolla/release/cron:3.0.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours cron 2026-02-08 04:58:05.619903 | orchestrator | 4a46ae551627 registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours kolla_toolbox 2026-02-08 04:58:05.619909 | orchestrator | 7a5d1ba5c4e2 registry.osism.tech/kolla/release/fluentd:5.0.8.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours fluentd 2026-02-08 04:58:05.973639 | orchestrator | 2026-02-08 04:58:05.973705 | orchestrator | ## Images @ testbed-node-2 2026-02-08 04:58:05.973712 | orchestrator | 2026-02-08 04:58:05.973724 | orchestrator | + echo 2026-02-08 04:58:05.973729 | orchestrator | + echo '## Images @ testbed-node-2' 2026-02-08 04:58:05.973734 | orchestrator | + echo 2026-02-08 04:58:05.973739 | orchestrator | + osism container testbed-node-2 images 2026-02-08 04:58:08.477631 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE 2026-02-08 04:58:08.477713 | orchestrator | registry.osism.tech/kolla/release/rabbitmq 3.13.7.20251130 618df24dfbf4 2 months ago 322MB 2026-02-08 04:58:08.477720 | orchestrator | registry.osism.tech/kolla/release/memcached 1.6.24.20251130 8a9865997707 2 months ago 266MB 2026-02-08 04:58:08.477724 | orchestrator | registry.osism.tech/kolla/release/opensearch 2.19.4.20251130 dc62f23331d2 2 months ago 1.56GB 2026-02-08 04:58:08.477740 | orchestrator | registry.osism.tech/kolla/release/keepalived 2.2.8.20251130 94862d07fc5a 2 months ago 276MB 2026-02-08 04:58:08.477745 | orchestrator | registry.osism.tech/kolla/release/opensearch-dashboards 2.19.4.20251130 3b3613dd9b1a 2 months ago 1.53GB 2026-02-08 04:58:08.477749 | orchestrator | registry.osism.tech/kolla/release/kolla-toolbox 19.7.1.20251130 314d22193a72 2 months ago 669MB 2026-02-08 04:58:08.477753 | orchestrator | registry.osism.tech/kolla/release/cron 3.0.20251130 e1e0428a330f 2 months ago 265MB 
2026-02-08 04:58:08.477757 | orchestrator | registry.osism.tech/kolla/release/grafana 12.3.0.20251130 6eb3b7b1dbf2 2 months ago 1.02GB 2026-02-08 04:58:08.477807 | orchestrator | registry.osism.tech/kolla/release/proxysql 3.0.3.20251130 2c7177938c0e 2 months ago 412MB 2026-02-08 04:58:08.477812 | orchestrator | registry.osism.tech/kolla/release/haproxy 2.8.15.20251130 6d4c583df983 2 months ago 274MB 2026-02-08 04:58:08.477819 | orchestrator | registry.osism.tech/kolla/release/fluentd 5.0.8.20251130 fb3c98fc8cae 2 months ago 578MB 2026-02-08 04:58:08.477823 | orchestrator | registry.osism.tech/kolla/release/redis 7.0.15.20251130 5548a8ce5b5c 2 months ago 273MB 2026-02-08 04:58:08.477834 | orchestrator | registry.osism.tech/kolla/release/redis-sentinel 7.0.15.20251130 62d0b016058f 2 months ago 273MB 2026-02-08 04:58:08.477839 | orchestrator | registry.osism.tech/kolla/release/mariadb-server 10.11.15.20251130 77db67eebcc3 2 months ago 452MB 2026-02-08 04:58:08.477842 | orchestrator | registry.osism.tech/kolla/release/horizon 25.1.2.20251130 d7257ed845e9 2 months ago 1.15GB 2026-02-08 04:58:08.477846 | orchestrator | registry.osism.tech/kolla/release/prometheus-mysqld-exporter 0.16.0.20251130 aedc672fb472 2 months ago 301MB 2026-02-08 04:58:08.477850 | orchestrator | registry.osism.tech/kolla/release/prometheus-memcached-exporter 0.15.0.20251130 7b077076926d 2 months ago 298MB 2026-02-08 04:58:08.477854 | orchestrator | registry.osism.tech/kolla/release/prometheus-cadvisor 0.49.2.20251130 591cbce746c1 2 months ago 357MB 2026-02-08 04:58:08.477858 | orchestrator | registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter 1.8.0.20251130 bcaaf5d64345 2 months ago 292MB 2026-02-08 04:58:08.477862 | orchestrator | registry.osism.tech/kolla/release/prometheus-node-exporter 1.8.2.20251130 c1ab1d07f7ef 2 months ago 305MB 2026-02-08 04:58:08.477866 | orchestrator | registry.osism.tech/kolla/release/openvswitch-db-server 3.4.3.20251130 3e6f3fe8823c 2 months ago 279MB 
2026-02-08 04:58:08.477869 | orchestrator | registry.osism.tech/kolla/release/placement-api 12.0.1.20251130 20317ff6dfb9 2 months ago 975MB 2026-02-08 04:58:08.477873 | orchestrator | registry.osism.tech/kolla/release/openvswitch-vswitchd 3.4.3.20251130 ad8bb4636454 2 months ago 279MB 2026-02-08 04:58:08.477877 | orchestrator | registry.osism.tech/kolla/release/nova-novncproxy 30.2.1.20251130 99323056afa4 2 months ago 1.37GB 2026-02-08 04:58:08.477881 | orchestrator | registry.osism.tech/kolla/release/nova-conductor 30.2.1.20251130 92609e648215 2 months ago 1.21GB 2026-02-08 04:58:08.477885 | orchestrator | registry.osism.tech/kolla/release/nova-api 30.2.1.20251130 2d78e7fdfb9a 2 months ago 1.21GB 2026-02-08 04:58:08.477889 | orchestrator | registry.osism.tech/kolla/release/nova-scheduler 30.2.1.20251130 4c3c59730530 2 months ago 1.21GB 2026-02-08 04:58:08.477893 | orchestrator | registry.osism.tech/kolla/release/ceilometer-central 23.0.2.20251130 37cb6975d4a5 2 months ago 976MB 2026-02-08 04:58:08.477934 | orchestrator | registry.osism.tech/kolla/release/ceilometer-notification 23.0.2.20251130 bb2927b293dc 2 months ago 976MB 2026-02-08 04:58:08.477939 | orchestrator | registry.osism.tech/kolla/release/magnum-api 19.0.1.20251130 a85fdbb4bbba 2 months ago 1.13GB 2026-02-08 04:58:08.477943 | orchestrator | registry.osism.tech/kolla/release/magnum-conductor 19.0.1.20251130 a98ee1099aad 2 months ago 1.24GB 2026-02-08 04:58:08.477947 | orchestrator | registry.osism.tech/kolla/release/manila-share 19.1.1.20251130 df44f491f2c1 2 months ago 1.22GB 2026-02-08 04:58:08.477951 | orchestrator | registry.osism.tech/kolla/release/manila-data 19.1.1.20251130 cd8b74c8a47a 2 months ago 1.06GB 2026-02-08 04:58:08.477955 | orchestrator | registry.osism.tech/kolla/release/manila-api 19.1.1.20251130 654f9bd3c940 2 months ago 1.05GB 2026-02-08 04:58:08.477959 | orchestrator | registry.osism.tech/kolla/release/manila-scheduler 19.1.1.20251130 e0864fa03a78 2 months ago 1.05GB 2026-02-08 
04:58:08.477968 | orchestrator | registry.osism.tech/kolla/release/aodh-listener 19.0.0.20251130 1e68c23a9d38 2 months ago 974MB 2026-02-08 04:58:08.477972 | orchestrator | registry.osism.tech/kolla/release/aodh-evaluator 19.0.0.20251130 1726a7592f93 2 months ago 974MB 2026-02-08 04:58:08.477976 | orchestrator | registry.osism.tech/kolla/release/aodh-notifier 19.0.0.20251130 abbd6e9f87e2 2 months ago 974MB 2026-02-08 04:58:08.477985 | orchestrator | registry.osism.tech/kolla/release/aodh-api 19.0.0.20251130 82a64f1d056d 2 months ago 973MB 2026-02-08 04:58:08.477989 | orchestrator | registry.osism.tech/kolla/release/barbican-worker 19.0.1.20251130 2cef5d51872b 2 months ago 991MB 2026-02-08 04:58:08.477992 | orchestrator | registry.osism.tech/kolla/release/barbican-keystone-listener 19.0.1.20251130 bfcd8631a126 2 months ago 991MB 2026-02-08 04:58:08.478390 | orchestrator | registry.osism.tech/kolla/release/barbican-api 19.0.1.20251130 9195ddc3e4c5 2 months ago 990MB 2026-02-08 04:58:08.478401 | orchestrator | registry.osism.tech/kolla/release/keystone 26.0.1.20251130 6c1543e94c06 2 months ago 1.09GB 2026-02-08 04:58:08.478405 | orchestrator | registry.osism.tech/kolla/release/keystone-fernet 26.0.1.20251130 36669c355898 2 months ago 1.04GB 2026-02-08 04:58:08.478409 | orchestrator | registry.osism.tech/kolla/release/keystone-ssh 26.0.1.20251130 e002cffc8eb8 2 months ago 1.04GB 2026-02-08 04:58:08.478413 | orchestrator | registry.osism.tech/kolla/release/octavia-health-manager 15.0.2.20251130 059dc6d4a159 2 months ago 1.03GB 2026-02-08 04:58:08.478417 | orchestrator | registry.osism.tech/kolla/release/octavia-housekeeping 15.0.2.20251130 c9059accdc4a 2 months ago 1.03GB 2026-02-08 04:58:08.478421 | orchestrator | registry.osism.tech/kolla/release/octavia-api 15.0.2.20251130 9375641bed7a 2 months ago 1.05GB 2026-02-08 04:58:08.478425 | orchestrator | registry.osism.tech/kolla/release/octavia-worker 15.0.2.20251130 708f50e37fa7 2 months ago 1.03GB 2026-02-08 
04:58:08.478429 | orchestrator | registry.osism.tech/kolla/release/octavia-driver-agent 15.0.2.20251130 045f928baedc 2 months ago 1.05GB 2026-02-08 04:58:08.478433 | orchestrator | registry.osism.tech/kolla/release/neutron-server 25.2.2.20251130 fa71fe0a109e 2 months ago 1.16GB 2026-02-08 04:58:08.478437 | orchestrator | registry.osism.tech/kolla/release/glance-api 29.0.1.20251130 b1fcfbc49057 2 months ago 1.1GB 2026-02-08 04:58:08.478869 | orchestrator | registry.osism.tech/kolla/release/designate-central 19.0.1.20251130 00b6af03994a 2 months ago 983MB 2026-02-08 04:58:08.478889 | orchestrator | registry.osism.tech/kolla/release/designate-worker 19.0.1.20251130 18bc80370e46 2 months ago 989MB 2026-02-08 04:58:08.478896 | orchestrator | registry.osism.tech/kolla/release/designate-producer 19.0.1.20251130 eac4506bf51f 2 months ago 984MB 2026-02-08 04:58:08.478902 | orchestrator | registry.osism.tech/kolla/release/designate-api 19.0.1.20251130 ad5d5cd1392a 2 months ago 984MB 2026-02-08 04:58:08.478907 | orchestrator | registry.osism.tech/kolla/release/designate-backend-bind9 19.0.1.20251130 4e19a1dc9c8a 2 months ago 989MB 2026-02-08 04:58:08.478913 | orchestrator | registry.osism.tech/kolla/release/designate-mdns 19.0.1.20251130 4ad9e0017d6e 2 months ago 984MB 2026-02-08 04:58:08.479042 | orchestrator | registry.osism.tech/kolla/release/skyline-console 5.0.1.20251130 20430a0acd38 2 months ago 1.05GB 2026-02-08 04:58:08.479052 | orchestrator | registry.osism.tech/kolla/release/skyline-apiserver 5.0.1.20251130 20bbe1600b66 2 months ago 990MB 2026-02-08 04:58:08.479056 | orchestrator | registry.osism.tech/kolla/release/cinder-volume 25.3.1.20251130 ab7ee3c06214 2 months ago 1.72GB 2026-02-08 04:58:08.479218 | orchestrator | registry.osism.tech/kolla/release/cinder-scheduler 25.3.1.20251130 47d31cd2c25d 2 months ago 1.4GB 2026-02-08 04:58:08.479225 | orchestrator | registry.osism.tech/kolla/release/cinder-backup 25.3.1.20251130 c09074b62f18 2 months ago 1.41GB 2026-02-08 
04:58:08.479229 | orchestrator | registry.osism.tech/kolla/release/cinder-api 25.3.1.20251130 ceaaac81e8af 2 months ago 1.4GB 2026-02-08 04:58:08.479234 | orchestrator | registry.osism.tech/kolla/release/ovn-controller 24.9.3.20251130 fcd09e53d925 2 months ago 840MB 2026-02-08 04:58:08.479237 | orchestrator | registry.osism.tech/kolla/release/ovn-nb-db-server 24.9.3.20251130 e52b6499881a 2 months ago 840MB 2026-02-08 04:58:08.479241 | orchestrator | registry.osism.tech/kolla/release/ovn-sb-db-server 24.9.3.20251130 2fcefdb5b030 2 months ago 840MB 2026-02-08 04:58:08.479251 | orchestrator | registry.osism.tech/kolla/release/ovn-northd 24.9.3.20251130 948e5d22de86 2 months ago 840MB 2026-02-08 04:58:08.479255 | orchestrator | registry.osism.tech/osism/ceph-daemon 18.2.7 5f92363b1f93 9 months ago 1.27GB 2026-02-08 04:58:08.875936 | orchestrator | + sh -c /opt/configuration/scripts/check-services.sh 2026-02-08 04:58:08.882308 | orchestrator | + set -e 2026-02-08 04:58:08.882409 | orchestrator | + source /opt/manager-vars.sh 2026-02-08 04:58:08.882431 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-02-08 04:58:08.882450 | orchestrator | ++ NUMBER_OF_NODES=6 2026-02-08 04:58:08.882468 | orchestrator | ++ export CEPH_VERSION=reef 2026-02-08 04:58:08.882486 | orchestrator | ++ CEPH_VERSION=reef 2026-02-08 04:58:08.882506 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-02-08 04:58:08.882525 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-02-08 04:58:08.882543 | orchestrator | ++ export MANAGER_VERSION=9.5.0 2026-02-08 04:58:08.882562 | orchestrator | ++ MANAGER_VERSION=9.5.0 2026-02-08 04:58:08.882581 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-02-08 04:58:08.882601 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-02-08 04:58:08.882621 | orchestrator | ++ export ARA=false 2026-02-08 04:58:08.882639 | orchestrator | ++ ARA=false 2026-02-08 04:58:08.882658 | orchestrator | ++ export DEPLOY_MODE=manager 2026-02-08 04:58:08.882677 | orchestrator | 
++ DEPLOY_MODE=manager 2026-02-08 04:58:08.882695 | orchestrator | ++ export TEMPEST=false 2026-02-08 04:58:08.882714 | orchestrator | ++ TEMPEST=false 2026-02-08 04:58:08.882734 | orchestrator | ++ export IS_ZUUL=true 2026-02-08 04:58:08.882754 | orchestrator | ++ IS_ZUUL=true 2026-02-08 04:58:08.882807 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.37 2026-02-08 04:58:08.882829 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.37 2026-02-08 04:58:08.882849 | orchestrator | ++ export EXTERNAL_API=false 2026-02-08 04:58:08.882870 | orchestrator | ++ EXTERNAL_API=false 2026-02-08 04:58:08.882892 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-02-08 04:58:08.882912 | orchestrator | ++ IMAGE_USER=ubuntu 2026-02-08 04:58:08.882933 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-02-08 04:58:08.882952 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-02-08 04:58:08.882970 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-02-08 04:58:08.882989 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-02-08 04:58:08.883007 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2026-02-08 04:58:08.883026 | orchestrator | + sh -c /opt/configuration/scripts/check/100-ceph-with-ansible.sh 2026-02-08 04:58:08.892698 | orchestrator | + set -e 2026-02-08 04:58:08.892855 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-02-08 04:58:08.892880 | orchestrator | ++ export INTERACTIVE=false 2026-02-08 04:58:08.892900 | orchestrator | ++ INTERACTIVE=false 2026-02-08 04:58:08.892917 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-02-08 04:58:08.892935 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-02-08 04:58:08.892954 | orchestrator | + source /opt/configuration/scripts/manager-version.sh 2026-02-08 04:58:08.894004 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml 2026-02-08 04:58:08.902204 | orchestrator | 2026-02-08 04:58:08.902274 | orchestrator | # 
Ceph status 2026-02-08 04:58:08.902288 | orchestrator | 2026-02-08 04:58:08.902299 | orchestrator | ++ export MANAGER_VERSION=9.5.0 2026-02-08 04:58:08.902312 | orchestrator | ++ MANAGER_VERSION=9.5.0 2026-02-08 04:58:08.902323 | orchestrator | + echo 2026-02-08 04:58:08.902335 | orchestrator | + echo '# Ceph status' 2026-02-08 04:58:08.902380 | orchestrator | + echo 2026-02-08 04:58:08.902392 | orchestrator | + ceph -s 2026-02-08 04:58:09.629314 | orchestrator | cluster: 2026-02-08 04:58:09.629418 | orchestrator | id: 11111111-1111-1111-1111-111111111111 2026-02-08 04:58:09.629436 | orchestrator | health: HEALTH_OK 2026-02-08 04:58:09.629448 | orchestrator | 2026-02-08 04:58:09.629460 | orchestrator | services: 2026-02-08 04:58:09.629472 | orchestrator | mon: 3 daemons, quorum testbed-node-0,testbed-node-1,testbed-node-2 (age 69m) 2026-02-08 04:58:09.629485 | orchestrator | mgr: testbed-node-2(active, since 56m), standbys: testbed-node-0, testbed-node-1 2026-02-08 04:58:09.629497 | orchestrator | mds: 1/1 daemons up, 2 standby 2026-02-08 04:58:09.629509 | orchestrator | osd: 6 osds: 6 up (since 65m), 6 in (since 66m) 2026-02-08 04:58:09.629520 | orchestrator | rgw: 3 daemons active (3 hosts, 1 zones) 2026-02-08 04:58:09.629532 | orchestrator | 2026-02-08 04:58:09.629543 | orchestrator | data: 2026-02-08 04:58:09.629556 | orchestrator | volumes: 1/1 healthy 2026-02-08 04:58:09.629567 | orchestrator | pools: 14 pools, 401 pgs 2026-02-08 04:58:09.629580 | orchestrator | objects: 556 objects, 2.2 GiB 2026-02-08 04:58:09.629588 | orchestrator | usage: 7.0 GiB used, 113 GiB / 120 GiB avail 2026-02-08 04:58:09.629595 | orchestrator | pgs: 401 active+clean 2026-02-08 04:58:09.629602 | orchestrator | 2026-02-08 04:58:09.674256 | orchestrator | 2026-02-08 04:58:09.674372 | orchestrator | # Ceph versions 2026-02-08 04:58:09.674400 | orchestrator | 2026-02-08 04:58:09.674421 | orchestrator | + echo 2026-02-08 04:58:09.674443 | orchestrator | + echo '# Ceph versions' 
2026-02-08 04:58:09.674463 | orchestrator | + echo
2026-02-08 04:58:09.674482 | orchestrator | + ceph versions
2026-02-08 04:58:10.315327 | orchestrator | {
2026-02-08 04:58:10.315419 | orchestrator | "mon": {
2026-02-08 04:58:10.315432 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3
2026-02-08 04:58:10.315442 | orchestrator | },
2026-02-08 04:58:10.315451 | orchestrator | "mgr": {
2026-02-08 04:58:10.315459 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3
2026-02-08 04:58:10.315468 | orchestrator | },
2026-02-08 04:58:10.315476 | orchestrator | "osd": {
2026-02-08 04:58:10.315485 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 6
2026-02-08 04:58:10.315493 | orchestrator | },
2026-02-08 04:58:10.315502 | orchestrator | "mds": {
2026-02-08 04:58:10.315510 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3
2026-02-08 04:58:10.315519 | orchestrator | },
2026-02-08 04:58:10.315526 | orchestrator | "rgw": {
2026-02-08 04:58:10.315535 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3
2026-02-08 04:58:10.315543 | orchestrator | },
2026-02-08 04:58:10.315551 | orchestrator | "overall": {
2026-02-08 04:58:10.315560 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 18
2026-02-08 04:58:10.315569 | orchestrator | }
2026-02-08 04:58:10.315577 | orchestrator | }
2026-02-08 04:58:10.367405 | orchestrator |
2026-02-08 04:58:10.367472 | orchestrator | # Ceph OSD tree
2026-02-08 04:58:10.367479 | orchestrator |
2026-02-08 04:58:10.367484 | orchestrator | + echo
2026-02-08 04:58:10.367489 | orchestrator | + echo '# Ceph OSD tree'
2026-02-08 04:58:10.367495 | orchestrator | + echo
2026-02-08 04:58:10.367499 | orchestrator | + ceph osd df tree
2026-02-08 04:58:10.927263 | orchestrator | ID CLASS WEIGHT REWEIGHT SIZE RAW USE DATA OMAP META AVAIL %USE VAR PGS STATUS TYPE NAME
2026-02-08 04:58:10.927358 | orchestrator | -1 0.11691 - 120 GiB 7.0 GiB 6.7 GiB 6 KiB 373 MiB 113 GiB 5.87 1.00 - root default
2026-02-08 04:58:10.927368 | orchestrator | -3 0.03897 - 40 GiB 2.3 GiB 2.2 GiB 2 KiB 123 MiB 38 GiB 5.87 1.00 - host testbed-node-3
2026-02-08 04:58:10.927376 | orchestrator | 0 hdd 0.01949 1.00000 20 GiB 1016 MiB 955 MiB 1 KiB 62 MiB 19 GiB 4.97 0.85 189 up osd.0
2026-02-08 04:58:10.927383 | orchestrator | 3 hdd 0.01949 1.00000 20 GiB 1.4 GiB 1.3 GiB 1 KiB 62 MiB 19 GiB 6.77 1.15 201 up osd.3
2026-02-08 04:58:10.927390 | orchestrator | -5 0.03897 - 40 GiB 2.3 GiB 2.2 GiB 2 KiB 127 MiB 38 GiB 5.88 1.00 - host testbed-node-4
2026-02-08 04:58:10.927397 | orchestrator | 1 hdd 0.01949 1.00000 20 GiB 1.1 GiB 1.0 GiB 1 KiB 62 MiB 19 GiB 5.36 0.91 195 up osd.1
2026-02-08 04:58:10.927424 | orchestrator | 5 hdd 0.01949 1.00000 20 GiB 1.3 GiB 1.2 GiB 1 KiB 66 MiB 19 GiB 6.40 1.09 197 up osd.5
2026-02-08 04:58:10.927432 | orchestrator | -7 0.03897 - 40 GiB 2.3 GiB 2.2 GiB 2 KiB 123 MiB 38 GiB 5.87 1.00 - host testbed-node-5
2026-02-08 04:58:10.927440 | orchestrator | 2 hdd 0.01949 1.00000 20 GiB 1.4 GiB 1.4 GiB 1 KiB 62 MiB 19 GiB 7.08 1.21 198 up osd.2
2026-02-08 04:58:10.927447 | orchestrator | 4 hdd 0.01949 1.00000 20 GiB 952 MiB 891 MiB 1 KiB 62 MiB 19 GiB 4.65 0.79 190 up osd.4
2026-02-08 04:58:10.927453 | orchestrator | TOTAL 120 GiB 7.0 GiB 6.7 GiB 9.3 KiB 373 MiB 113 GiB 5.87
2026-02-08 04:58:10.927460 | orchestrator | MIN/MAX VAR: 0.79/1.21 STDDEV: 0.92
2026-02-08 04:58:10.973542 | orchestrator |
2026-02-08 04:58:10.973611 | orchestrator | # Ceph monitor status
2026-02-08 04:58:10.973617 | orchestrator |
2026-02-08 04:58:10.973622 | orchestrator | + echo
2026-02-08 04:58:10.973626 | orchestrator | + echo '# Ceph monitor status'
2026-02-08 04:58:10.973631 | orchestrator | + echo
2026-02-08 04:58:10.973635 | orchestrator | + ceph mon stat
2026-02-08 04:58:11.577039 | orchestrator | e1: 3 mons at {testbed-node-0=[v2:192.168.16.10:3300/0,v1:192.168.16.10:6789/0],testbed-node-1=[v2:192.168.16.11:3300/0,v1:192.168.16.11:6789/0],testbed-node-2=[v2:192.168.16.12:3300/0,v1:192.168.16.12:6789/0]} removed_ranks: {} disallowed_leaders: {}, election epoch 8, leader 0 testbed-node-0, quorum 0,1,2 testbed-node-0,testbed-node-1,testbed-node-2
2026-02-08 04:58:11.632624 | orchestrator |
2026-02-08 04:58:11.632706 | orchestrator | # Ceph quorum status
2026-02-08 04:58:11.632718 | orchestrator |
2026-02-08 04:58:11.632727 | orchestrator | + echo
2026-02-08 04:58:11.632736 | orchestrator | + echo '# Ceph quorum status'
2026-02-08 04:58:11.632744 | orchestrator | + echo
2026-02-08 04:58:11.633000 | orchestrator | + ceph quorum_status
2026-02-08 04:58:11.633292 | orchestrator | + jq
2026-02-08 04:58:12.284022 | orchestrator | {
2026-02-08 04:58:12.284121 | orchestrator | "election_epoch": 8,
2026-02-08 04:58:12.284138 | orchestrator | "quorum": [
2026-02-08 04:58:12.284151 | orchestrator | 0,
2026-02-08 04:58:12.284163 | orchestrator | 1,
2026-02-08 04:58:12.284174 | orchestrator | 2
2026-02-08 04:58:12.284185 | orchestrator | ],
2026-02-08 04:58:12.284196 | orchestrator | "quorum_names": [
2026-02-08 04:58:12.284207 | orchestrator | "testbed-node-0",
2026-02-08 04:58:12.284218 | orchestrator | "testbed-node-1",
2026-02-08 04:58:12.284229 | orchestrator | "testbed-node-2"
2026-02-08 04:58:12.284240 | orchestrator | ],
2026-02-08 04:58:12.284252 | orchestrator | "quorum_leader_name": "testbed-node-0",
2026-02-08 04:58:12.284264 | orchestrator | "quorum_age": 4166,
2026-02-08 04:58:12.284276 | orchestrator | "features": {
2026-02-08 04:58:12.284287 | orchestrator | "quorum_con": "4540138322906710015",
2026-02-08 04:58:12.284298 | orchestrator | "quorum_mon": [
2026-02-08 04:58:12.284309 | orchestrator | "kraken",
2026-02-08 04:58:12.284320 | orchestrator | "luminous",
2026-02-08 04:58:12.284331 | orchestrator | "mimic",
2026-02-08 04:58:12.284342 | orchestrator | "osdmap-prune",
2026-02-08 04:58:12.284353 | orchestrator | "nautilus",
2026-02-08 04:58:12.284364 | orchestrator | "octopus",
2026-02-08 04:58:12.284375 | orchestrator | "pacific",
2026-02-08 04:58:12.284386 | orchestrator | "elector-pinging",
2026-02-08 04:58:12.284397 | orchestrator | "quincy",
2026-02-08 04:58:12.284408 | orchestrator | "reef"
2026-02-08 04:58:12.284419 | orchestrator | ]
2026-02-08 04:58:12.284430 | orchestrator | },
2026-02-08 04:58:12.284442 | orchestrator | "monmap": {
2026-02-08 04:58:12.284534 | orchestrator | "epoch": 1,
2026-02-08 04:58:12.284549 | orchestrator | "fsid": "11111111-1111-1111-1111-111111111111",
2026-02-08 04:58:12.284563 | orchestrator | "modified": "2026-02-08T03:48:28.239426Z",
2026-02-08 04:58:12.284576 | orchestrator | "created": "2026-02-08T03:48:28.239426Z",
2026-02-08 04:58:12.284590 | orchestrator | "min_mon_release": 18,
2026-02-08 04:58:12.284602 | orchestrator | "min_mon_release_name": "reef",
2026-02-08 04:58:12.284615 | orchestrator | "election_strategy": 1,
2026-02-08 04:58:12.284628 | orchestrator | "disallowed_leaders: ": "",
2026-02-08 04:58:12.284641 | orchestrator | "stretch_mode": false,
2026-02-08 04:58:12.284654 | orchestrator | "tiebreaker_mon": "",
2026-02-08 04:58:12.284667 | orchestrator | "removed_ranks: ": "",
2026-02-08 04:58:12.284678 | orchestrator | "features": {
2026-02-08 04:58:12.284689 | orchestrator | "persistent": [
2026-02-08 04:58:12.284700 | orchestrator | "kraken",
2026-02-08 04:58:12.284736 | orchestrator | "luminous",
2026-02-08 04:58:12.284747 | orchestrator | "mimic",
2026-02-08 04:58:12.284758 | orchestrator | "osdmap-prune",
2026-02-08 04:58:12.284831 | orchestrator | "nautilus",
2026-02-08 04:58:12.284843 | orchestrator | "octopus",
2026-02-08 04:58:12.284855 | orchestrator | "pacific",
2026-02-08 04:58:12.284866 | orchestrator | "elector-pinging",
2026-02-08 04:58:12.284876 | orchestrator | "quincy",
2026-02-08 04:58:12.284888 | orchestrator | "reef"
2026-02-08 04:58:12.284899 | orchestrator | ],
2026-02-08 04:58:12.284910 | orchestrator | "optional": []
2026-02-08 04:58:12.284921 | orchestrator | },
2026-02-08 04:58:12.284932 | orchestrator | "mons": [
2026-02-08 04:58:12.284960 | orchestrator | {
2026-02-08 04:58:12.284972 | orchestrator | "rank": 0,
2026-02-08 04:58:12.284983 | orchestrator | "name": "testbed-node-0",
2026-02-08 04:58:12.284994 | orchestrator | "public_addrs": {
2026-02-08 04:58:12.285005 | orchestrator | "addrvec": [
2026-02-08 04:58:12.285016 | orchestrator | {
2026-02-08 04:58:12.285028 | orchestrator | "type": "v2",
2026-02-08 04:58:12.285039 | orchestrator | "addr": "192.168.16.10:3300",
2026-02-08 04:58:12.285051 | orchestrator | "nonce": 0
2026-02-08 04:58:12.285062 | orchestrator | },
2026-02-08 04:58:12.285073 | orchestrator | {
2026-02-08 04:58:12.285084 | orchestrator | "type": "v1",
2026-02-08 04:58:12.285095 | orchestrator | "addr": "192.168.16.10:6789",
2026-02-08 04:58:12.285106 | orchestrator | "nonce": 0
2026-02-08 04:58:12.285118 | orchestrator | }
2026-02-08 04:58:12.285129 | orchestrator | ]
2026-02-08 04:58:12.285139 | orchestrator | },
2026-02-08 04:58:12.285151 | orchestrator | "addr": "192.168.16.10:6789/0",
2026-02-08 04:58:12.285162 | orchestrator | "public_addr": "192.168.16.10:6789/0",
2026-02-08 04:58:12.285173 | orchestrator | "priority": 0,
2026-02-08 04:58:12.285184 | orchestrator | "weight": 0,
2026-02-08 04:58:12.285195 | orchestrator | "crush_location": "{}"
2026-02-08 04:58:12.285206 | orchestrator | },
2026-02-08 04:58:12.285217 | orchestrator | {
2026-02-08 04:58:12.285228 | orchestrator | "rank": 1,
2026-02-08 04:58:12.285239 | orchestrator | "name": "testbed-node-1",
2026-02-08 04:58:12.285250 | orchestrator | "public_addrs": {
2026-02-08 04:58:12.285261 | orchestrator | "addrvec": [
2026-02-08 04:58:12.285272 | orchestrator | {
2026-02-08 04:58:12.285283 | orchestrator | "type": "v2",
2026-02-08 04:58:12.285294 | orchestrator | "addr": "192.168.16.11:3300",
2026-02-08 04:58:12.285305 | orchestrator | "nonce": 0
2026-02-08 04:58:12.285316 | orchestrator | },
2026-02-08 04:58:12.285327 | orchestrator | {
2026-02-08 04:58:12.285338 | orchestrator | "type": "v1",
2026-02-08 04:58:12.285349 | orchestrator | "addr": "192.168.16.11:6789",
2026-02-08 04:58:12.285360 | orchestrator | "nonce": 0
2026-02-08 04:58:12.285372 | orchestrator | }
2026-02-08 04:58:12.285383 | orchestrator | ]
2026-02-08 04:58:12.285394 | orchestrator | },
2026-02-08 04:58:12.285405 | orchestrator | "addr": "192.168.16.11:6789/0",
2026-02-08 04:58:12.285416 | orchestrator | "public_addr": "192.168.16.11:6789/0",
2026-02-08 04:58:12.285427 | orchestrator | "priority": 0,
2026-02-08 04:58:12.285438 | orchestrator | "weight": 0,
2026-02-08 04:58:12.285449 | orchestrator | "crush_location": "{}"
2026-02-08 04:58:12.285460 | orchestrator | },
2026-02-08 04:58:12.285471 | orchestrator | {
2026-02-08 04:58:12.285482 | orchestrator | "rank": 2,
2026-02-08 04:58:12.285493 | orchestrator | "name": "testbed-node-2",
2026-02-08 04:58:12.285504 | orchestrator | "public_addrs": {
2026-02-08 04:58:12.285515 | orchestrator | "addrvec": [
2026-02-08 04:58:12.285526 | orchestrator | {
2026-02-08 04:58:12.285537 | orchestrator | "type": "v2",
2026-02-08 04:58:12.285548 | orchestrator | "addr": "192.168.16.12:3300",
2026-02-08 04:58:12.285559 | orchestrator | "nonce": 0
2026-02-08 04:58:12.285570 | orchestrator | },
2026-02-08 04:58:12.285581 | orchestrator | {
2026-02-08 04:58:12.285592 | orchestrator | "type": "v1",
2026-02-08 04:58:12.285603 | orchestrator | "addr": "192.168.16.12:6789",
2026-02-08 04:58:12.285615 | orchestrator | "nonce": 0
2026-02-08 04:58:12.285626 | orchestrator | }
2026-02-08 04:58:12.285637 | orchestrator | ]
2026-02-08 04:58:12.285647 | orchestrator | },
2026-02-08 04:58:12.285658 | orchestrator | "addr": "192.168.16.12:6789/0",
2026-02-08 04:58:12.285670 | orchestrator | "public_addr": "192.168.16.12:6789/0",
2026-02-08 04:58:12.285681 | orchestrator | "priority": 0,
2026-02-08 04:58:12.285700 | orchestrator | "weight": 0,
2026-02-08 04:58:12.285711 | orchestrator | "crush_location": "{}"
2026-02-08 04:58:12.285723 | orchestrator | }
2026-02-08 04:58:12.285734 | orchestrator | ]
2026-02-08 04:58:12.285745 | orchestrator | }
2026-02-08 04:58:12.285756 | orchestrator | }
2026-02-08 04:58:12.285809 | orchestrator |
2026-02-08 04:58:12.285823 | orchestrator | # Ceph free space status
2026-02-08 04:58:12.285834 | orchestrator |
2026-02-08 04:58:12.285845 | orchestrator | + echo
2026-02-08 04:58:12.285856 | orchestrator | + echo '# Ceph free space status'
2026-02-08 04:58:12.285868 | orchestrator | + echo
2026-02-08 04:58:12.285879 | orchestrator | + ceph df
2026-02-08 04:58:12.863920 | orchestrator | --- RAW STORAGE ---
2026-02-08 04:58:12.863997 | orchestrator | CLASS SIZE AVAIL USED RAW USED %RAW USED
2026-02-08 04:58:12.864015 | orchestrator | hdd 120 GiB 113 GiB 7.0 GiB 7.0 GiB 5.87
2026-02-08 04:58:12.864032 | orchestrator | TOTAL 120 GiB 113 GiB 7.0 GiB 7.0 GiB 5.87
2026-02-08 04:58:12.864038 | orchestrator |
2026-02-08 04:58:12.864044 | orchestrator | --- POOLS ---
2026-02-08 04:58:12.864050 | orchestrator | POOL ID PGS STORED OBJECTS USED %USED MAX AVAIL
2026-02-08 04:58:12.864056 | orchestrator | .mgr 1 1 577 KiB 2 1.1 MiB 0 53 GiB
2026-02-08 04:58:12.864062 | orchestrator | cephfs_data 2 32 0 B 0 0 B 0 35 GiB
2026-02-08 04:58:12.864067 | orchestrator | cephfs_metadata 3 16 4.4 KiB 22 96 KiB 0 35 GiB
2026-02-08 04:58:12.864073 | orchestrator | default.rgw.buckets.data 4 32 0 B 0 0 B 0 35 GiB
2026-02-08 04:58:12.864078 | orchestrator | default.rgw.buckets.index 5 32 0 B 0 0 B 0 35 GiB
2026-02-08 04:58:12.864084 | orchestrator | default.rgw.control 6 32 0 B 8 0 B 0 35 GiB
2026-02-08 04:58:12.864089 | orchestrator | default.rgw.log 7 32 3.6 KiB 209 408 KiB 0 35 GiB
2026-02-08 04:58:12.864094 | orchestrator | default.rgw.meta 8 32 0 B 0 0 B 0 35 GiB
2026-02-08 04:58:12.864100 | orchestrator | .rgw.root 9 32 3.9 KiB 8 64 KiB 0 53 GiB
2026-02-08 04:58:12.864105 | orchestrator | backups 10 32 19 B 2 12 KiB 0 35 GiB
2026-02-08 04:58:12.864110 | orchestrator | volumes 11 32 19 B 2 12 KiB 0 35 GiB
2026-02-08 04:58:12.864115 | orchestrator | images 12 32 2.2 GiB 299 6.7 GiB 5.94 35 GiB
2026-02-08 04:58:12.864120 | orchestrator | metrics 13 32 19 B 2 12 KiB 0 35 GiB
2026-02-08 04:58:12.864126 | orchestrator | vms 14 32 19 B 2 12 KiB 0 35 GiB
2026-02-08 04:58:12.913025 | orchestrator | ++ semver 9.5.0 5.0.0
2026-02-08 04:58:12.963110 | orchestrator | + [[ 1 -eq -1 ]]
2026-02-08 04:58:12.963259 | orchestrator | + [[ ! -e /etc/redhat-release ]]
2026-02-08 04:58:12.963286 | orchestrator | + osism apply facts
2026-02-08 04:58:15.264361 | orchestrator | 2026-02-08 04:58:15 | INFO  | Task 8ccbc039-930b-453f-be59-dd5b895844cf (facts) was prepared for execution.
2026-02-08 04:58:15.264460 | orchestrator | 2026-02-08 04:58:15 | INFO  | It takes a moment until task 8ccbc039-930b-453f-be59-dd5b895844cf (facts) has been started and output is visible here.
2026-02-08 04:58:30.787165 | orchestrator |
2026-02-08 04:58:30.787269 | orchestrator | PLAY [Apply role facts] ********************************************************
2026-02-08 04:58:30.787285 | orchestrator |
2026-02-08 04:58:30.787298 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] *********************
2026-02-08 04:58:30.787310 | orchestrator | Sunday 08 February 2026 04:58:20 +0000 (0:00:00.311) 0:00:00.311 *******
2026-02-08 04:58:30.787321 | orchestrator | ok: [testbed-node-0]
2026-02-08 04:58:30.787334 | orchestrator | ok: [testbed-manager]
2026-02-08 04:58:30.787345 | orchestrator | ok: [testbed-node-1]
2026-02-08 04:58:30.787356 | orchestrator | ok: [testbed-node-2]
2026-02-08 04:58:30.787367 | orchestrator | ok: [testbed-node-3]
2026-02-08 04:58:30.787378 | orchestrator | ok: [testbed-node-4]
2026-02-08 04:58:30.787389 | orchestrator | ok: [testbed-node-5]
2026-02-08 04:58:30.787400 | orchestrator |
2026-02-08 04:58:30.787412 | orchestrator | TASK [osism.commons.facts : Copy fact files] ***********************************
2026-02-08 04:58:30.787455 | orchestrator | Sunday 08 February 2026 04:58:21 +0000 (0:00:01.290) 0:00:01.602 *******
2026-02-08 04:58:30.787467 | orchestrator | skipping: [testbed-manager]
2026-02-08 04:58:30.787479 | orchestrator | skipping: [testbed-node-0]
2026-02-08 04:58:30.787490 | orchestrator | skipping: [testbed-node-1]
2026-02-08 04:58:30.787501 | orchestrator | skipping: [testbed-node-2]
2026-02-08 04:58:30.787513 | orchestrator | skipping: [testbed-node-3]
2026-02-08 04:58:30.787524 | orchestrator | skipping: [testbed-node-4]
2026-02-08 04:58:30.787535 | orchestrator | skipping: [testbed-node-5]
2026-02-08 04:58:30.787546 | orchestrator |
2026-02-08 04:58:30.787557 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2026-02-08 04:58:30.787569 | orchestrator |
2026-02-08 04:58:30.787580 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-02-08 04:58:30.787591 | orchestrator | Sunday 08 February 2026 04:58:22 +0000 (0:00:01.536) 0:00:03.138 *******
2026-02-08 04:58:30.787603 | orchestrator | ok: [testbed-node-1]
2026-02-08 04:58:30.787615 | orchestrator | ok: [testbed-node-2]
2026-02-08 04:58:30.787626 | orchestrator | ok: [testbed-node-0]
2026-02-08 04:58:30.787637 | orchestrator | ok: [testbed-manager]
2026-02-08 04:58:30.787649 | orchestrator | ok: [testbed-node-3]
2026-02-08 04:58:30.787659 | orchestrator | ok: [testbed-node-4]
2026-02-08 04:58:30.787671 | orchestrator | ok: [testbed-node-5]
2026-02-08 04:58:30.787682 | orchestrator |
2026-02-08 04:58:30.787693 | orchestrator | PLAY [Gather facts for all hosts if using --limit] *****************************
2026-02-08 04:58:30.787705 | orchestrator |
2026-02-08 04:58:30.787717 | orchestrator | TASK [Gather facts for all hosts] **********************************************
2026-02-08 04:58:30.787730 | orchestrator | Sunday 08 February 2026 04:58:29 +0000 (0:00:06.705) 0:00:09.843 *******
2026-02-08 04:58:30.787741 | orchestrator | skipping: [testbed-manager]
2026-02-08 04:58:30.787771 | orchestrator | skipping: [testbed-node-0]
2026-02-08 04:58:30.787783 | orchestrator | skipping: [testbed-node-1]
2026-02-08 04:58:30.787795 | orchestrator | skipping: [testbed-node-2]
2026-02-08 04:58:30.787806 | orchestrator | skipping: [testbed-node-3]
2026-02-08 04:58:30.787817 | orchestrator | skipping: [testbed-node-4]
2026-02-08 04:58:30.787829 | orchestrator | skipping: [testbed-node-5]
2026-02-08 04:58:30.787840 | orchestrator |
2026-02-08 04:58:30.787852 | orchestrator | PLAY RECAP *********************************************************************
2026-02-08 04:58:30.787864 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-08 04:58:30.787878 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-08 04:58:30.787888 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-08 04:58:30.787914 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-08 04:58:30.787926 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-08 04:58:30.787937 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-08 04:58:30.787949 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-08 04:58:30.787961 | orchestrator |
2026-02-08 04:58:30.787973 | orchestrator |
2026-02-08 04:58:30.787984 | orchestrator | TASKS RECAP ********************************************************************
2026-02-08 04:58:30.788039 | orchestrator | Sunday 08 February 2026 04:58:30 +0000 (0:00:00.621) 0:00:10.465 *******
2026-02-08 04:58:30.788054 | orchestrator | ===============================================================================
2026-02-08 04:58:30.788066 | orchestrator | Gathers facts about hosts ----------------------------------------------- 6.71s
2026-02-08 04:58:30.788088 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.54s
2026-02-08 04:58:30.788099 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.29s
2026-02-08 04:58:30.788111 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.62s
2026-02-08 04:58:31.189369 | orchestrator | + osism validate ceph-mons
2026-02-08 04:59:05.145302 | orchestrator |
2026-02-08 04:59:05.145404 | orchestrator | PLAY [Ceph validate mons] ******************************************************
2026-02-08 04:59:05.145421 | orchestrator |
2026-02-08 04:59:05.145432 | orchestrator | TASK [Get timestamp for report file] *******************************************
2026-02-08 04:59:05.145442 | orchestrator | Sunday 08 February 2026 04:58:48 +0000 (0:00:00.503) 0:00:00.503 *******
2026-02-08 04:59:05.145450 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-02-08 04:59:05.145456 | orchestrator |
2026-02-08 04:59:05.145462 | orchestrator | TASK [Create report output directory] ******************************************
2026-02-08 04:59:05.145468 | orchestrator | Sunday 08 February 2026 04:58:49 +0000 (0:00:00.889) 0:00:01.392 *******
2026-02-08 04:59:05.145477 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-02-08 04:59:05.145487 | orchestrator |
2026-02-08 04:59:05.145497 | orchestrator | TASK [Define report vars] ******************************************************
2026-02-08 04:59:05.145506 | orchestrator | Sunday 08 February 2026 04:58:50 +0000 (0:00:01.085) 0:00:02.478 *******
2026-02-08 04:59:05.145516 | orchestrator | ok: [testbed-node-0]
2026-02-08 04:59:05.145527 | orchestrator |
2026-02-08 04:59:05.145537 | orchestrator | TASK [Prepare test data for container existance test] **************************
2026-02-08 04:59:05.145547 | orchestrator | Sunday 08 February 2026 04:58:50 +0000 (0:00:00.151) 0:00:02.629 *******
2026-02-08 04:59:05.145557 | orchestrator | ok: [testbed-node-0]
2026-02-08 04:59:05.145567 | orchestrator | ok: [testbed-node-1]
2026-02-08 04:59:05.145573 | orchestrator | ok: [testbed-node-2]
2026-02-08 04:59:05.145579 | orchestrator |
2026-02-08 04:59:05.145585 | orchestrator | TASK [Get container info] ******************************************************
2026-02-08 04:59:05.145591 | orchestrator | Sunday 08 February 2026 04:58:51 +0000 (0:00:00.309) 0:00:02.939 *******
2026-02-08 04:59:05.145597 | orchestrator | ok: [testbed-node-2]
2026-02-08 04:59:05.145603 | orchestrator | ok: [testbed-node-1]
2026-02-08 04:59:05.145609 | orchestrator | ok: [testbed-node-0]
2026-02-08 04:59:05.145617 | orchestrator |
2026-02-08 04:59:05.145626 | orchestrator | TASK [Set test result to failed if container is missing] ***********************
2026-02-08 04:59:05.145636 | orchestrator | Sunday 08 February 2026 04:58:52 +0000 (0:00:01.070) 0:00:04.009 *******
2026-02-08 04:59:05.145645 | orchestrator | skipping: [testbed-node-0]
2026-02-08 04:59:05.145655 | orchestrator | skipping: [testbed-node-1]
2026-02-08 04:59:05.145665 | orchestrator | skipping: [testbed-node-2]
2026-02-08 04:59:05.145674 | orchestrator |
2026-02-08 04:59:05.145684 | orchestrator | TASK [Set test result to passed if container is existing] **********************
2026-02-08 04:59:05.145692 | orchestrator | Sunday 08 February 2026 04:58:52 +0000 (0:00:00.321) 0:00:04.330 *******
2026-02-08 04:59:05.145698 | orchestrator | ok: [testbed-node-0]
2026-02-08 04:59:05.145704 | orchestrator | ok: [testbed-node-1]
2026-02-08 04:59:05.145710 | orchestrator | ok: [testbed-node-2]
2026-02-08 04:59:05.145716 | orchestrator |
2026-02-08 04:59:05.145722 | orchestrator | TASK [Prepare test data] *******************************************************
2026-02-08 04:59:05.145773 | orchestrator | Sunday 08 February 2026 04:58:53 +0000 (0:00:00.540) 0:00:04.871 *******
2026-02-08 04:59:05.145780 | orchestrator | ok: [testbed-node-0]
2026-02-08 04:59:05.145786 | orchestrator | ok: [testbed-node-1]
2026-02-08 04:59:05.145792 | orchestrator | ok: [testbed-node-2]
2026-02-08 04:59:05.145798 | orchestrator |
2026-02-08 04:59:05.145804 | orchestrator | TASK [Set test result to failed if ceph-mon is not running] ********************
2026-02-08 04:59:05.145810 | orchestrator | Sunday 08 February 2026 04:58:53 +0000 (0:00:00.361) 0:00:05.233 *******
2026-02-08 04:59:05.145817 | orchestrator | skipping: [testbed-node-0]
2026-02-08 04:59:05.145854 | orchestrator | skipping: [testbed-node-1]
2026-02-08 04:59:05.145866 | orchestrator | skipping: [testbed-node-2]
2026-02-08 04:59:05.145876 | orchestrator |
2026-02-08 04:59:05.145886 | orchestrator | TASK [Set test result to passed if ceph-mon is running] ************************
2026-02-08 04:59:05.145896 | orchestrator | Sunday 08 February 2026 04:58:53 +0000 (0:00:00.340) 0:00:05.574 *******
2026-02-08 04:59:05.145908 | orchestrator | ok: [testbed-node-0]
2026-02-08 04:59:05.145917 | orchestrator | ok: [testbed-node-1]
2026-02-08 04:59:05.145928 | orchestrator | ok: [testbed-node-2]
2026-02-08 04:59:05.145936 | orchestrator |
2026-02-08 04:59:05.145943 | orchestrator | TASK [Aggregate test results step one] *****************************************
2026-02-08 04:59:05.145950 | orchestrator | Sunday 08 February 2026 04:58:54 +0000 (0:00:00.527) 0:00:06.102 *******
2026-02-08 04:59:05.145957 | orchestrator | skipping: [testbed-node-0]
2026-02-08 04:59:05.145964 | orchestrator |
2026-02-08 04:59:05.145971 | orchestrator | TASK [Aggregate test results step two] *****************************************
2026-02-08 04:59:05.145979 | orchestrator | Sunday 08 February 2026 04:58:54 +0000 (0:00:00.257) 0:00:06.359 *******
2026-02-08 04:59:05.145986 | orchestrator | skipping: [testbed-node-0]
2026-02-08 04:59:05.145993 | orchestrator |
2026-02-08 04:59:05.146000 | orchestrator | TASK [Aggregate test results step three] ***************************************
2026-02-08 04:59:05.146007 | orchestrator | Sunday 08 February 2026 04:58:54 +0000 (0:00:00.265) 0:00:06.624 *******
2026-02-08 04:59:05.146059 | orchestrator | skipping: [testbed-node-0]
2026-02-08 04:59:05.146070 | orchestrator |
2026-02-08 04:59:05.146080 | orchestrator | TASK [Flush handlers] **********************************************************
2026-02-08 04:59:05.146091 | orchestrator | Sunday 08 February 2026 04:58:55 +0000 (0:00:00.075) 0:00:06.884 *******
2026-02-08 04:59:05.146101 | orchestrator |
2026-02-08 04:59:05.146110 | orchestrator | TASK [Flush handlers] **********************************************************
2026-02-08 04:59:05.146120 | orchestrator | Sunday 08 February 2026 04:58:55 +0000 (0:00:00.075) 0:00:06.959 *******
2026-02-08 04:59:05.146131 | orchestrator |
2026-02-08 04:59:05.146142 | orchestrator | TASK [Flush handlers] **********************************************************
2026-02-08 04:59:05.146153 | orchestrator | Sunday 08 February 2026 04:58:55 +0000 (0:00:00.074) 0:00:07.034 *******
2026-02-08 04:59:05.146162 | orchestrator |
2026-02-08 04:59:05.146173 | orchestrator | TASK [Print report file information] *******************************************
2026-02-08 04:59:05.146183 | orchestrator | Sunday 08 February 2026 04:58:55 +0000 (0:00:00.078) 0:00:07.113 *******
2026-02-08 04:59:05.146193 | orchestrator | skipping: [testbed-node-0]
2026-02-08 04:59:05.146203 | orchestrator |
2026-02-08 04:59:05.146212 | orchestrator | TASK [Fail due to missing containers] ******************************************
2026-02-08 04:59:05.146248 | orchestrator | Sunday 08 February 2026 04:58:55 +0000 (0:00:00.340) 0:00:07.454 *******
2026-02-08 04:59:05.146259 | orchestrator | skipping: [testbed-node-0]
2026-02-08 04:59:05.146269 | orchestrator |
2026-02-08 04:59:05.146298 | orchestrator | TASK [Prepare quorum test vars] ************************************************
2026-02-08 04:59:05.146361 | orchestrator | Sunday 08 February 2026 04:58:55 +0000 (0:00:00.253) 0:00:07.707 *******
2026-02-08 04:59:05.146372 | orchestrator | ok: [testbed-node-0]
2026-02-08 04:59:05.146382 | orchestrator |
2026-02-08 04:59:05.146391 | orchestrator | TASK [Get monmap info from one mon container] **********************************
2026-02-08 04:59:05.146401 | orchestrator | Sunday 08 February 2026 04:58:55 +0000 (0:00:00.131) 0:00:07.839 *******
2026-02-08 04:59:05.146412 | orchestrator | changed: [testbed-node-0]
2026-02-08 04:59:05.146426 | orchestrator |
2026-02-08 04:59:05.146436 | orchestrator | TASK [Set quorum test data] ****************************************************
2026-02-08 04:59:05.146446 | orchestrator | Sunday 08 February 2026 04:58:57 +0000 (0:00:01.554) 0:00:09.393 *******
2026-02-08 04:59:05.146455 | orchestrator | ok: [testbed-node-0]
2026-02-08 04:59:05.146465 | orchestrator |
2026-02-08 04:59:05.146475 | orchestrator | TASK [Fail quorum test if not all monitors are in quorum] **********************
2026-02-08 04:59:05.146485 | orchestrator | Sunday 08 February 2026 04:58:58 +0000 (0:00:00.605) 0:00:09.998 *******
2026-02-08 04:59:05.146505 | orchestrator | skipping: [testbed-node-0]
2026-02-08 04:59:05.146514 | orchestrator |
2026-02-08 04:59:05.146523 | orchestrator | TASK [Pass quorum test if all monitors are in quorum] **************************
2026-02-08 04:59:05.146533 | orchestrator | Sunday 08 February 2026 04:58:58 +0000 (0:00:00.139) 0:00:10.138 *******
2026-02-08 04:59:05.146541 | orchestrator | ok: [testbed-node-0]
2026-02-08 04:59:05.146549 | orchestrator |
2026-02-08 04:59:05.146559 | orchestrator | TASK [Set fsid test vars] ******************************************************
2026-02-08 04:59:05.146568 | orchestrator | Sunday 08 February 2026 04:58:58 +0000 (0:00:00.355) 0:00:10.494 *******
2026-02-08 04:59:05.146576 | orchestrator | ok: [testbed-node-0]
2026-02-08 04:59:05.146586 | orchestrator |
2026-02-08 04:59:05.146595 | orchestrator | TASK [Fail Cluster FSID test if FSID does not match configuration] *************
2026-02-08 04:59:05.146606 | orchestrator | Sunday 08 February 2026 04:58:59 +0000 (0:00:00.369) 0:00:10.864 *******
2026-02-08 04:59:05.146615 | orchestrator | skipping: [testbed-node-0]
2026-02-08 04:59:05.146625 | orchestrator |
2026-02-08 04:59:05.146671 | orchestrator | TASK [Pass Cluster FSID test if it matches configuration] **********************
2026-02-08 04:59:05.146683 | orchestrator | Sunday 08 February 2026 04:58:59 +0000 (0:00:00.161) 0:00:11.026 *******
2026-02-08 04:59:05.146693 | orchestrator | ok: [testbed-node-0]
2026-02-08 04:59:05.146702 | orchestrator |
2026-02-08 04:59:05.146712 | orchestrator | TASK [Prepare status test vars] ************************************************
2026-02-08 04:59:05.146722 | orchestrator | Sunday 08 February 2026 04:58:59 +0000 (0:00:00.128) 0:00:11.154 *******
2026-02-08 04:59:05.146753 | orchestrator | ok: [testbed-node-0]
2026-02-08 04:59:05.146762 | orchestrator |
2026-02-08 04:59:05.146772 | orchestrator | TASK [Gather status data] ******************************************************
2026-02-08 04:59:05.146783 | orchestrator | Sunday 08 February 2026 04:58:59 +0000 (0:00:00.137) 0:00:11.292 *******
2026-02-08 04:59:05.146793 | orchestrator | changed: [testbed-node-0]
2026-02-08 04:59:05.146800 | orchestrator |
2026-02-08 04:59:05.146806 | orchestrator | TASK [Set health test data] ****************************************************
2026-02-08 04:59:05.146812 | orchestrator | Sunday 08 February 2026 04:59:00 +0000 (0:00:01.285) 0:00:12.577 *******
2026-02-08 04:59:05.146818 | orchestrator | ok: [testbed-node-0]
2026-02-08 04:59:05.146824 | orchestrator |
2026-02-08 04:59:05.146830 | orchestrator | TASK [Fail cluster-health if health is not acceptable] *************************
2026-02-08 04:59:05.146836 | orchestrator | Sunday 08 February 2026 04:59:01 +0000 (0:00:00.167) 0:00:12.952 *******
2026-02-08 04:59:05.146842 | orchestrator | skipping: [testbed-node-0]
2026-02-08 04:59:05.146848 | orchestrator |
2026-02-08 04:59:05.146854 | orchestrator | TASK [Pass cluster-health if health is acceptable] *****************************
2026-02-08 04:59:05.146860 | orchestrator | Sunday 08 February 2026 04:59:01 +0000 (0:00:00.138) 0:00:13.120 *******
2026-02-08 04:59:05.146866 | orchestrator | ok: [testbed-node-0]
2026-02-08 04:59:05.146872 | orchestrator |
2026-02-08 04:59:05.146878 | orchestrator | TASK [Fail cluster-health if health is not acceptable (strict)] ****************
2026-02-08 04:59:05.146884 | orchestrator | Sunday 08 February 2026 04:59:01 +0000 (0:00:00.148) 0:00:13.258 *******
2026-02-08 04:59:05.146890 | orchestrator | skipping: [testbed-node-0]
2026-02-08 04:59:05.146895 | orchestrator |
2026-02-08 04:59:05.146901 | orchestrator | TASK [Pass cluster-health if status is OK (strict)] ****************************
2026-02-08 04:59:05.146907 | orchestrator | Sunday 08 February 2026 04:59:01 +0000 (0:00:00.148) 0:00:13.407 *******
2026-02-08 04:59:05.146920 | orchestrator | skipping: [testbed-node-0]
2026-02-08 04:59:05.146930 | orchestrator |
2026-02-08 04:59:05.146940 | orchestrator | TASK [Set validation result to passed if no test failed] ***********************
2026-02-08 04:59:05.146949 | orchestrator | Sunday 08 February 2026 04:59:01 +0000 (0:00:00.375) 0:00:13.782 *******
2026-02-08 04:59:05.146959 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-02-08 04:59:05.146968 | orchestrator |
2026-02-08 04:59:05.146978 | orchestrator | TASK [Set validation result to failed if a test failed] ************************
2026-02-08 04:59:05.146987 | orchestrator | Sunday 08 February 2026 04:59:02 +0000 (0:00:00.274) 0:00:14.057 *******
2026-02-08 04:59:05.147006 | orchestrator | skipping: [testbed-node-0]
2026-02-08 04:59:05.147015 | orchestrator |
2026-02-08 04:59:05.147025 | orchestrator | TASK [Aggregate test results step one] *****************************************
2026-02-08 04:59:05.147034 | orchestrator | Sunday 08 February 2026 04:59:02 +0000 (0:00:00.262) 0:00:14.320 *******
2026-02-08 04:59:05.147044 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-02-08 04:59:05.147054 | orchestrator |
2026-02-08 04:59:05.147064 | orchestrator | TASK [Aggregate test results step two] *****************************************
2026-02-08 04:59:05.147074 | orchestrator | Sunday 08 February 2026 04:59:04 +0000 (0:00:01.896) 0:00:16.216 *******
2026-02-08 04:59:05.147083 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-02-08 04:59:05.147093 | orchestrator |
2026-02-08 04:59:05.147103 | orchestrator | TASK [Aggregate test results step three] ***************************************
2026-02-08 04:59:05.147112 | orchestrator | Sunday 08 February 2026 04:59:04 +0000 (0:00:00.281) 0:00:16.497 *******
2026-02-08 04:59:05.147122 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-02-08 04:59:05.147131 | orchestrator |
2026-02-08 04:59:05.147151 | orchestrator | TASK [Flush handlers] **********************************************************
2026-02-08 04:59:08.169205 | orchestrator | Sunday 08 February 2026 04:59:04 +0000 (0:00:00.276) 0:00:16.774 *******
2026-02-08 04:59:08.169305 | orchestrator |
2026-02-08 04:59:08.169318 | orchestrator | TASK [Flush handlers] **********************************************************
2026-02-08 04:59:08.169328 | orchestrator | Sunday 08 February 2026 04:59:04 +0000 (0:00:00.073) 0:00:16.847 *******
2026-02-08 04:59:08.169337 | orchestrator |
2026-02-08 04:59:08.169348 | orchestrator | TASK [Flush handlers] **********************************************************
2026-02-08 04:59:08.169357 | orchestrator | Sunday 08 February 2026 04:59:05 +0000 (0:00:00.073) 0:00:16.921 *******
2026-02-08 04:59:08.169366 | orchestrator |
2026-02-08 04:59:08.169376 | orchestrator | RUNNING HANDLER [Write report file] ********************************************
2026-02-08 04:59:08.169385 | orchestrator | Sunday 08 February 2026 04:59:05 +0000 (0:00:00.076) 0:00:16.997 *******
2026-02-08 04:59:08.169395 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-02-08 04:59:08.169404 | orchestrator |
2026-02-08 04:59:08.169414 | orchestrator | TASK [Print report file information] *******************************************
2026-02-08 04:59:08.169422 | orchestrator | Sunday 08 February 2026 04:59:06 +0000 (0:00:01.637) 0:00:18.634 *******
2026-02-08 04:59:08.169445 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => {
2026-02-08 04:59:08.169469 | orchestrator |  "msg": [
2026-02-08
04:59:08.169486 | orchestrator |  "Validator run completed.", 2026-02-08 04:59:08.169501 | orchestrator |  "You can find the report file here:", 2026-02-08 04:59:08.169516 | orchestrator |  "/opt/reports/validator/ceph-mons-validator-2026-02-08T04:58:49+00:00-report.json", 2026-02-08 04:59:08.169530 | orchestrator |  "on the following host:", 2026-02-08 04:59:08.169545 | orchestrator |  "testbed-manager" 2026-02-08 04:59:08.169559 | orchestrator |  ] 2026-02-08 04:59:08.169573 | orchestrator | } 2026-02-08 04:59:08.169587 | orchestrator | 2026-02-08 04:59:08.169600 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-08 04:59:08.169615 | orchestrator | testbed-node-0 : ok=24  changed=5  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0 2026-02-08 04:59:08.169631 | orchestrator | testbed-node-1 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-08 04:59:08.169645 | orchestrator | testbed-node-2 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-08 04:59:08.169659 | orchestrator | 2026-02-08 04:59:08.169671 | orchestrator | 2026-02-08 04:59:08.169685 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-08 04:59:08.169699 | orchestrator | Sunday 08 February 2026 04:59:07 +0000 (0:00:00.949) 0:00:19.584 ******* 2026-02-08 04:59:08.169771 | orchestrator | =============================================================================== 2026-02-08 04:59:08.169789 | orchestrator | Aggregate test results step one ----------------------------------------- 1.90s 2026-02-08 04:59:08.169806 | orchestrator | Write report file ------------------------------------------------------- 1.64s 2026-02-08 04:59:08.169820 | orchestrator | Get monmap info from one mon container ---------------------------------- 1.55s 2026-02-08 04:59:08.169832 | orchestrator | Gather status data 
------------------------------------------------------ 1.29s 2026-02-08 04:59:08.169843 | orchestrator | Create report output directory ------------------------------------------ 1.09s 2026-02-08 04:59:08.169853 | orchestrator | Get container info ------------------------------------------------------ 1.07s 2026-02-08 04:59:08.169864 | orchestrator | Print report file information ------------------------------------------- 0.95s 2026-02-08 04:59:08.169875 | orchestrator | Get timestamp for report file ------------------------------------------- 0.89s 2026-02-08 04:59:08.169886 | orchestrator | Set quorum test data ---------------------------------------------------- 0.61s 2026-02-08 04:59:08.169900 | orchestrator | Set test result to passed if container is existing ---------------------- 0.54s 2026-02-08 04:59:08.169933 | orchestrator | Set test result to passed if ceph-mon is running ------------------------ 0.53s 2026-02-08 04:59:08.169949 | orchestrator | Set health test data ---------------------------------------------------- 0.38s 2026-02-08 04:59:08.169962 | orchestrator | Pass cluster-health if status is OK (strict) ---------------------------- 0.38s 2026-02-08 04:59:08.169977 | orchestrator | Set fsid test vars ------------------------------------------------------ 0.37s 2026-02-08 04:59:08.169993 | orchestrator | Prepare test data ------------------------------------------------------- 0.36s 2026-02-08 04:59:08.170008 | orchestrator | Pass quorum test if all monitors are in quorum -------------------------- 0.36s 2026-02-08 04:59:08.170135 | orchestrator | Set test result to failed if ceph-mon is not running -------------------- 0.34s 2026-02-08 04:59:08.170148 | orchestrator | Print report file information ------------------------------------------- 0.34s 2026-02-08 04:59:08.170156 | orchestrator | Set test result to failed if container is missing ----------------------- 0.32s 2026-02-08 04:59:08.170165 | orchestrator | Prepare test data for container 
existance test -------------------------- 0.31s 2026-02-08 04:59:08.524245 | orchestrator | + osism validate ceph-mgrs 2026-02-08 04:59:42.014165 | orchestrator | 2026-02-08 04:59:42.014269 | orchestrator | PLAY [Ceph validate mgrs] ****************************************************** 2026-02-08 04:59:42.014278 | orchestrator | 2026-02-08 04:59:42.014284 | orchestrator | TASK [Get timestamp for report file] ******************************************* 2026-02-08 04:59:42.014289 | orchestrator | Sunday 08 February 2026 04:59:25 +0000 (0:00:00.549) 0:00:00.549 ******* 2026-02-08 04:59:42.014295 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-02-08 04:59:42.014299 | orchestrator | 2026-02-08 04:59:42.014304 | orchestrator | TASK [Create report output directory] ****************************************** 2026-02-08 04:59:42.014308 | orchestrator | Sunday 08 February 2026 04:59:26 +0000 (0:00:00.902) 0:00:01.451 ******* 2026-02-08 04:59:42.014321 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-02-08 04:59:42.014325 | orchestrator | 2026-02-08 04:59:42.014330 | orchestrator | TASK [Define report vars] ****************************************************** 2026-02-08 04:59:42.014334 | orchestrator | Sunday 08 February 2026 04:59:27 +0000 (0:00:01.023) 0:00:02.475 ******* 2026-02-08 04:59:42.014338 | orchestrator | ok: [testbed-node-0] 2026-02-08 04:59:42.014344 | orchestrator | 2026-02-08 04:59:42.014348 | orchestrator | TASK [Prepare test data for container existance test] ************************** 2026-02-08 04:59:42.014352 | orchestrator | Sunday 08 February 2026 04:59:28 +0000 (0:00:00.138) 0:00:02.614 ******* 2026-02-08 04:59:42.014356 | orchestrator | ok: [testbed-node-0] 2026-02-08 04:59:42.014360 | orchestrator | ok: [testbed-node-1] 2026-02-08 04:59:42.014364 | orchestrator | ok: [testbed-node-2] 2026-02-08 04:59:42.014369 | orchestrator | 2026-02-08 04:59:42.014373 | orchestrator | TASK [Get container 
info] ****************************************************** 2026-02-08 04:59:42.014377 | orchestrator | Sunday 08 February 2026 04:59:28 +0000 (0:00:00.305) 0:00:02.919 ******* 2026-02-08 04:59:42.014397 | orchestrator | ok: [testbed-node-2] 2026-02-08 04:59:42.014402 | orchestrator | ok: [testbed-node-1] 2026-02-08 04:59:42.014406 | orchestrator | ok: [testbed-node-0] 2026-02-08 04:59:42.014410 | orchestrator | 2026-02-08 04:59:42.014414 | orchestrator | TASK [Set test result to failed if container is missing] *********************** 2026-02-08 04:59:42.014418 | orchestrator | Sunday 08 February 2026 04:59:29 +0000 (0:00:01.056) 0:00:03.975 ******* 2026-02-08 04:59:42.014422 | orchestrator | skipping: [testbed-node-0] 2026-02-08 04:59:42.014426 | orchestrator | skipping: [testbed-node-1] 2026-02-08 04:59:42.014431 | orchestrator | skipping: [testbed-node-2] 2026-02-08 04:59:42.014435 | orchestrator | 2026-02-08 04:59:42.014439 | orchestrator | TASK [Set test result to passed if container is existing] ********************** 2026-02-08 04:59:42.014443 | orchestrator | Sunday 08 February 2026 04:59:29 +0000 (0:00:00.291) 0:00:04.267 ******* 2026-02-08 04:59:42.014448 | orchestrator | ok: [testbed-node-0] 2026-02-08 04:59:42.014452 | orchestrator | ok: [testbed-node-1] 2026-02-08 04:59:42.014456 | orchestrator | ok: [testbed-node-2] 2026-02-08 04:59:42.014460 | orchestrator | 2026-02-08 04:59:42.014465 | orchestrator | TASK [Prepare test data] ******************************************************* 2026-02-08 04:59:42.014469 | orchestrator | Sunday 08 February 2026 04:59:30 +0000 (0:00:00.543) 0:00:04.810 ******* 2026-02-08 04:59:42.014473 | orchestrator | ok: [testbed-node-0] 2026-02-08 04:59:42.014477 | orchestrator | ok: [testbed-node-1] 2026-02-08 04:59:42.014481 | orchestrator | ok: [testbed-node-2] 2026-02-08 04:59:42.014485 | orchestrator | 2026-02-08 04:59:42.014489 | orchestrator | TASK [Set test result to failed if ceph-mgr is not running] 
******************** 2026-02-08 04:59:42.014493 | orchestrator | Sunday 08 February 2026 04:59:30 +0000 (0:00:00.382) 0:00:05.192 ******* 2026-02-08 04:59:42.014497 | orchestrator | skipping: [testbed-node-0] 2026-02-08 04:59:42.014501 | orchestrator | skipping: [testbed-node-1] 2026-02-08 04:59:42.014505 | orchestrator | skipping: [testbed-node-2] 2026-02-08 04:59:42.014509 | orchestrator | 2026-02-08 04:59:42.014513 | orchestrator | TASK [Set test result to passed if ceph-mgr is running] ************************ 2026-02-08 04:59:42.014517 | orchestrator | Sunday 08 February 2026 04:59:30 +0000 (0:00:00.350) 0:00:05.542 ******* 2026-02-08 04:59:42.014521 | orchestrator | ok: [testbed-node-0] 2026-02-08 04:59:42.014525 | orchestrator | ok: [testbed-node-1] 2026-02-08 04:59:42.014529 | orchestrator | ok: [testbed-node-2] 2026-02-08 04:59:42.014534 | orchestrator | 2026-02-08 04:59:42.014538 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2026-02-08 04:59:42.014542 | orchestrator | Sunday 08 February 2026 04:59:31 +0000 (0:00:00.575) 0:00:06.118 ******* 2026-02-08 04:59:42.014546 | orchestrator | skipping: [testbed-node-0] 2026-02-08 04:59:42.014550 | orchestrator | 2026-02-08 04:59:42.014554 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2026-02-08 04:59:42.014558 | orchestrator | Sunday 08 February 2026 04:59:31 +0000 (0:00:00.311) 0:00:06.430 ******* 2026-02-08 04:59:42.014562 | orchestrator | skipping: [testbed-node-0] 2026-02-08 04:59:42.014566 | orchestrator | 2026-02-08 04:59:42.014570 | orchestrator | TASK [Aggregate test results step three] *************************************** 2026-02-08 04:59:42.014574 | orchestrator | Sunday 08 February 2026 04:59:32 +0000 (0:00:00.264) 0:00:06.694 ******* 2026-02-08 04:59:42.014578 | orchestrator | skipping: [testbed-node-0] 2026-02-08 04:59:42.014582 | orchestrator | 2026-02-08 04:59:42.014586 | orchestrator | TASK 
[Flush handlers] ********************************************************** 2026-02-08 04:59:42.014591 | orchestrator | Sunday 08 February 2026 04:59:32 +0000 (0:00:00.322) 0:00:07.016 ******* 2026-02-08 04:59:42.014595 | orchestrator | 2026-02-08 04:59:42.014599 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-02-08 04:59:42.014603 | orchestrator | Sunday 08 February 2026 04:59:32 +0000 (0:00:00.074) 0:00:07.091 ******* 2026-02-08 04:59:42.014608 | orchestrator | 2026-02-08 04:59:42.014612 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-02-08 04:59:42.014616 | orchestrator | Sunday 08 February 2026 04:59:32 +0000 (0:00:00.073) 0:00:07.165 ******* 2026-02-08 04:59:42.014624 | orchestrator | 2026-02-08 04:59:42.014628 | orchestrator | TASK [Print report file information] ******************************************* 2026-02-08 04:59:42.014632 | orchestrator | Sunday 08 February 2026 04:59:32 +0000 (0:00:00.085) 0:00:07.250 ******* 2026-02-08 04:59:42.014636 | orchestrator | skipping: [testbed-node-0] 2026-02-08 04:59:42.014641 | orchestrator | 2026-02-08 04:59:42.014644 | orchestrator | TASK [Fail due to missing containers] ****************************************** 2026-02-08 04:59:42.014649 | orchestrator | Sunday 08 February 2026 04:59:32 +0000 (0:00:00.264) 0:00:07.514 ******* 2026-02-08 04:59:42.014653 | orchestrator | skipping: [testbed-node-0] 2026-02-08 04:59:42.014657 | orchestrator | 2026-02-08 04:59:42.014673 | orchestrator | TASK [Define mgr module test vars] ********************************************* 2026-02-08 04:59:42.014679 | orchestrator | Sunday 08 February 2026 04:59:33 +0000 (0:00:00.252) 0:00:07.766 ******* 2026-02-08 04:59:42.014683 | orchestrator | ok: [testbed-node-0] 2026-02-08 04:59:42.014688 | orchestrator | 2026-02-08 04:59:42.014693 | orchestrator | TASK [Gather list of mgr modules] ********************************************** 
2026-02-08 04:59:42.014719 | orchestrator | Sunday 08 February 2026 04:59:33 +0000 (0:00:00.134) 0:00:07.900 ******* 2026-02-08 04:59:42.014724 | orchestrator | changed: [testbed-node-0] 2026-02-08 04:59:42.014729 | orchestrator | 2026-02-08 04:59:42.014733 | orchestrator | TASK [Parse mgr module list from json] ***************************************** 2026-02-08 04:59:42.014738 | orchestrator | Sunday 08 February 2026 04:59:35 +0000 (0:00:01.952) 0:00:09.853 ******* 2026-02-08 04:59:42.014743 | orchestrator | ok: [testbed-node-0] 2026-02-08 04:59:42.014747 | orchestrator | 2026-02-08 04:59:42.014768 | orchestrator | TASK [Extract list of enabled mgr modules] ************************************* 2026-02-08 04:59:42.014773 | orchestrator | Sunday 08 February 2026 04:59:35 +0000 (0:00:00.518) 0:00:10.372 ******* 2026-02-08 04:59:42.014778 | orchestrator | ok: [testbed-node-0] 2026-02-08 04:59:42.014782 | orchestrator | 2026-02-08 04:59:42.014787 | orchestrator | TASK [Fail test if mgr modules are disabled that should be enabled] ************ 2026-02-08 04:59:42.014792 | orchestrator | Sunday 08 February 2026 04:59:36 +0000 (0:00:00.344) 0:00:10.716 ******* 2026-02-08 04:59:42.014797 | orchestrator | skipping: [testbed-node-0] 2026-02-08 04:59:42.014801 | orchestrator | 2026-02-08 04:59:42.014806 | orchestrator | TASK [Pass test if required mgr modules are enabled] *************************** 2026-02-08 04:59:42.014810 | orchestrator | Sunday 08 February 2026 04:59:36 +0000 (0:00:00.161) 0:00:10.878 ******* 2026-02-08 04:59:42.014815 | orchestrator | ok: [testbed-node-0] 2026-02-08 04:59:42.014819 | orchestrator | 2026-02-08 04:59:42.014824 | orchestrator | TASK [Set validation result to passed if no test failed] *********************** 2026-02-08 04:59:42.014829 | orchestrator | Sunday 08 February 2026 04:59:36 +0000 (0:00:00.146) 0:00:11.024 ******* 2026-02-08 04:59:42.014834 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-02-08 
04:59:42.014838 | orchestrator | 2026-02-08 04:59:42.014843 | orchestrator | TASK [Set validation result to failed if a test failed] ************************ 2026-02-08 04:59:42.014848 | orchestrator | Sunday 08 February 2026 04:59:36 +0000 (0:00:00.312) 0:00:11.337 ******* 2026-02-08 04:59:42.014853 | orchestrator | skipping: [testbed-node-0] 2026-02-08 04:59:42.014857 | orchestrator | 2026-02-08 04:59:42.014862 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2026-02-08 04:59:42.014867 | orchestrator | Sunday 08 February 2026 04:59:37 +0000 (0:00:00.285) 0:00:11.622 ******* 2026-02-08 04:59:42.014872 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-02-08 04:59:42.014876 | orchestrator | 2026-02-08 04:59:42.014881 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2026-02-08 04:59:42.014886 | orchestrator | Sunday 08 February 2026 04:59:38 +0000 (0:00:01.501) 0:00:13.124 ******* 2026-02-08 04:59:42.014891 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-02-08 04:59:42.014895 | orchestrator | 2026-02-08 04:59:42.014900 | orchestrator | TASK [Aggregate test results step three] *************************************** 2026-02-08 04:59:42.014905 | orchestrator | Sunday 08 February 2026 04:59:38 +0000 (0:00:00.346) 0:00:13.470 ******* 2026-02-08 04:59:42.014915 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-02-08 04:59:42.014920 | orchestrator | 2026-02-08 04:59:42.014924 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-02-08 04:59:42.014929 | orchestrator | Sunday 08 February 2026 04:59:39 +0000 (0:00:00.349) 0:00:13.819 ******* 2026-02-08 04:59:42.014934 | orchestrator | 2026-02-08 04:59:42.014938 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-02-08 04:59:42.014943 | orchestrator 
| Sunday 08 February 2026 04:59:39 +0000 (0:00:00.085) 0:00:13.905 ******* 2026-02-08 04:59:42.014948 | orchestrator | 2026-02-08 04:59:42.014953 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-02-08 04:59:42.014958 | orchestrator | Sunday 08 February 2026 04:59:39 +0000 (0:00:00.092) 0:00:13.997 ******* 2026-02-08 04:59:42.014962 | orchestrator | 2026-02-08 04:59:42.014967 | orchestrator | RUNNING HANDLER [Write report file] ******************************************** 2026-02-08 04:59:42.014972 | orchestrator | Sunday 08 February 2026 04:59:39 +0000 (0:00:00.480) 0:00:14.478 ******* 2026-02-08 04:59:42.014977 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-02-08 04:59:42.014981 | orchestrator | 2026-02-08 04:59:42.014986 | orchestrator | TASK [Print report file information] ******************************************* 2026-02-08 04:59:42.014990 | orchestrator | Sunday 08 February 2026 04:59:41 +0000 (0:00:01.540) 0:00:16.018 ******* 2026-02-08 04:59:42.014994 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => { 2026-02-08 04:59:42.014998 | orchestrator |  "msg": [ 2026-02-08 04:59:42.015002 | orchestrator |  "Validator run completed.", 2026-02-08 04:59:42.015010 | orchestrator |  "You can find the report file here:", 2026-02-08 04:59:42.015014 | orchestrator |  "/opt/reports/validator/ceph-mgrs-validator-2026-02-08T04:59:26+00:00-report.json", 2026-02-08 04:59:42.015020 | orchestrator |  "on the following host:", 2026-02-08 04:59:42.015024 | orchestrator |  "testbed-manager" 2026-02-08 04:59:42.015028 | orchestrator |  ] 2026-02-08 04:59:42.015033 | orchestrator | } 2026-02-08 04:59:42.015037 | orchestrator | 2026-02-08 04:59:42.015041 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-08 04:59:42.015046 | orchestrator | testbed-node-0 : ok=19  changed=3  unreachable=0 failed=0 skipped=9  rescued=0 
ignored=0 2026-02-08 04:59:42.015051 | orchestrator | testbed-node-1 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-08 04:59:42.015061 | orchestrator | testbed-node-2 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-08 04:59:42.456365 | orchestrator | 2026-02-08 04:59:42.456458 | orchestrator | 2026-02-08 04:59:42.456469 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-08 04:59:42.456477 | orchestrator | Sunday 08 February 2026 04:59:41 +0000 (0:00:00.524) 0:00:16.542 ******* 2026-02-08 04:59:42.456484 | orchestrator | =============================================================================== 2026-02-08 04:59:42.456490 | orchestrator | Gather list of mgr modules ---------------------------------------------- 1.95s 2026-02-08 04:59:42.456497 | orchestrator | Write report file ------------------------------------------------------- 1.54s 2026-02-08 04:59:42.456503 | orchestrator | Aggregate test results step one ----------------------------------------- 1.50s 2026-02-08 04:59:42.456510 | orchestrator | Get container info ------------------------------------------------------ 1.06s 2026-02-08 04:59:42.456516 | orchestrator | Create report output directory ------------------------------------------ 1.02s 2026-02-08 04:59:42.456523 | orchestrator | Get timestamp for report file ------------------------------------------- 0.90s 2026-02-08 04:59:42.456529 | orchestrator | Flush handlers ---------------------------------------------------------- 0.66s 2026-02-08 04:59:42.456536 | orchestrator | Set test result to passed if ceph-mgr is running ------------------------ 0.58s 2026-02-08 04:59:42.456565 | orchestrator | Set test result to passed if container is existing ---------------------- 0.54s 2026-02-08 04:59:42.456572 | orchestrator | Print report file information ------------------------------------------- 0.52s 2026-02-08 04:59:42.456578 | 
orchestrator | Parse mgr module list from json ----------------------------------------- 0.52s 2026-02-08 04:59:42.456584 | orchestrator | Prepare test data ------------------------------------------------------- 0.38s 2026-02-08 04:59:42.456589 | orchestrator | Set test result to failed if ceph-mgr is not running -------------------- 0.35s 2026-02-08 04:59:42.456595 | orchestrator | Aggregate test results step three --------------------------------------- 0.35s 2026-02-08 04:59:42.456602 | orchestrator | Aggregate test results step two ----------------------------------------- 0.35s 2026-02-08 04:59:42.456608 | orchestrator | Extract list of enabled mgr modules ------------------------------------- 0.34s 2026-02-08 04:59:42.456614 | orchestrator | Aggregate test results step three --------------------------------------- 0.32s 2026-02-08 04:59:42.456621 | orchestrator | Set validation result to passed if no test failed ----------------------- 0.31s 2026-02-08 04:59:42.456626 | orchestrator | Aggregate test results step one ----------------------------------------- 0.31s 2026-02-08 04:59:42.456632 | orchestrator | Prepare test data for container existance test -------------------------- 0.31s 2026-02-08 04:59:42.923979 | orchestrator | + osism validate ceph-osds 2026-02-08 05:00:05.467014 | orchestrator | 2026-02-08 05:00:05.467110 | orchestrator | PLAY [Ceph validate OSDs] ****************************************************** 2026-02-08 05:00:05.467121 | orchestrator | 2026-02-08 05:00:05.467129 | orchestrator | TASK [Get timestamp for report file] ******************************************* 2026-02-08 05:00:05.467136 | orchestrator | Sunday 08 February 2026 05:00:00 +0000 (0:00:00.498) 0:00:00.498 ******* 2026-02-08 05:00:05.467144 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-02-08 05:00:05.467151 | orchestrator | 2026-02-08 05:00:05.467158 | orchestrator | TASK [Get extra vars for Ceph configuration] 
*********************************** 2026-02-08 05:00:05.467164 | orchestrator | Sunday 08 February 2026 05:00:01 +0000 (0:00:01.005) 0:00:01.504 ******* 2026-02-08 05:00:05.467170 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-02-08 05:00:05.467176 | orchestrator | 2026-02-08 05:00:05.467182 | orchestrator | TASK [Create report output directory] ****************************************** 2026-02-08 05:00:05.467189 | orchestrator | Sunday 08 February 2026 05:00:01 +0000 (0:00:00.598) 0:00:02.103 ******* 2026-02-08 05:00:05.467195 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-02-08 05:00:05.467201 | orchestrator | 2026-02-08 05:00:05.467207 | orchestrator | TASK [Define report vars] ****************************************************** 2026-02-08 05:00:05.467214 | orchestrator | Sunday 08 February 2026 05:00:02 +0000 (0:00:00.838) 0:00:02.941 ******* 2026-02-08 05:00:05.467220 | orchestrator | ok: [testbed-node-3] 2026-02-08 05:00:05.467229 | orchestrator | 2026-02-08 05:00:05.467236 | orchestrator | TASK [Define OSD test variables] *********************************************** 2026-02-08 05:00:05.467243 | orchestrator | Sunday 08 February 2026 05:00:02 +0000 (0:00:00.147) 0:00:03.089 ******* 2026-02-08 05:00:05.467250 | orchestrator | skipping: [testbed-node-3] 2026-02-08 05:00:05.467256 | orchestrator | 2026-02-08 05:00:05.467263 | orchestrator | TASK [Calculate OSD devices for each host] ************************************* 2026-02-08 05:00:05.467270 | orchestrator | Sunday 08 February 2026 05:00:02 +0000 (0:00:00.138) 0:00:03.228 ******* 2026-02-08 05:00:05.467277 | orchestrator | skipping: [testbed-node-3] 2026-02-08 05:00:05.467284 | orchestrator | skipping: [testbed-node-4] 2026-02-08 05:00:05.467306 | orchestrator | skipping: [testbed-node-5] 2026-02-08 05:00:05.467313 | orchestrator | 2026-02-08 05:00:05.467319 | orchestrator | TASK [Define OSD test variables] 
*********************************************** 2026-02-08 05:00:05.467326 | orchestrator | Sunday 08 February 2026 05:00:03 +0000 (0:00:00.335) 0:00:03.563 ******* 2026-02-08 05:00:05.467333 | orchestrator | ok: [testbed-node-3] 2026-02-08 05:00:05.467340 | orchestrator | 2026-02-08 05:00:05.467346 | orchestrator | TASK [Calculate OSD devices for each host] ************************************* 2026-02-08 05:00:05.467373 | orchestrator | Sunday 08 February 2026 05:00:03 +0000 (0:00:00.153) 0:00:03.716 ******* 2026-02-08 05:00:05.467380 | orchestrator | ok: [testbed-node-3] 2026-02-08 05:00:05.467385 | orchestrator | ok: [testbed-node-4] 2026-02-08 05:00:05.467395 | orchestrator | ok: [testbed-node-5] 2026-02-08 05:00:05.467401 | orchestrator | 2026-02-08 05:00:05.467409 | orchestrator | TASK [Calculate total number of OSDs in cluster] ******************************* 2026-02-08 05:00:05.467416 | orchestrator | Sunday 08 February 2026 05:00:03 +0000 (0:00:00.363) 0:00:04.080 ******* 2026-02-08 05:00:05.467422 | orchestrator | ok: [testbed-node-3] 2026-02-08 05:00:05.467429 | orchestrator | 2026-02-08 05:00:05.467436 | orchestrator | TASK [Prepare test data] ******************************************************* 2026-02-08 05:00:05.467443 | orchestrator | Sunday 08 February 2026 05:00:04 +0000 (0:00:00.927) 0:00:05.007 ******* 2026-02-08 05:00:05.467449 | orchestrator | ok: [testbed-node-3] 2026-02-08 05:00:05.467455 | orchestrator | ok: [testbed-node-4] 2026-02-08 05:00:05.467461 | orchestrator | ok: [testbed-node-5] 2026-02-08 05:00:05.467466 | orchestrator | 2026-02-08 05:00:05.467472 | orchestrator | TASK [Get list of ceph-osd containers on host] ********************************* 2026-02-08 05:00:05.467479 | orchestrator | Sunday 08 February 2026 05:00:05 +0000 (0:00:00.448) 0:00:05.456 ******* 2026-02-08 05:00:05.467488 | orchestrator | skipping: [testbed-node-3] => (item={'id': '562e308cf4957b4549eae1108bacf874627283ad2fe7c9281f9b7c941b26c8d2', 'image': 
'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'name': '/prometheus_libvirt_exporter', 'state': 'running', 'status': 'Up 9 minutes'})  2026-02-08 05:00:05.467497 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'c87dd0a3ad06f55179e57e9514902825ddd9cc16e9c13e0ec4578298e597a39f', 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 10 minutes'})  2026-02-08 05:00:05.467505 | orchestrator | skipping: [testbed-node-3] => (item={'id': '705c812a99d24c37c0ee485b67b094c0dea1318cdcbd34eeb35d1acbba5c3023', 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'name': '/prometheus_node_exporter', 'state': 'running', 'status': 'Up 10 minutes'})  2026-02-08 05:00:05.467511 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'c99711abd0afb2e56f898549fbe4961ef52dce11b24b08351ad4dae27949ee85', 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'name': '/ceilometer_compute', 'state': 'running', 'status': 'Up 20 minutes (unhealthy)'})  2026-02-08 05:00:05.467517 | orchestrator | skipping: [testbed-node-3] => (item={'id': '36e5f0c104b1bc77ce90d00a19ddca8517ef19c1354f02d4c60364af7d545b70', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'name': '/nova_compute', 'state': 'running', 'status': 'Up 40 minutes (healthy)'})  2026-02-08 05:00:05.467545 | orchestrator | skipping: [testbed-node-3] => (item={'id': '5a2c86f537aa72ba2fc2d295498a895bba183ddf0b16fc90db9398788918f29c', 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'name': '/nova_libvirt', 'state': 'running', 'status': 'Up 41 minutes (healthy)'})  2026-02-08 05:00:05.467553 | orchestrator | skipping: [testbed-node-3] => (item={'id': '1ff55349cc3fcb4ec1219a10ae843e11870505c7e47301ae5382880d8bb89820', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 
'name': '/nova_ssh', 'state': 'running', 'status': 'Up 42 minutes (healthy)'})  2026-02-08 05:00:05.467559 | orchestrator | skipping: [testbed-node-3] => (item={'id': '937983ffd518ee12a2fcc0e4d51bf114c80a6491ce1df81d308e50d818549d33', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'name': '/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up 48 minutes (healthy)'})  2026-02-08 05:00:05.467566 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'e1f2d8341caf98c0f913a16ef1523b0c333ec3dbc5b7ecd3160f6c9055c19a14', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-rgw-default-testbed-node-3-rgw0', 'state': 'running', 'status': 'Up About an hour'})  2026-02-08 05:00:05.467580 | orchestrator | skipping: [testbed-node-3] => (item={'id': '24fe2a1f1c0e9497612b0a8744ed59bdce360fc4a3696f13ed622764c4fa9857', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-mds-testbed-node-3', 'state': 'running', 'status': 'Up About an hour'})  2026-02-08 05:00:05.467589 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'a592513ad0624465b7a8ba42f2fee73a82988672cf9f25b46478359a8d83abf9', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-crash-testbed-node-3', 'state': 'running', 'status': 'Up About an hour'})  2026-02-08 05:00:05.467597 | orchestrator | ok: [testbed-node-3] => (item={'id': 'e71d7991851e9c4e99e13a1d8b7aed21434689c1e00bcd64c96b9538990e93b4', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-3', 'state': 'running', 'status': 'Up About an hour'}) 2026-02-08 05:00:05.467605 | orchestrator | ok: [testbed-node-3] => (item={'id': '756f0d52f90a458377e43f154f1e907cff6b31fc34361d58a87e0b2ecde7f172', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-0', 'state': 'running', 'status': 'Up About an hour'}) 2026-02-08 05:00:05.467612 | orchestrator | skipping: [testbed-node-3] => (item={'id': 
'c5bed0d113e685cd2ef74c6ac830a90c6d02ca8d12e6b55275eb52c8251a41c1', 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'name': '/ovn_controller', 'state': 'running', 'status': 'Up About an hour'})  2026-02-08 05:00:05.467619 | orchestrator | skipping: [testbed-node-3] => (item={'id': '925453847d9148c802592bc166d5311158d4b7071ffb7e24b6c832d1aa00a1a1', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'name': '/openvswitch_vswitchd', 'state': 'running', 'status': 'Up About an hour (healthy)'})  2026-02-08 05:00:05.467625 | orchestrator | skipping: [testbed-node-3] => (item={'id': '49baa0bb6cef9dd220afd0e74abe813ababf32009a88a6d13c27df60d31d0214', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'name': '/openvswitch_db', 'state': 'running', 'status': 'Up About an hour (healthy)'})  2026-02-08 05:00:05.467633 | orchestrator | skipping: [testbed-node-3] => (item={'id': '6d175d815b2d1b662704a78e1c638a8c4ef30d7ae0b72d4205e55c331e6bfb8c', 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'name': '/cron', 'state': 'running', 'status': 'Up 2 hours'})  2026-02-08 05:00:05.467640 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'f4d80a06b2e71c1d33e0010d61127d68b038190c094e4237f5cdbaa49aef2c3c', 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'name': '/kolla_toolbox', 'state': 'running', 'status': 'Up 2 hours'})  2026-02-08 05:00:05.467647 | orchestrator | skipping: [testbed-node-3] => (item={'id': '36c2c57a9c05a98fb717ff1bdbc6d328a41507b13cbe7597ff6b82489910d74f', 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'name': '/fluentd', 'state': 'running', 'status': 'Up 2 hours'})  2026-02-08 05:00:05.467654 | orchestrator | skipping: [testbed-node-4] => (item={'id': '1185450de71ceca35179ee124b8d0a26cce899bd066678a721e496cad6dc54c1', 'image': 
'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'name': '/prometheus_libvirt_exporter', 'state': 'running', 'status': 'Up 9 minutes'})  2026-02-08 05:00:05.467667 | orchestrator | skipping: [testbed-node-4] => (item={'id': '830d7a98050a63cbd3760ce910891272f7bdb289d3ce159bde351cf5f8a5dfab', 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 10 minutes'})  2026-02-08 05:00:05.760619 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'a7fa2fdc75d597cc2f0ccff848cca4596fb5ebd30b5a062fc8a05b38f2257aee', 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'name': '/prometheus_node_exporter', 'state': 'running', 'status': 'Up 10 minutes'})  2026-02-08 05:00:05.760800 | orchestrator | skipping: [testbed-node-4] => (item={'id': '8930d71ba0d36bc36054d3344cc0550147016df06fb661b35cd51bcb90194b7c', 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'name': '/ceilometer_compute', 'state': 'running', 'status': 'Up 20 minutes (unhealthy)'})  2026-02-08 05:00:05.760835 | orchestrator | skipping: [testbed-node-4] => (item={'id': '8ae603c33aecde19556b233efd7582e0510b37d1c68e7ee2c4f8c0ef4af7455a', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'name': '/nova_compute', 'state': 'running', 'status': 'Up 40 minutes (healthy)'})  2026-02-08 05:00:05.760845 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'ae6db46b847db9c1ebade933ccff16754edeaae17356617aa8f45d17f2b3e186', 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'name': '/nova_libvirt', 'state': 'running', 'status': 'Up 41 minutes (healthy)'})  2026-02-08 05:00:05.760855 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'b7a4fb228be0cd22e43fd6d2be889a96175d09412e99836842d8951f1157adf3', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 
'name': '/nova_ssh', 'state': 'running', 'status': 'Up 42 minutes (healthy)'})  2026-02-08 05:00:05.760861 | orchestrator | skipping: [testbed-node-4] => (item={'id': '290ae56ab323831b6edeee92577d7cebe218a66ce2895eff2951ce44f35d93da', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'name': '/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up 48 minutes (healthy)'})  2026-02-08 05:00:05.760868 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'fe10a66a673bc6f126f8aeb9c75eaff8f3e94eb8e9cfc8b8aa571230de7467af', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-rgw-default-testbed-node-4-rgw0', 'state': 'running', 'status': 'Up About an hour'})  2026-02-08 05:00:05.760875 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'bfb7452931d021738fcfce54cb64614eac2727d18c9c456c27b0ab0de2050b43', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-mds-testbed-node-4', 'state': 'running', 'status': 'Up About an hour'})  2026-02-08 05:00:05.760881 | orchestrator | skipping: [testbed-node-4] => (item={'id': '2bd96718d42d132eca2963a29d4cf2ba93d7dc77154aaca1c3f6073093d7f2ac', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-crash-testbed-node-4', 'state': 'running', 'status': 'Up About an hour'})  2026-02-08 05:00:05.760890 | orchestrator | ok: [testbed-node-4] => (item={'id': '96940b1b7ad2b8d552589addb8f61a26e95068562017723169f384e1abb9c293', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-1', 'state': 'running', 'status': 'Up About an hour'}) 2026-02-08 05:00:05.760899 | orchestrator | ok: [testbed-node-4] => (item={'id': '6cba2ef894f6eb94ce13c8c34a4c1cb93f6bbb54abd21d758f2158c44dfcd437', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-5', 'state': 'running', 'status': 'Up About an hour'}) 2026-02-08 05:00:05.760905 | orchestrator | skipping: [testbed-node-4] => (item={'id': 
'85871ec76305405d0b85b931e151836d7b8ab00f4a6a0aefb4e104a74ac89683', 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'name': '/ovn_controller', 'state': 'running', 'status': 'Up About an hour'})  2026-02-08 05:00:05.760912 | orchestrator | skipping: [testbed-node-4] => (item={'id': '7276e9deaa617161dfadaa6f2a1e4d817c37a879e6f552ada2296358403ea420', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'name': '/openvswitch_vswitchd', 'state': 'running', 'status': 'Up About an hour (healthy)'})  2026-02-08 05:00:05.760919 | orchestrator | skipping: [testbed-node-4] => (item={'id': '0686baaf479981408bfd65d7f6e53e3ecfd89d1d554954dea52a192b77dbcfa3', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'name': '/openvswitch_db', 'state': 'running', 'status': 'Up About an hour (healthy)'})  2026-02-08 05:00:05.760943 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'fdcb8b7336b293c36f0775c6a93e127714a28a49f0e3a1562f27d36384d1442f', 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'name': '/cron', 'state': 'running', 'status': 'Up 2 hours'})  2026-02-08 05:00:05.760957 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'd950b18052447e3f7f176e12d89e3967e3734e5294da8ed6847937c489e5cec8', 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'name': '/kolla_toolbox', 'state': 'running', 'status': 'Up 2 hours'})  2026-02-08 05:00:05.760965 | orchestrator | skipping: [testbed-node-4] => (item={'id': '794a39415e2cf772ebce8eadbaea87e596d78f91a0db07c47f0e4e397b351d4d', 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'name': '/fluentd', 'state': 'running', 'status': 'Up 2 hours'})  2026-02-08 05:00:05.760971 | orchestrator | skipping: [testbed-node-5] => (item={'id': '326add132ab314db24212fa3a6d1d8ea4b9d346e14655ad7c1153ad305a9f5d8', 'image': 
'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'name': '/prometheus_libvirt_exporter', 'state': 'running', 'status': 'Up 9 minutes'})  2026-02-08 05:00:05.760978 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'd8d90390bf0d978ba7603c07b249fe6a7b2abc36b9abdb56833f14f030eee5dd', 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 10 minutes'})  2026-02-08 05:00:05.760989 | orchestrator | skipping: [testbed-node-5] => (item={'id': '19e8ae57b178a5a0a4f912588f36004e0837e56b1c92ef733d1ba76e78f70db7', 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'name': '/prometheus_node_exporter', 'state': 'running', 'status': 'Up 10 minutes'})  2026-02-08 05:00:05.760996 | orchestrator | skipping: [testbed-node-5] => (item={'id': '31c10c7cbba9ff742df563dad1557c1221e9a2b078bec7e22b954759548305c4', 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'name': '/ceilometer_compute', 'state': 'running', 'status': 'Up 20 minutes (unhealthy)'})  2026-02-08 05:00:05.761003 | orchestrator | skipping: [testbed-node-5] => (item={'id': '3634083e06810155690b3ea1de9c7af4ffc4f7dfa3c5e8509e7a753daf366281', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'name': '/nova_compute', 'state': 'running', 'status': 'Up 40 minutes (healthy)'})  2026-02-08 05:00:05.761010 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'eab1e13fb1b87e50772339a66b3080be65588088b2f9819add7262b1087d5722', 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'name': '/nova_libvirt', 'state': 'running', 'status': 'Up 41 minutes (healthy)'})  2026-02-08 05:00:05.761017 | orchestrator | skipping: [testbed-node-5] => (item={'id': '54dcea7d5200484c0dd8e35d5dd9cd016a06ec240ed5631787153145aef9f84a', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 
'name': '/nova_ssh', 'state': 'running', 'status': 'Up 42 minutes (healthy)'})  2026-02-08 05:00:05.761024 | orchestrator | skipping: [testbed-node-5] => (item={'id': '34ec04a78eab7b0a554b4f5a94190c2eda308d72f5b0b73e0bf986c81cfa9949', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'name': '/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up 48 minutes (healthy)'})  2026-02-08 05:00:05.761030 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'cb1d8d49c53653e4e7d792567fd9c4d83891d7d14946e5573b6cf22529639252', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-rgw-default-testbed-node-5-rgw0', 'state': 'running', 'status': 'Up About an hour'})  2026-02-08 05:00:05.761037 | orchestrator | skipping: [testbed-node-5] => (item={'id': '7a5de3abdb7c411224787eb9619d7f08a2b9149cd69bace0c1701900928a9c71', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-mds-testbed-node-5', 'state': 'running', 'status': 'Up About an hour'})  2026-02-08 05:00:05.761044 | orchestrator | skipping: [testbed-node-5] => (item={'id': '60ad636eb33e21140564ebaca8c12afd73b5e81c72c6b0b503bfa84cda883f7a', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-crash-testbed-node-5', 'state': 'running', 'status': 'Up About an hour'})  2026-02-08 05:00:05.761055 | orchestrator | ok: [testbed-node-5] => (item={'id': 'f248a459906ab2092331f2813d0bed7aeb86c6391e0edf44131dc7c852f7cfca', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-2', 'state': 'running', 'status': 'Up About an hour'}) 2026-02-08 05:00:05.761067 | orchestrator | ok: [testbed-node-5] => (item={'id': '1391741a29cb1775698ad74da2d33c002eeb8185c4edf7863e5f5b24de03d184', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-4', 'state': 'running', 'status': 'Up About an hour'}) 2026-02-08 05:00:17.726066 | orchestrator | skipping: [testbed-node-5] => (item={'id': 
'0c964ca77e621f8407f7e013729b19bd976946999c7902347859b93732c3584e', 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'name': '/ovn_controller', 'state': 'running', 'status': 'Up About an hour'})  2026-02-08 05:00:17.726197 | orchestrator | skipping: [testbed-node-5] => (item={'id': '1ef3df55fbd8571324f5b4e16a6a91c07a48a7662bbc7e7c90f186f9dfc1c700', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'name': '/openvswitch_vswitchd', 'state': 'running', 'status': 'Up About an hour (healthy)'})  2026-02-08 05:00:17.726221 | orchestrator | skipping: [testbed-node-5] => (item={'id': '22b4615c8faad991c84cf10e321264dcadbb4d4a4196d12412db9f765149ebef', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'name': '/openvswitch_db', 'state': 'running', 'status': 'Up About an hour (healthy)'})  2026-02-08 05:00:17.726245 | orchestrator | skipping: [testbed-node-5] => (item={'id': '5e722e827781d17c056754f3e24a8463777c0997d8eb5dfe7745a15d62884562', 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'name': '/cron', 'state': 'running', 'status': 'Up 2 hours'})  2026-02-08 05:00:17.726277 | orchestrator | skipping: [testbed-node-5] => (item={'id': '9bbb233042ca7d325e7fc53f58b6926e9d20b87ac3f525fa7514a75b65684727', 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'name': '/kolla_toolbox', 'state': 'running', 'status': 'Up 2 hours'})  2026-02-08 05:00:17.726292 | orchestrator | skipping: [testbed-node-5] => (item={'id': '6cbcb5556175a5b7ae12e8a55cfb184652955480ab2c3a28aa3d872d212bd1cb', 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'name': '/fluentd', 'state': 'running', 'status': 'Up 2 hours'})  2026-02-08 05:00:17.726305 | orchestrator | 2026-02-08 05:00:17.726316 | orchestrator | TASK [Get count of ceph-osd containers on host] ******************************** 2026-02-08 05:00:17.726325 | orchestrator | Sunday 08 February 2026 
05:00:05 +0000 (0:00:00.616) 0:00:06.072 ******* 2026-02-08 05:00:17.726332 | orchestrator | ok: [testbed-node-3] 2026-02-08 05:00:17.726341 | orchestrator | ok: [testbed-node-4] 2026-02-08 05:00:17.726348 | orchestrator | ok: [testbed-node-5] 2026-02-08 05:00:17.726355 | orchestrator | 2026-02-08 05:00:17.726363 | orchestrator | TASK [Set test result to failed when count of containers is wrong] ************* 2026-02-08 05:00:17.726371 | orchestrator | Sunday 08 February 2026 05:00:06 +0000 (0:00:00.357) 0:00:06.429 ******* 2026-02-08 05:00:17.726378 | orchestrator | skipping: [testbed-node-3] 2026-02-08 05:00:17.726387 | orchestrator | skipping: [testbed-node-4] 2026-02-08 05:00:17.726394 | orchestrator | skipping: [testbed-node-5] 2026-02-08 05:00:17.726402 | orchestrator | 2026-02-08 05:00:17.726409 | orchestrator | TASK [Set test result to passed if count matches] ****************************** 2026-02-08 05:00:17.726417 | orchestrator | Sunday 08 February 2026 05:00:06 +0000 (0:00:00.562) 0:00:06.991 ******* 2026-02-08 05:00:17.726424 | orchestrator | ok: [testbed-node-3] 2026-02-08 05:00:17.726432 | orchestrator | ok: [testbed-node-4] 2026-02-08 05:00:17.726439 | orchestrator | ok: [testbed-node-5] 2026-02-08 05:00:17.726446 | orchestrator | 2026-02-08 05:00:17.726454 | orchestrator | TASK [Prepare test data] ******************************************************* 2026-02-08 05:00:17.726461 | orchestrator | Sunday 08 February 2026 05:00:06 +0000 (0:00:00.325) 0:00:07.317 ******* 2026-02-08 05:00:17.726469 | orchestrator | ok: [testbed-node-3] 2026-02-08 05:00:17.726476 | orchestrator | ok: [testbed-node-4] 2026-02-08 05:00:17.726503 | orchestrator | ok: [testbed-node-5] 2026-02-08 05:00:17.726511 | orchestrator | 2026-02-08 05:00:17.726519 | orchestrator | TASK [Get list of ceph-osd containers that are not running] ******************** 2026-02-08 05:00:17.726526 | orchestrator | Sunday 08 February 2026 05:00:07 +0000 (0:00:00.320) 0:00:07.637 ******* 
2026-02-08 05:00:17.726534 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'ceph-osd-3', 'osd_id': '3', 'state': 'running'})  2026-02-08 05:00:17.726542 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'ceph-osd-0', 'osd_id': '0', 'state': 'running'})  2026-02-08 05:00:17.726550 | orchestrator | skipping: [testbed-node-3] 2026-02-08 05:00:17.726557 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'ceph-osd-1', 'osd_id': '1', 'state': 'running'})  2026-02-08 05:00:17.726567 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'ceph-osd-5', 'osd_id': '5', 'state': 'running'})  2026-02-08 05:00:17.726577 | orchestrator | skipping: [testbed-node-4] 2026-02-08 05:00:17.726585 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'ceph-osd-2', 'osd_id': '2', 'state': 'running'})  2026-02-08 05:00:17.726594 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'ceph-osd-4', 'osd_id': '4', 'state': 'running'})  2026-02-08 05:00:17.726602 | orchestrator | skipping: [testbed-node-5] 2026-02-08 05:00:17.726611 | orchestrator | 2026-02-08 05:00:17.726619 | orchestrator | TASK [Get count of ceph-osd containers that are not running] ******************* 2026-02-08 05:00:17.726628 | orchestrator | Sunday 08 February 2026 05:00:07 +0000 (0:00:00.312) 0:00:07.950 ******* 2026-02-08 05:00:17.726637 | orchestrator | ok: [testbed-node-3] 2026-02-08 05:00:17.726649 | orchestrator | ok: [testbed-node-4] 2026-02-08 05:00:17.726662 | orchestrator | ok: [testbed-node-5] 2026-02-08 05:00:17.726727 | orchestrator | 2026-02-08 05:00:17.726747 | orchestrator | TASK [Set test result to failed if an OSD is not running] ********************** 2026-02-08 05:00:17.726757 | orchestrator | Sunday 08 February 2026 05:00:08 +0000 (0:00:00.590) 0:00:08.541 ******* 2026-02-08 05:00:17.726766 | orchestrator | skipping: [testbed-node-3] 2026-02-08 05:00:17.726792 | orchestrator | skipping: [testbed-node-4] 2026-02-08 05:00:17.726801 | 
orchestrator | skipping: [testbed-node-5] 2026-02-08 05:00:17.726810 | orchestrator | 2026-02-08 05:00:17.726819 | orchestrator | TASK [Set test result to failed if an OSD is not running] ********************** 2026-02-08 05:00:17.726829 | orchestrator | Sunday 08 February 2026 05:00:08 +0000 (0:00:00.374) 0:00:08.916 ******* 2026-02-08 05:00:17.726838 | orchestrator | skipping: [testbed-node-3] 2026-02-08 05:00:17.726847 | orchestrator | skipping: [testbed-node-4] 2026-02-08 05:00:17.726856 | orchestrator | skipping: [testbed-node-5] 2026-02-08 05:00:17.726865 | orchestrator | 2026-02-08 05:00:17.726874 | orchestrator | TASK [Set test result to passed if all containers are running] ***************** 2026-02-08 05:00:17.726883 | orchestrator | Sunday 08 February 2026 05:00:08 +0000 (0:00:00.373) 0:00:09.289 ******* 2026-02-08 05:00:17.726892 | orchestrator | ok: [testbed-node-3] 2026-02-08 05:00:17.726900 | orchestrator | ok: [testbed-node-4] 2026-02-08 05:00:17.726909 | orchestrator | ok: [testbed-node-5] 2026-02-08 05:00:17.726919 | orchestrator | 2026-02-08 05:00:17.726928 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2026-02-08 05:00:17.726935 | orchestrator | Sunday 08 February 2026 05:00:09 +0000 (0:00:00.327) 0:00:09.617 ******* 2026-02-08 05:00:17.726943 | orchestrator | skipping: [testbed-node-3] 2026-02-08 05:00:17.726951 | orchestrator | 2026-02-08 05:00:17.726958 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2026-02-08 05:00:17.726966 | orchestrator | Sunday 08 February 2026 05:00:10 +0000 (0:00:00.796) 0:00:10.413 ******* 2026-02-08 05:00:17.726973 | orchestrator | skipping: [testbed-node-3] 2026-02-08 05:00:17.726981 | orchestrator | 2026-02-08 05:00:17.726988 | orchestrator | TASK [Aggregate test results step three] *************************************** 2026-02-08 05:00:17.726996 | orchestrator | Sunday 08 February 2026 05:00:10 +0000 
(0:00:00.293) 0:00:10.707 ******* 2026-02-08 05:00:17.727003 | orchestrator | skipping: [testbed-node-3] 2026-02-08 05:00:17.727010 | orchestrator | 2026-02-08 05:00:17.727018 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-02-08 05:00:17.727033 | orchestrator | Sunday 08 February 2026 05:00:10 +0000 (0:00:00.294) 0:00:11.001 ******* 2026-02-08 05:00:17.727041 | orchestrator | 2026-02-08 05:00:17.727048 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-02-08 05:00:17.727056 | orchestrator | Sunday 08 February 2026 05:00:10 +0000 (0:00:00.075) 0:00:11.076 ******* 2026-02-08 05:00:17.727064 | orchestrator | 2026-02-08 05:00:17.727071 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-02-08 05:00:17.727078 | orchestrator | Sunday 08 February 2026 05:00:10 +0000 (0:00:00.072) 0:00:11.149 ******* 2026-02-08 05:00:17.727086 | orchestrator | 2026-02-08 05:00:17.727093 | orchestrator | TASK [Print report file information] ******************************************* 2026-02-08 05:00:17.727100 | orchestrator | Sunday 08 February 2026 05:00:10 +0000 (0:00:00.082) 0:00:11.231 ******* 2026-02-08 05:00:17.727108 | orchestrator | skipping: [testbed-node-3] 2026-02-08 05:00:17.727115 | orchestrator | 2026-02-08 05:00:17.727122 | orchestrator | TASK [Fail early due to containers not running] ******************************** 2026-02-08 05:00:17.727130 | orchestrator | Sunday 08 February 2026 05:00:11 +0000 (0:00:00.286) 0:00:11.517 ******* 2026-02-08 05:00:17.727137 | orchestrator | skipping: [testbed-node-3] 2026-02-08 05:00:17.727144 | orchestrator | 2026-02-08 05:00:17.727152 | orchestrator | TASK [Prepare test data] ******************************************************* 2026-02-08 05:00:17.727159 | orchestrator | Sunday 08 February 2026 05:00:11 +0000 (0:00:00.252) 0:00:11.770 ******* 2026-02-08 05:00:17.727167 | 
orchestrator | ok: [testbed-node-3] 2026-02-08 05:00:17.727174 | orchestrator | ok: [testbed-node-4] 2026-02-08 05:00:17.727181 | orchestrator | ok: [testbed-node-5] 2026-02-08 05:00:17.727190 | orchestrator | 2026-02-08 05:00:17.727203 | orchestrator | TASK [Set _mon_hostname fact] ************************************************** 2026-02-08 05:00:17.727215 | orchestrator | Sunday 08 February 2026 05:00:11 +0000 (0:00:00.321) 0:00:12.092 ******* 2026-02-08 05:00:17.727227 | orchestrator | ok: [testbed-node-3] 2026-02-08 05:00:17.727239 | orchestrator | 2026-02-08 05:00:17.727250 | orchestrator | TASK [Get ceph osd tree] ******************************************************* 2026-02-08 05:00:17.727263 | orchestrator | Sunday 08 February 2026 05:00:12 +0000 (0:00:00.757) 0:00:12.849 ******* 2026-02-08 05:00:17.727275 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-02-08 05:00:17.727288 | orchestrator | 2026-02-08 05:00:17.727300 | orchestrator | TASK [Parse osd tree from JSON] ************************************************ 2026-02-08 05:00:17.727312 | orchestrator | Sunday 08 February 2026 05:00:14 +0000 (0:00:01.586) 0:00:14.435 ******* 2026-02-08 05:00:17.727325 | orchestrator | ok: [testbed-node-3] 2026-02-08 05:00:17.727338 | orchestrator | 2026-02-08 05:00:17.727351 | orchestrator | TASK [Get OSDs that are not up or in] ****************************************** 2026-02-08 05:00:17.727363 | orchestrator | Sunday 08 February 2026 05:00:14 +0000 (0:00:00.154) 0:00:14.590 ******* 2026-02-08 05:00:17.727370 | orchestrator | ok: [testbed-node-3] 2026-02-08 05:00:17.727378 | orchestrator | 2026-02-08 05:00:17.727385 | orchestrator | TASK [Fail test if OSDs are not up or in] ************************************** 2026-02-08 05:00:17.727393 | orchestrator | Sunday 08 February 2026 05:00:14 +0000 (0:00:00.364) 0:00:14.955 ******* 2026-02-08 05:00:17.727400 | orchestrator | skipping: [testbed-node-3] 2026-02-08 05:00:17.727407 | 
orchestrator | 2026-02-08 05:00:17.727415 | orchestrator | TASK [Pass test if OSDs are all up and in] ************************************* 2026-02-08 05:00:17.727422 | orchestrator | Sunday 08 February 2026 05:00:14 +0000 (0:00:00.135) 0:00:15.090 ******* 2026-02-08 05:00:17.727429 | orchestrator | ok: [testbed-node-3] 2026-02-08 05:00:17.727437 | orchestrator | 2026-02-08 05:00:17.727444 | orchestrator | TASK [Prepare test data] ******************************************************* 2026-02-08 05:00:17.727451 | orchestrator | Sunday 08 February 2026 05:00:14 +0000 (0:00:00.141) 0:00:15.232 ******* 2026-02-08 05:00:17.727459 | orchestrator | ok: [testbed-node-3] 2026-02-08 05:00:17.727466 | orchestrator | ok: [testbed-node-4] 2026-02-08 05:00:17.727474 | orchestrator | ok: [testbed-node-5] 2026-02-08 05:00:17.727490 | orchestrator | 2026-02-08 05:00:17.727497 | orchestrator | TASK [List ceph LVM volumes and collect data] ********************************** 2026-02-08 05:00:17.727505 | orchestrator | Sunday 08 February 2026 05:00:15 +0000 (0:00:00.319) 0:00:15.552 ******* 2026-02-08 05:00:17.727512 | orchestrator | changed: [testbed-node-3] 2026-02-08 05:00:17.727520 | orchestrator | changed: [testbed-node-4] 2026-02-08 05:00:17.727527 | orchestrator | changed: [testbed-node-5] 2026-02-08 05:00:28.747587 | orchestrator | 2026-02-08 05:00:28.747703 | orchestrator | TASK [Parse LVM data as JSON] ************************************************** 2026-02-08 05:00:28.747716 | orchestrator | Sunday 08 February 2026 05:00:17 +0000 (0:00:02.479) 0:00:18.032 ******* 2026-02-08 05:00:28.747723 | orchestrator | ok: [testbed-node-3] 2026-02-08 05:00:28.747731 | orchestrator | ok: [testbed-node-4] 2026-02-08 05:00:28.747738 | orchestrator | ok: [testbed-node-5] 2026-02-08 05:00:28.747745 | orchestrator | 2026-02-08 05:00:28.747751 | orchestrator | TASK [Get unencrypted and encrypted OSDs] ************************************** 2026-02-08 05:00:28.747758 | orchestrator | Sunday 
08 February 2026 05:00:18 +0000 (0:00:00.367) 0:00:18.399 ******* 2026-02-08 05:00:28.747765 | orchestrator | ok: [testbed-node-3] 2026-02-08 05:00:28.747771 | orchestrator | ok: [testbed-node-4] 2026-02-08 05:00:28.747777 | orchestrator | ok: [testbed-node-5] 2026-02-08 05:00:28.747783 | orchestrator | 2026-02-08 05:00:28.747789 | orchestrator | TASK [Fail if count of encrypted OSDs does not match] ************************** 2026-02-08 05:00:28.747795 | orchestrator | Sunday 08 February 2026 05:00:18 +0000 (0:00:00.568) 0:00:18.967 ******* 2026-02-08 05:00:28.747802 | orchestrator | skipping: [testbed-node-3] 2026-02-08 05:00:28.747809 | orchestrator | skipping: [testbed-node-4] 2026-02-08 05:00:28.747815 | orchestrator | skipping: [testbed-node-5] 2026-02-08 05:00:28.747821 | orchestrator | 2026-02-08 05:00:28.747828 | orchestrator | TASK [Pass if count of encrypted OSDs equals count of OSDs] ******************** 2026-02-08 05:00:28.747833 | orchestrator | Sunday 08 February 2026 05:00:18 +0000 (0:00:00.348) 0:00:19.316 ******* 2026-02-08 05:00:28.747840 | orchestrator | ok: [testbed-node-3] 2026-02-08 05:00:28.747846 | orchestrator | ok: [testbed-node-4] 2026-02-08 05:00:28.747852 | orchestrator | ok: [testbed-node-5] 2026-02-08 05:00:28.747858 | orchestrator | 2026-02-08 05:00:28.747864 | orchestrator | TASK [Fail if count of unencrypted OSDs does not match] ************************ 2026-02-08 05:00:28.747873 | orchestrator | Sunday 08 February 2026 05:00:19 +0000 (0:00:00.607) 0:00:19.923 ******* 2026-02-08 05:00:28.747880 | orchestrator | skipping: [testbed-node-3] 2026-02-08 05:00:28.747886 | orchestrator | skipping: [testbed-node-4] 2026-02-08 05:00:28.747892 | orchestrator | skipping: [testbed-node-5] 2026-02-08 05:00:28.747898 | orchestrator | 2026-02-08 05:00:28.747906 | orchestrator | TASK [Pass if count of unencrypted OSDs equals count of OSDs] ****************** 2026-02-08 05:00:28.747912 | orchestrator | Sunday 08 February 2026 05:00:19 +0000 
(0:00:00.314) 0:00:20.238 ******* 2026-02-08 05:00:28.747918 | orchestrator | skipping: [testbed-node-3] 2026-02-08 05:00:28.747924 | orchestrator | skipping: [testbed-node-4] 2026-02-08 05:00:28.747931 | orchestrator | skipping: [testbed-node-5] 2026-02-08 05:00:28.747936 | orchestrator | 2026-02-08 05:00:28.747943 | orchestrator | TASK [Prepare test data] ******************************************************* 2026-02-08 05:00:28.747949 | orchestrator | Sunday 08 February 2026 05:00:20 +0000 (0:00:00.341) 0:00:20.580 ******* 2026-02-08 05:00:28.747955 | orchestrator | ok: [testbed-node-3] 2026-02-08 05:00:28.747961 | orchestrator | ok: [testbed-node-4] 2026-02-08 05:00:28.747967 | orchestrator | ok: [testbed-node-5] 2026-02-08 05:00:28.747973 | orchestrator | 2026-02-08 05:00:28.747979 | orchestrator | TASK [Get CRUSH node data of each OSD host and root node childs] *************** 2026-02-08 05:00:28.747986 | orchestrator | Sunday 08 February 2026 05:00:20 +0000 (0:00:00.557) 0:00:21.138 ******* 2026-02-08 05:00:28.747992 | orchestrator | ok: [testbed-node-3] 2026-02-08 05:00:28.747998 | orchestrator | ok: [testbed-node-4] 2026-02-08 05:00:28.748004 | orchestrator | ok: [testbed-node-5] 2026-02-08 05:00:28.748010 | orchestrator | 2026-02-08 05:00:28.748016 | orchestrator | TASK [Calculate sub test expression results] *********************************** 2026-02-08 05:00:28.748040 | orchestrator | Sunday 08 February 2026 05:00:21 +0000 (0:00:00.829) 0:00:21.967 ******* 2026-02-08 05:00:28.748046 | orchestrator | ok: [testbed-node-3] 2026-02-08 05:00:28.748053 | orchestrator | ok: [testbed-node-4] 2026-02-08 05:00:28.748059 | orchestrator | ok: [testbed-node-5] 2026-02-08 05:00:28.748064 | orchestrator | 2026-02-08 05:00:28.748070 | orchestrator | TASK [Fail test if any sub test failed] **************************************** 2026-02-08 05:00:28.748077 | orchestrator | Sunday 08 February 2026 05:00:21 +0000 (0:00:00.344) 0:00:22.312 ******* 2026-02-08 
05:00:28.748083 | orchestrator | skipping: [testbed-node-3]
2026-02-08 05:00:28.748089 | orchestrator | skipping: [testbed-node-4]
2026-02-08 05:00:28.748095 | orchestrator | skipping: [testbed-node-5]
2026-02-08 05:00:28.748102 | orchestrator |
2026-02-08 05:00:28.748108 | orchestrator | TASK [Pass test if no sub test failed] *****************************************
2026-02-08 05:00:28.748114 | orchestrator | Sunday 08 February 2026 05:00:22 +0000 (0:00:00.315) 0:00:22.628 *******
2026-02-08 05:00:28.748120 | orchestrator | ok: [testbed-node-3]
2026-02-08 05:00:28.748126 | orchestrator | ok: [testbed-node-4]
2026-02-08 05:00:28.748132 | orchestrator | ok: [testbed-node-5]
2026-02-08 05:00:28.748138 | orchestrator |
2026-02-08 05:00:28.748145 | orchestrator | TASK [Set validation result to passed if no test failed] ***********************
2026-02-08 05:00:28.748151 | orchestrator | Sunday 08 February 2026 05:00:22 +0000 (0:00:00.558) 0:00:23.186 *******
2026-02-08 05:00:28.748159 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-02-08 05:00:28.748167 | orchestrator |
2026-02-08 05:00:28.748174 | orchestrator | TASK [Set validation result to failed if a test failed] ************************
2026-02-08 05:00:28.748182 | orchestrator | Sunday 08 February 2026 05:00:23 +0000 (0:00:00.277) 0:00:23.464 *******
2026-02-08 05:00:28.748190 | orchestrator | skipping: [testbed-node-3]
2026-02-08 05:00:28.748197 | orchestrator |
2026-02-08 05:00:28.748205 | orchestrator | TASK [Aggregate test results step one] *****************************************
2026-02-08 05:00:28.748213 | orchestrator | Sunday 08 February 2026 05:00:23 +0000 (0:00:00.266) 0:00:23.730 *******
2026-02-08 05:00:28.748220 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-02-08 05:00:28.748228 | orchestrator |
2026-02-08 05:00:28.748236 | orchestrator | TASK [Aggregate test results step two] *****************************************
2026-02-08 05:00:28.748242 | orchestrator | Sunday 08 February 2026 05:00:25 +0000 (0:00:01.842) 0:00:25.573 *******
2026-02-08 05:00:28.748248 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-02-08 05:00:28.748255 | orchestrator |
2026-02-08 05:00:28.748261 | orchestrator | TASK [Aggregate test results step three] ***************************************
2026-02-08 05:00:28.748267 | orchestrator | Sunday 08 February 2026 05:00:25 +0000 (0:00:00.312) 0:00:25.885 *******
2026-02-08 05:00:28.748273 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-02-08 05:00:28.748280 | orchestrator |
2026-02-08 05:00:28.748299 | orchestrator | TASK [Flush handlers] **********************************************************
2026-02-08 05:00:28.748306 | orchestrator | Sunday 08 February 2026 05:00:25 +0000 (0:00:00.070) 0:00:26.148 *******
2026-02-08 05:00:28.748312 | orchestrator |
2026-02-08 05:00:28.748318 | orchestrator | TASK [Flush handlers] **********************************************************
2026-02-08 05:00:28.748325 | orchestrator | Sunday 08 February 2026 05:00:25 +0000 (0:00:00.071) 0:00:26.219 *******
2026-02-08 05:00:28.748331 | orchestrator |
2026-02-08 05:00:28.748337 | orchestrator | TASK [Flush handlers] **********************************************************
2026-02-08 05:00:28.748344 | orchestrator | Sunday 08 February 2026 05:00:25 +0000 (0:00:00.071) 0:00:26.291 *******
2026-02-08 05:00:28.748350 | orchestrator |
2026-02-08 05:00:28.748356 | orchestrator | RUNNING HANDLER [Write report file] ********************************************
2026-02-08 05:00:28.748362 | orchestrator | Sunday 08 February 2026 05:00:26 +0000 (0:00:00.075) 0:00:26.366 *******
2026-02-08 05:00:28.748368 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-02-08 05:00:28.748374 | orchestrator |
2026-02-08 05:00:28.748381 | orchestrator | TASK [Print report file information] *******************************************
2026-02-08 05:00:28.748391 | orchestrator | Sunday 08 February 2026 05:00:27 +0000 (0:00:01.642) 0:00:28.009 *******
2026-02-08 05:00:28.748398 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => {
2026-02-08 05:00:28.748404 | orchestrator |     "msg": [
2026-02-08 05:00:28.748410 | orchestrator |         "Validator run completed.",
2026-02-08 05:00:28.748417 | orchestrator |         "You can find the report file here:",
2026-02-08 05:00:28.748423 | orchestrator |         "/opt/reports/validator/ceph-osds-validator-2026-02-08T05:00:01+00:00-report.json",
2026-02-08 05:00:28.748433 | orchestrator |         "on the following host:",
2026-02-08 05:00:28.748439 | orchestrator |         "testbed-manager"
2026-02-08 05:00:28.748445 | orchestrator |     ]
2026-02-08 05:00:28.748452 | orchestrator | }
2026-02-08 05:00:28.748458 | orchestrator |
2026-02-08 05:00:28.748464 | orchestrator | PLAY RECAP *********************************************************************
2026-02-08 05:00:28.748471 | orchestrator | testbed-node-3 : ok=35  changed=4  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0
2026-02-08 05:00:28.748479 | orchestrator | testbed-node-4 : ok=18  changed=1  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0
2026-02-08 05:00:28.748485 | orchestrator | testbed-node-5 : ok=18  changed=1  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0
2026-02-08 05:00:28.748491 | orchestrator |
2026-02-08 05:00:28.748497 | orchestrator |
2026-02-08 05:00:28.748503 | orchestrator | TASKS RECAP ********************************************************************
2026-02-08 05:00:28.748510 | orchestrator | Sunday 08 February 2026 05:00:28 +0000 (0:00:00.648) 0:00:28.658 *******
2026-02-08 05:00:28.748516 | orchestrator | ===============================================================================
2026-02-08 05:00:28.748522 | orchestrator | List ceph LVM volumes and collect data ---------------------------------- 2.48s
2026-02-08 05:00:28.748528 | orchestrator | Aggregate test results step one ----------------------------------------- 1.84s
2026-02-08 05:00:28.748534 | orchestrator | Write report file ------------------------------------------------------- 1.64s
2026-02-08 05:00:28.748541 | orchestrator | Get ceph osd tree ------------------------------------------------------- 1.59s
2026-02-08 05:00:28.748547 | orchestrator | Get timestamp for report file ------------------------------------------- 1.01s
2026-02-08 05:00:28.748553 | orchestrator | Calculate total number of OSDs in cluster ------------------------------- 0.93s
2026-02-08 05:00:28.748559 | orchestrator | Create report output directory ------------------------------------------ 0.84s
2026-02-08 05:00:28.748565 | orchestrator | Get CRUSH node data of each OSD host and root node childs --------------- 0.83s
2026-02-08 05:00:28.748572 | orchestrator | Aggregate test results step one ----------------------------------------- 0.80s
2026-02-08 05:00:28.748578 | orchestrator | Set _mon_hostname fact -------------------------------------------------- 0.76s
2026-02-08 05:00:28.748584 | orchestrator | Print report file information ------------------------------------------- 0.65s
2026-02-08 05:00:28.748590 | orchestrator | Get list of ceph-osd containers on host --------------------------------- 0.62s
2026-02-08 05:00:28.748596 | orchestrator | Pass if count of encrypted OSDs equals count of OSDs -------------------- 0.61s
2026-02-08 05:00:28.748602 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.60s
2026-02-08 05:00:28.748608 | orchestrator | Get count of ceph-osd containers that are not running ------------------- 0.59s
2026-02-08 05:00:28.748615 | orchestrator | Get unencrypted and encrypted OSDs -------------------------------------- 0.57s
2026-02-08 05:00:28.748621 | orchestrator | Set test result to failed when count of containers is wrong ------------- 0.56s
2026-02-08 05:00:28.748627 | orchestrator | Pass test if no sub test failed ----------------------------------------- 0.56s
2026-02-08 05:00:28.748634 | orchestrator | Prepare test data ------------------------------------------------------- 0.56s
2026-02-08 05:00:28.748640 | orchestrator | Prepare test data ------------------------------------------------------- 0.45s
2026-02-08 05:00:29.170619 | orchestrator | + sh -c /opt/configuration/scripts/check/200-infrastructure.sh
2026-02-08 05:00:29.178474 | orchestrator | + set -e
2026-02-08 05:00:29.178556 | orchestrator | + source /opt/manager-vars.sh
2026-02-08 05:00:29.178569 | orchestrator | ++ export NUMBER_OF_NODES=6
2026-02-08 05:00:29.178578 | orchestrator | ++ NUMBER_OF_NODES=6
2026-02-08 05:00:29.178586 | orchestrator | ++ export CEPH_VERSION=reef
2026-02-08 05:00:29.178595 | orchestrator | ++ CEPH_VERSION=reef
2026-02-08 05:00:29.178603 | orchestrator | ++ export CONFIGURATION_VERSION=main
2026-02-08 05:00:29.178613 | orchestrator | ++ CONFIGURATION_VERSION=main
2026-02-08 05:00:29.178622 | orchestrator | ++ export MANAGER_VERSION=9.5.0
2026-02-08 05:00:29.178631 | orchestrator | ++ MANAGER_VERSION=9.5.0
2026-02-08 05:00:29.178639 | orchestrator | ++ export OPENSTACK_VERSION=2024.2
2026-02-08 05:00:29.178648 | orchestrator | ++ OPENSTACK_VERSION=2024.2
2026-02-08 05:00:29.178656 | orchestrator | ++ export ARA=false
2026-02-08 05:00:29.178710 | orchestrator | ++ ARA=false
2026-02-08 05:00:29.178720 | orchestrator | ++ export DEPLOY_MODE=manager
2026-02-08 05:00:29.178729 | orchestrator | ++ DEPLOY_MODE=manager
2026-02-08 05:00:29.178736 | orchestrator | ++ export TEMPEST=false
2026-02-08 05:00:29.178745 | orchestrator | ++ TEMPEST=false
2026-02-08 05:00:29.178754 | orchestrator | ++ export IS_ZUUL=true
2026-02-08 05:00:29.178762 | orchestrator | ++ IS_ZUUL=true
2026-02-08 05:00:29.178770 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.37
2026-02-08 05:00:29.178779 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.37
2026-02-08 05:00:29.178788 | orchestrator | ++ export EXTERNAL_API=false
2026-02-08 05:00:29.178796 | orchestrator | ++ EXTERNAL_API=false
2026-02-08 05:00:29.178805 | orchestrator | ++ export IMAGE_USER=ubuntu
2026-02-08 05:00:29.178813 | orchestrator | ++ IMAGE_USER=ubuntu
2026-02-08 05:00:29.178822 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu
2026-02-08 05:00:29.178830 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2026-02-08 05:00:29.178838 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2026-02-08 05:00:29.178846 | orchestrator | ++ CEPH_STACK=ceph-ansible
2026-02-08 05:00:29.178854 | orchestrator | + [[ -e /etc/redhat-release ]]
2026-02-08 05:00:29.178863 | orchestrator | + source /etc/os-release
2026-02-08 05:00:29.178872 | orchestrator | ++ PRETTY_NAME='Ubuntu 24.04.3 LTS'
2026-02-08 05:00:29.178881 | orchestrator | ++ NAME=Ubuntu
2026-02-08 05:00:29.178891 | orchestrator | ++ VERSION_ID=24.04
2026-02-08 05:00:29.178899 | orchestrator | ++ VERSION='24.04.3 LTS (Noble Numbat)'
2026-02-08 05:00:29.178905 | orchestrator | ++ VERSION_CODENAME=noble
2026-02-08 05:00:29.178910 | orchestrator | ++ ID=ubuntu
2026-02-08 05:00:29.178916 | orchestrator | ++ ID_LIKE=debian
2026-02-08 05:00:29.178921 | orchestrator | ++ HOME_URL=https://www.ubuntu.com/
2026-02-08 05:00:29.178926 | orchestrator | ++ SUPPORT_URL=https://help.ubuntu.com/
2026-02-08 05:00:29.178932 | orchestrator | ++ BUG_REPORT_URL=https://bugs.launchpad.net/ubuntu/
2026-02-08 05:00:29.178938 | orchestrator | ++ PRIVACY_POLICY_URL=https://www.ubuntu.com/legal/terms-and-policies/privacy-policy
2026-02-08 05:00:29.178945 | orchestrator | ++ UBUNTU_CODENAME=noble
2026-02-08 05:00:29.178950 | orchestrator | ++ LOGO=ubuntu-logo
2026-02-08 05:00:29.178958 | orchestrator | + [[ ubuntu == \u\b\u\n\t\u ]]
2026-02-08 05:00:29.178968 | orchestrator | + packages='libmonitoring-plugin-perl libwww-perl libjson-perl monitoring-plugins-basic mysql-client'
2026-02-08 05:00:29.178978 | orchestrator | + dpkg -s libmonitoring-plugin-perl libwww-perl libjson-perl monitoring-plugins-basic mysql-client
2026-02-08 05:00:29.207751 | orchestrator | + sudo apt-get install -y libmonitoring-plugin-perl libwww-perl libjson-perl monitoring-plugins-basic mysql-client
2026-02-08 05:00:52.642204 | orchestrator |
2026-02-08 05:00:52.642321 | orchestrator | # Status of Elasticsearch
2026-02-08 05:00:52.642341 | orchestrator |
2026-02-08 05:00:52.642356 | orchestrator | + pushd /opt/configuration/contrib
2026-02-08 05:00:52.642372 | orchestrator | + echo
2026-02-08 05:00:52.642387 | orchestrator | + echo '# Status of Elasticsearch'
2026-02-08 05:00:52.642401 | orchestrator | + echo
2026-02-08 05:00:52.642415 | orchestrator | + bash nagios-plugins/check_elasticsearch -H api-int.testbed.osism.xyz -s
2026-02-08 05:00:52.812451 | orchestrator | OK - elasticsearch (kolla_logging) is running. status: green; timed_out: false; number_of_nodes: 3; number_of_data_nodes: 3; active_primary_shards: 9; active_shards: 22; relocating_shards: 0; initializing_shards: 0; delayed_unassigned_shards: 0; unassigned_shards: 0 | 'active_primary'=9 'active'=22 'relocating'=0 'init'=0 'delay_unass'=0 'unass'=0
2026-02-08 05:00:52.812532 | orchestrator |
2026-02-08 05:00:52.812542 | orchestrator | # Status of MariaDB
2026-02-08 05:00:52.812550 | orchestrator |
2026-02-08 05:00:52.812557 | orchestrator | + echo
2026-02-08 05:00:52.812590 | orchestrator | + echo '# Status of MariaDB'
2026-02-08 05:00:52.812597 | orchestrator | + echo
2026-02-08 05:00:52.812945 | orchestrator | ++ semver 9.5.0 10.0.0-0
2026-02-08 05:00:52.860686 | orchestrator | + [[ -1 -ge 0 ]]
2026-02-08 05:00:52.860763 | orchestrator | + [[ 9.5.0 == \l\a\t\e\s\t ]]
2026-02-08 05:00:52.860773 | orchestrator | + MARIADB_USER=root_shard_0
2026-02-08 05:00:52.860781 | orchestrator | + bash nagios-plugins/check_galera_cluster -u root_shard_0 -p password -H api-int.testbed.osism.xyz -c 1
2026-02-08 05:00:52.927489 | orchestrator | Reading package lists...
2026-02-08 05:00:53.302533 | orchestrator | Building dependency tree...
2026-02-08 05:00:53.303196 | orchestrator | Reading state information...
2026-02-08 05:00:53.779376 | orchestrator | bc is already the newest version (1.07.1-3ubuntu4).
2026-02-08 05:00:53.779477 | orchestrator | bc set to manually installed.
2026-02-08 05:00:53.779491 | orchestrator | 0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
2026-02-08 05:00:54.537707 | orchestrator | OK: number of NODES = 3 (wsrep_cluster_size)
2026-02-08 05:00:54.539195 | orchestrator |
2026-02-08 05:00:54.539240 | orchestrator | # Status of Prometheus
2026-02-08 05:00:54.539247 | orchestrator |
2026-02-08 05:00:54.539253 | orchestrator | + echo
2026-02-08 05:00:54.539258 | orchestrator | + echo '# Status of Prometheus'
2026-02-08 05:00:54.539263 | orchestrator | + echo
2026-02-08 05:00:54.539268 | orchestrator | + curl -s https://api-int.testbed.osism.xyz:9091/-/healthy
2026-02-08 05:00:54.622545 | orchestrator | Unauthorized
2026-02-08 05:00:54.628116 | orchestrator | + curl -s https://api-int.testbed.osism.xyz:9091/-/ready
2026-02-08 05:00:54.697078 | orchestrator | Unauthorized
2026-02-08 05:00:54.700513 | orchestrator |
2026-02-08 05:00:54.700587 | orchestrator | # Status of RabbitMQ
2026-02-08 05:00:54.700599 | orchestrator |
2026-02-08 05:00:54.700612 | orchestrator | + echo
2026-02-08 05:00:54.700628 | orchestrator | + echo '# Status of RabbitMQ'
2026-02-08 05:00:54.700755 | orchestrator | + echo
2026-02-08 05:00:54.701594 | orchestrator | ++ semver 9.5.0 10.0.0-0
2026-02-08 05:00:54.763003 | orchestrator | + [[ -1 -ge 0 ]]
2026-02-08 05:00:54.763113 | orchestrator | + [[ 9.5.0 == \l\a\t\e\s\t ]]
2026-02-08 05:00:54.763131 | orchestrator | + perl nagios-plugins/check_rabbitmq_cluster --ssl 1 -H api-int.testbed.osism.xyz -u openstack -p password
2026-02-08 05:00:55.294729 | orchestrator | RABBITMQ_CLUSTER OK - nb_running_node OK (3) nb_running_disc_node OK (3) nb_running_ram_node OK (0)
2026-02-08 05:00:55.304017 | orchestrator |
2026-02-08 05:00:55.304108 | orchestrator | # Status of Redis
2026-02-08 05:00:55.304121 | orchestrator |
2026-02-08 05:00:55.304131 | orchestrator | + echo
2026-02-08 05:00:55.304141 | orchestrator | + echo '# Status of Redis'
2026-02-08 05:00:55.304150 | orchestrator | + echo
2026-02-08 05:00:55.304161 | orchestrator | + /usr/lib/nagios/plugins/check_tcp -H 192.168.16.10 -p 6379 -A -E -s 'AUTH QHNA1SZRlOKzLADhUd5ZDgpHfQe6dNfr3bwEdY24\r\nPING\r\nINFO replication\r\nQUIT\r\n' -e PONG -e role:master -e slave0:ip=192.168.16.1 -e,port=6379 -j
2026-02-08 05:00:55.310819 | orchestrator | TCP OK - 0.002 second response time on 192.168.16.10 port 6379|time=0.001818s;;;0.000000;10.000000
2026-02-08 05:00:55.310908 | orchestrator |
2026-02-08 05:00:55.310919 | orchestrator | # Create backup of MariaDB database
2026-02-08 05:00:55.310929 | orchestrator |
2026-02-08 05:00:55.310938 | orchestrator | + popd
2026-02-08 05:00:55.310952 | orchestrator | + echo
2026-02-08 05:00:55.310970 | orchestrator | + echo '# Create backup of MariaDB database'
2026-02-08 05:00:55.310988 | orchestrator | + echo
2026-02-08 05:00:55.311002 | orchestrator | + osism apply mariadb_backup -e mariadb_backup_type=full
2026-02-08 05:00:57.476106 | orchestrator | 2026-02-08 05:00:57 | INFO  | Task 284c4f74-d690-44bc-95d4-a3a5846af145 (mariadb_backup) was prepared for execution.
2026-02-08 05:00:57.476196 | orchestrator | 2026-02-08 05:00:57 | INFO  | It takes a moment until task 284c4f74-d690-44bc-95d4-a3a5846af145 (mariadb_backup) has been started and output is visible here.
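The MariaDB check above gates its login user on a version comparison: `semver 9.5.0 10.0.0-0` prints `-1`, both the `-ge 0` test and the `latest` test fail, and the script falls through to `MARIADB_USER=root_shard_0`. A minimal sketch of that gate, assuming the `semver` helper prints `-1`/`0`/`1` for less/equal/greater (as its `-1` output suggests); the comparison is approximated here with `sort -V`, which does not implement semver pre-release precedence but orders the release versions seen in this trace correctly. The `semver_cmp` name and the `root` default are illustrative assumptions, not part of the testbed scripts:

```shell
#!/usr/bin/env bash
# Approximate version compare: prints -1, 0, or 1 for $1 vs $2.
semver_cmp() {
  if [ "$1" = "$2" ]; then
    echo "0"
    return
  fi
  # sort -V puts the smaller version first (release versions only).
  if [ "$(printf '%s\n' "$1" "$2" | sort -V | head -n1)" = "$1" ]; then
    echo "-1"
  else
    echo "1"
  fi
}

MANAGER_VERSION=9.5.0
MARIADB_USER=root  # assumed default on 10.x / latest releases

# Before 10.0.0 the Galera check logs in as the shard-0 root user,
# mirroring the trace: semver < 0 and version != latest.
if [ "$(semver_cmp "$MANAGER_VERSION" 10.0.0-0)" -lt 0 ] \
   && [ "$MANAGER_VERSION" != latest ]; then
  MARIADB_USER=root_shard_0
fi

echo "$MARIADB_USER"  # → root_shard_0
```

The same gate reappears verbatim before the RabbitMQ check, so factoring it into one helper like this would remove the duplication.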
2026-02-08 05:02:27.280778 | orchestrator |
2026-02-08 05:02:27.280897 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-02-08 05:02:27.280915 | orchestrator |
2026-02-08 05:02:27.280927 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-02-08 05:02:27.280940 | orchestrator | Sunday 08 February 2026 05:01:02 +0000 (0:00:00.187) 0:00:00.187 *******
2026-02-08 05:02:27.280951 | orchestrator | ok: [testbed-node-0]
2026-02-08 05:02:27.280963 | orchestrator | ok: [testbed-node-1]
2026-02-08 05:02:27.280975 | orchestrator | ok: [testbed-node-2]
2026-02-08 05:02:27.280986 | orchestrator |
2026-02-08 05:02:27.281025 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-02-08 05:02:27.281037 | orchestrator | Sunday 08 February 2026 05:01:02 +0000 (0:00:00.346) 0:00:00.533 *******
2026-02-08 05:02:27.281048 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True)
2026-02-08 05:02:27.281061 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True)
2026-02-08 05:02:27.281072 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True)
2026-02-08 05:02:27.281083 | orchestrator |
2026-02-08 05:02:27.281094 | orchestrator | PLAY [Apply role mariadb] ******************************************************
2026-02-08 05:02:27.281105 | orchestrator |
2026-02-08 05:02:27.281117 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] ***************************
2026-02-08 05:02:27.281128 | orchestrator | Sunday 08 February 2026 05:01:03 +0000 (0:00:00.634) 0:00:01.168 *******
2026-02-08 05:02:27.281139 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-02-08 05:02:27.281151 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1)
2026-02-08 05:02:27.281162 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2)
2026-02-08 05:02:27.281173 | orchestrator |
2026-02-08 05:02:27.281184 | orchestrator | TASK [mariadb : include_tasks] *************************************************
2026-02-08 05:02:27.281195 | orchestrator | Sunday 08 February 2026 05:01:03 +0000 (0:00:00.426) 0:00:01.595 *******
2026-02-08 05:02:27.281206 | orchestrator | included: /ansible/roles/mariadb/tasks/backup.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-08 05:02:27.281220 | orchestrator |
2026-02-08 05:02:27.281234 | orchestrator | TASK [mariadb : Get MariaDB container facts] ***********************************
2026-02-08 05:02:27.281261 | orchestrator | Sunday 08 February 2026 05:01:04 +0000 (0:00:00.565) 0:00:02.160 *******
2026-02-08 05:02:27.281275 | orchestrator | ok: [testbed-node-1]
2026-02-08 05:02:27.281288 | orchestrator | ok: [testbed-node-0]
2026-02-08 05:02:27.281301 | orchestrator | ok: [testbed-node-2]
2026-02-08 05:02:27.281314 | orchestrator |
2026-02-08 05:02:27.281327 | orchestrator | TASK [mariadb : Taking full database backup via Mariabackup] *******************
2026-02-08 05:02:27.281340 | orchestrator | Sunday 08 February 2026 05:01:07 +0000 (0:00:03.490) 0:00:05.651 *******
2026-02-08 05:02:27.281354 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_restart
2026-02-08 05:02:27.281368 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_start
2026-02-08 05:02:27.281382 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring:
2026-02-08 05:02:27.281396 | orchestrator | mariadb_bootstrap_restart
2026-02-08 05:02:27.281410 | orchestrator | skipping: [testbed-node-1]
2026-02-08 05:02:27.281423 | orchestrator | skipping: [testbed-node-2]
2026-02-08 05:02:27.281436 | orchestrator | changed: [testbed-node-0]
2026-02-08 05:02:27.281449 | orchestrator |
2026-02-08 05:02:27.281462 | orchestrator | PLAY [Restart mariadb services] ************************************************
2026-02-08 05:02:27.281475 | orchestrator | skipping: no hosts matched
2026-02-08 05:02:27.281487 | orchestrator |
2026-02-08 05:02:27.281500 | orchestrator | PLAY [Start mariadb services] **************************************************
2026-02-08 05:02:27.281512 | orchestrator | skipping: no hosts matched
2026-02-08 05:02:27.281531 | orchestrator |
2026-02-08 05:02:27.281550 | orchestrator | PLAY [Restart bootstrap mariadb service] ***************************************
2026-02-08 05:02:27.281569 | orchestrator | skipping: no hosts matched
2026-02-08 05:02:27.281710 | orchestrator |
2026-02-08 05:02:27.281730 | orchestrator | PLAY [Apply mariadb post-configuration] ****************************************
2026-02-08 05:02:27.281747 | orchestrator |
2026-02-08 05:02:27.281765 | orchestrator | TASK [Include mariadb post-deploy.yml] *****************************************
2026-02-08 05:02:27.281782 | orchestrator | Sunday 08 February 2026 05:02:26 +0000 (0:01:18.487) 0:01:24.138 *******
2026-02-08 05:02:27.281799 | orchestrator | skipping: [testbed-node-0]
2026-02-08 05:02:27.281817 | orchestrator | skipping: [testbed-node-1]
2026-02-08 05:02:27.281827 | orchestrator | skipping: [testbed-node-2]
2026-02-08 05:02:27.281837 | orchestrator |
2026-02-08 05:02:27.281847 | orchestrator | TASK [Include mariadb post-upgrade.yml] ****************************************
2026-02-08 05:02:27.281869 | orchestrator | Sunday 08 February 2026 05:02:26 +0000 (0:00:00.400) 0:01:24.539 *******
2026-02-08 05:02:27.281879 | orchestrator | skipping: [testbed-node-0]
2026-02-08 05:02:27.281889 | orchestrator | skipping: [testbed-node-1]
2026-02-08 05:02:27.281899 | orchestrator | skipping: [testbed-node-2]
2026-02-08 05:02:27.281908 | orchestrator |
2026-02-08 05:02:27.281979 | orchestrator | PLAY RECAP *********************************************************************
2026-02-08 05:02:27.281991 | orchestrator | testbed-node-0 : ok=6  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-08 05:02:27.282002 | orchestrator | testbed-node-1 : ok=4  changed=0 unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2026-02-08 05:02:27.282079 | orchestrator | testbed-node-2 : ok=4  changed=0 unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2026-02-08 05:02:27.282105 | orchestrator |
2026-02-08 05:02:27.282122 | orchestrator |
2026-02-08 05:02:27.282137 | orchestrator | TASKS RECAP ********************************************************************
2026-02-08 05:02:27.282153 | orchestrator | Sunday 08 February 2026 05:02:26 +0000 (0:00:00.419) 0:01:24.958 *******
2026-02-08 05:02:27.282170 | orchestrator | ===============================================================================
2026-02-08 05:02:27.282187 | orchestrator | mariadb : Taking full database backup via Mariabackup ------------------ 78.49s
2026-02-08 05:02:27.282230 | orchestrator | mariadb : Get MariaDB container facts ----------------------------------- 3.49s
2026-02-08 05:02:27.282243 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.63s
2026-02-08 05:02:27.282253 | orchestrator | mariadb : include_tasks ------------------------------------------------- 0.57s
2026-02-08 05:02:27.282263 | orchestrator | mariadb : Group MariaDB hosts based on shards --------------------------- 0.43s
2026-02-08 05:02:27.282272 | orchestrator | Include mariadb post-upgrade.yml ---------------------------------------- 0.42s
2026-02-08 05:02:27.282282 | orchestrator | Include mariadb post-deploy.yml ----------------------------------------- 0.40s
2026-02-08 05:02:27.282292 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.35s
2026-02-08 05:02:27.661657 | orchestrator | + sh -c /opt/configuration/scripts/check/300-openstack.sh
2026-02-08 05:02:27.668911 | orchestrator | + set -e
2026-02-08 05:02:27.669002 | orchestrator | + source /opt/configuration/scripts/include.sh
2026-02-08 05:02:27.669476 | orchestrator | ++ export INTERACTIVE=false
2026-02-08 05:02:27.669494 | orchestrator | ++ INTERACTIVE=false
2026-02-08 05:02:27.669501 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2026-02-08 05:02:27.669507 | orchestrator | ++ OSISM_APPLY_RETRY=1
2026-02-08 05:02:27.669738 | orchestrator | + source /opt/configuration/scripts/manager-version.sh
2026-02-08 05:02:27.671772 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml
2026-02-08 05:02:27.679437 | orchestrator |
2026-02-08 05:02:27.679501 | orchestrator | # OpenStack endpoints
2026-02-08 05:02:27.679509 | orchestrator |
2026-02-08 05:02:27.679516 | orchestrator | ++ export MANAGER_VERSION=9.5.0
2026-02-08 05:02:27.679523 | orchestrator | ++ MANAGER_VERSION=9.5.0
2026-02-08 05:02:27.679529 | orchestrator | + export OS_CLOUD=admin
2026-02-08 05:02:27.679536 | orchestrator | + OS_CLOUD=admin
2026-02-08 05:02:27.679542 | orchestrator | + echo
2026-02-08 05:02:27.679549 | orchestrator | + echo '# OpenStack endpoints'
2026-02-08 05:02:27.679555 | orchestrator | + echo
2026-02-08 05:02:27.679562 | orchestrator | + openstack endpoint list
2026-02-08 05:02:30.987341 | orchestrator | +----------------------------------+-----------+--------------+-----------------+---------+-----------+---------------------------------------------------------------------+
2026-02-08 05:02:30.987463 | orchestrator | | ID | Region | Service Name | Service Type | Enabled | Interface | URL |
2026-02-08 05:02:30.987477 | orchestrator | +----------------------------------+-----------+--------------+-----------------+---------+-----------+---------------------------------------------------------------------+
2026-02-08 05:02:30.987523 | orchestrator | | 010f1a2511a346e694ceed0ce8deb540 | RegionOne | magnum | container-infra | True | internal | https://api-int.testbed.osism.xyz:9511/v1 |
2026-02-08 05:02:30.987548 | orchestrator | | 0ccc2cdc9bc84d7ab96334a451cf37d9 | RegionOne | placement | placement | True | public | https://api.testbed.osism.xyz:8780 |
2026-02-08 05:02:30.987558 | orchestrator | | 15b2f60695b4406e946b84669937d370 | RegionOne | manilav2 | sharev2 | True | public | https://api.testbed.osism.xyz:8786/v2 |
2026-02-08 05:02:30.987567 | orchestrator | | 230659ee19984bbe8b29b5c772185ffc | RegionOne | magnum | container-infra | True | public | https://api.testbed.osism.xyz:9511/v1 |
2026-02-08 05:02:30.987653 | orchestrator | | 2e8963aa6fb24db98d23f13b9945a83e | RegionOne | manilav2 | sharev2 | True | internal | https://api-int.testbed.osism.xyz:8786/v2 |
2026-02-08 05:02:30.987666 | orchestrator | | 36b5b0f5287f485baa345215dd2ad507 | RegionOne | manila | share | True | public | https://api.testbed.osism.xyz:8786/v1/%(tenant_id)s |
2026-02-08 05:02:30.987675 | orchestrator | | 52bb83c72ac24b72a9f3d7c8067986ad | RegionOne | skyline | panel | True | public | https://api.testbed.osism.xyz:9998 |
2026-02-08 05:02:30.987684 | orchestrator | | 53d070ced93542dbaeb6a986e58e8c97 | RegionOne | designate | dns | True | internal | https://api-int.testbed.osism.xyz:9001 |
2026-02-08 05:02:30.987693 | orchestrator | | 674551f36f7c492096b7257312255445 | RegionOne | barbican | key-manager | True | internal | https://api-int.testbed.osism.xyz:9311 |
2026-02-08 05:02:30.987702 | orchestrator | | 7989f441fafa41a1bf422a208bebd938 | RegionOne | swift | object-store | True | internal | https://api-int.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s |
2026-02-08 05:02:30.987711 | orchestrator | | 7a4d56c2eec44cffb5eb27ec4bf2ec12 | RegionOne | octavia | load-balancer | True | internal | https://api-int.testbed.osism.xyz:9876 |
2026-02-08 05:02:30.987720 | orchestrator | | 7b13836348e64ed78c5aa787a7f0c560 | RegionOne | keystone | identity | True | internal | https://api-int.testbed.osism.xyz:5000 |
2026-02-08 05:02:30.987729 | orchestrator | | 8125fe7021bf401ebda872e23b1514aa | RegionOne | nova | compute | True | public | https://api.testbed.osism.xyz:8774/v2.1 |
2026-02-08 05:02:30.987737 | orchestrator | | 81d0ef4363af4f3bb41c7cb6eff85ba6 | RegionOne | octavia | load-balancer | True | public | https://api.testbed.osism.xyz:9876 |
2026-02-08 05:02:30.987746 | orchestrator | | 8d4f244882664cbb94b0a41a1cf4211d | RegionOne | swift | object-store | True | public | https://api.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s |
2026-02-08 05:02:30.987772 | orchestrator | | 8ff10a8ec12e47a7a7524f917d2f5e8e | RegionOne | placement | placement | True | internal | https://api-int.testbed.osism.xyz:8780 |
2026-02-08 05:02:30.987781 | orchestrator | | 95c602f27b8f43d5ae1277f4a83e4216 | RegionOne | aodh | alarming | True | internal | https://api-int.testbed.osism.xyz:8042 |
2026-02-08 05:02:30.987789 | orchestrator | | a5db3089562443d6a5d1a60279b82532 | RegionOne | designate | dns | True | public | https://api.testbed.osism.xyz:9001 |
2026-02-08 05:02:30.987798 | orchestrator | | aa35d3ed47034b95b11a052d4a18f027 | RegionOne | skyline | panel | True | internal | https://api-int.testbed.osism.xyz:9998 |
2026-02-08 05:02:30.987807 | orchestrator | | b3a9beeafb2f4929af2e20898b22e76a | RegionOne | glance | image | True | internal | https://api-int.testbed.osism.xyz:9292 |
2026-02-08 05:02:30.987833 | orchestrator | | c76131910be146c896fe811cdcb5e7d8 | RegionOne | nova | compute | True | internal | https://api-int.testbed.osism.xyz:8774/v2.1 |
2026-02-08 05:02:30.987852 | orchestrator | | c9909595e539401cb19d4c7a2f310be4 | RegionOne | manila | share | True | internal | https://api-int.testbed.osism.xyz:8786/v1/%(tenant_id)s |
2026-02-08 05:02:30.987867 | orchestrator | | ca80f0239e5f487d944cbb7f524802ab | RegionOne | aodh | alarming | True | public | https://api.testbed.osism.xyz:8042 |
2026-02-08 05:02:30.987878 | orchestrator | | de3d0fe2803b4d6ca718614409b175a7 | RegionOne | keystone | identity | True | public | https://api.testbed.osism.xyz:5000 |
2026-02-08 05:02:30.987889 | orchestrator | | de533e62d8904ed8adc24468b0f1af8c | RegionOne | neutron | network | True | internal | https://api-int.testbed.osism.xyz:9696 |
2026-02-08 05:02:30.987899 | orchestrator | | e11eeb1e912044979fd3d9a331b0ccdb | RegionOne | barbican | key-manager | True | public | https://api.testbed.osism.xyz:9311 |
2026-02-08 05:02:30.987909 | orchestrator | | e8f34c24b3d14f6d8f7ee50ed8dfbe00 | RegionOne | cinderv3 | volumev3 | True | public | https://api.testbed.osism.xyz:8776/v3/%(tenant_id)s |
2026-02-08 05:02:30.987920 | orchestrator | | e93b12e1f60e46499f4dfacac01de5a7 | RegionOne | neutron | network | True | public | https://api.testbed.osism.xyz:9696 |
2026-02-08 05:02:30.987931 | orchestrator | | ed46ae26ed5f4769bbe2ee476daa3cfe | RegionOne | cinderv3 | volumev3 | True | internal | https://api-int.testbed.osism.xyz:8776/v3/%(tenant_id)s |
2026-02-08 05:02:30.987942 | orchestrator | | efe86f0ff0ea4424aa50f308aabeb8b4 | RegionOne | glance | image | True | public | https://api.testbed.osism.xyz:9292 |
2026-02-08 05:02:30.987952 | orchestrator | +----------------------------------+-----------+--------------+-----------------+---------+-----------+---------------------------------------------------------------------+
2026-02-08 05:02:31.290705 | orchestrator |
2026-02-08 05:02:31.290816 | orchestrator | # Cinder
2026-02-08 05:02:31.290833 | orchestrator |
2026-02-08 05:02:31.290844 | orchestrator | + echo
2026-02-08 05:02:31.290853 | orchestrator | + echo '# Cinder'
2026-02-08 05:02:31.290862 | orchestrator | + echo
2026-02-08 05:02:31.290871 | orchestrator | + openstack volume service list
2026-02-08 05:02:33.988172 | orchestrator | +------------------+----------------------------+----------+---------+-------+----------------------------+
2026-02-08 05:02:33.988279 | orchestrator | | Binary | Host | Zone | Status | State | Updated At |
2026-02-08 05:02:33.988294 | orchestrator | +------------------+----------------------------+----------+---------+-------+----------------------------+
2026-02-08 05:02:33.988307 | orchestrator | | cinder-scheduler | testbed-node-0 | internal | enabled | up | 2026-02-08T05:02:27.000000 |
2026-02-08 05:02:33.988318 | orchestrator | | cinder-scheduler | testbed-node-1 | internal | enabled | up | 2026-02-08T05:02:26.000000 |
2026-02-08 05:02:33.988329 | orchestrator | | cinder-scheduler | testbed-node-2 | internal | enabled | up | 2026-02-08T05:02:27.000000 |
2026-02-08 05:02:33.988341 | orchestrator | | cinder-volume | testbed-node-0@rbd-volumes | nova | enabled | up | 2026-02-08T05:02:26.000000 |
2026-02-08 05:02:33.988352 | orchestrator | | cinder-volume | testbed-node-1@rbd-volumes | nova | enabled | up | 2026-02-08T05:02:31.000000 |
2026-02-08 05:02:33.988363 | orchestrator | | cinder-volume | testbed-node-2@rbd-volumes | nova | enabled | up | 2026-02-08T05:02:31.000000 |
2026-02-08 05:02:33.988374 | orchestrator | | cinder-backup | testbed-node-0 | nova | enabled | up | 2026-02-08T05:02:25.000000 |
2026-02-08 05:02:33.988385 | orchestrator | | cinder-backup | testbed-node-1 | nova | enabled | up | 2026-02-08T05:02:26.000000 |
2026-02-08 05:02:33.988396 | orchestrator | | cinder-backup | testbed-node-2 | nova | enabled | up | 2026-02-08T05:02:26.000000 |
2026-02-08 05:02:33.988437 | orchestrator | +------------------+----------------------------+----------+---------+-------+----------------------------+
2026-02-08 05:02:34.282988 | orchestrator |
2026-02-08 05:02:34.283077 | orchestrator | # Neutron
2026-02-08 05:02:34.283089 | orchestrator |
2026-02-08 05:02:34.283098 | orchestrator | + echo
2026-02-08 05:02:34.283107 | orchestrator | + echo '# Neutron'
2026-02-08 05:02:34.283129 | orchestrator | + echo
2026-02-08 05:02:34.283144 | orchestrator | + openstack network agent list
2026-02-08 05:02:37.084663 | orchestrator | +--------------------------------------+------------------------------+----------------+-------------------+-------+-------+----------------------------+
2026-02-08 05:02:37.084778 | orchestrator | | ID | Agent Type | Host | Availability Zone | Alive | State | Binary |
2026-02-08 05:02:37.084792 | orchestrator | +--------------------------------------+------------------------------+----------------+-------------------+-------+-------+----------------------------+
2026-02-08 05:02:37.084804 | orchestrator | | testbed-node-0 | OVN Controller Gateway agent | testbed-node-0 | nova | :-) | UP | ovn-controller |
2026-02-08 05:02:37.084814 | orchestrator | | testbed-node-3 | OVN Controller agent | testbed-node-3 | | :-) | UP | ovn-controller |
2026-02-08 05:02:37.084857 | orchestrator | | testbed-node-1 | OVN Controller Gateway agent | testbed-node-1 | nova | :-) | UP | ovn-controller |
2026-02-08 05:02:37.084869 | orchestrator | | testbed-node-4 | OVN Controller agent | testbed-node-4 | | :-) | UP | ovn-controller |
2026-02-08 05:02:37.084898 | orchestrator | | testbed-node-2 | OVN Controller Gateway agent | testbed-node-2 | nova | :-) | UP | ovn-controller |
2026-02-08 05:02:37.084911 | orchestrator | | testbed-node-5 | OVN Controller agent | testbed-node-5 | | :-) | UP | ovn-controller |
2026-02-08 05:02:37.084922 | orchestrator | | 36b9d21c-9928-5c0a-9b27-73ac7a3e770c | OVN Metadata agent | testbed-node-5 | | :-) | UP | neutron-ovn-metadata-agent |
2026-02-08 05:02:37.084933 | orchestrator | | 4939696e-6092-5a33-bb73-b850064684df | OVN Metadata agent | testbed-node-4 | | :-) | UP | neutron-ovn-metadata-agent |
2026-02-08 05:02:37.084944 | orchestrator | | e645415a-98f5-5758-8cd1-c47af282b5c0 | OVN Metadata agent | testbed-node-3 | | :-) | UP | neutron-ovn-metadata-agent |
2026-02-08 05:02:37.084955 | orchestrator | +--------------------------------------+------------------------------+----------------+-------------------+-------+-------+----------------------------+
2026-02-08 05:02:37.414746 | orchestrator | + openstack network service provider list
2026-02-08 05:02:39.970309 | orchestrator | +---------------+------+---------+
2026-02-08 05:02:39.970410 | orchestrator | | Service Type | Name | Default |
2026-02-08 05:02:39.970424 | orchestrator | +---------------+------+---------+
2026-02-08 05:02:39.970434 | orchestrator | | L3_ROUTER_NAT | ovn | True |
2026-02-08 05:02:39.970445 | orchestrator | +---------------+------+---------+
2026-02-08 05:02:40.302426 | orchestrator |
2026-02-08 05:02:40.302530 | orchestrator | # Nova
2026-02-08 05:02:40.302543 | orchestrator |
2026-02-08 05:02:40.302550 | orchestrator | + echo
2026-02-08 05:02:40.302558 | orchestrator | + echo '# Nova'
2026-02-08 05:02:40.302567 | orchestrator | + echo
2026-02-08 05:02:40.302622 | orchestrator | + openstack compute service list
2026-02-08 05:02:42.982356 | orchestrator | +--------------------------------------+----------------+----------------+----------+---------+-------+----------------------------+
2026-02-08 05:02:42.982501 | orchestrator | | ID | Binary | Host | Zone | Status | State | Updated At |
2026-02-08 05:02:42.982527 | orchestrator | +--------------------------------------+----------------+----------------+----------+---------+-------+----------------------------+
2026-02-08 05:02:42.982541 | orchestrator | | 5f4df8b8-14d8-47b8-89ef-870051d4a1f1 | nova-scheduler | testbed-node-0 | internal | enabled | up | 2026-02-08T05:02:37.000000 |
2026-02-08 05:02:42.982648 | orchestrator | | 9c8f8527-57ea-4445-8f49-6cec8e11c40e | nova-scheduler | testbed-node-1 | internal | enabled | up | 2026-02-08T05:02:39.000000 |
2026-02-08 05:02:42.982669 | orchestrator | | 5bfc57f4-4292-474f-9fcc-9eab4cd02fa9 | nova-scheduler | testbed-node-2 | internal | enabled | up | 2026-02-08T05:02:40.000000 |
2026-02-08 05:02:42.982687 | orchestrator | | c5763463-d902-40a9-ae95-5f6de3a3167f | nova-conductor | testbed-node-0 | internal | enabled | up | 2026-02-08T05:02:41.000000 |
2026-02-08 05:02:42.982707 | orchestrator | | 8d7938ff-cb54-4000-948a-eb2b619ea29f | nova-conductor | testbed-node-1 | internal | enabled | up | 2026-02-08T05:02:42.000000 |
2026-02-08 05:02:42.982727 | orchestrator
| | 91d435ad-7298-4f09-834d-8462ccd0de83 | nova-conductor | testbed-node-2 | internal | enabled | up | 2026-02-08T05:02:42.000000 | 2026-02-08 05:02:42.982747 | orchestrator | | 643d8f4e-6114-4d6b-a96a-a9268ed7c0f1 | nova-compute | testbed-node-4 | nova | enabled | up | 2026-02-08T05:02:32.000000 | 2026-02-08 05:02:42.982765 | orchestrator | | ce9c9c80-16f1-4e41-a9ce-30fd1a3e63f7 | nova-compute | testbed-node-3 | nova | enabled | up | 2026-02-08T05:02:33.000000 | 2026-02-08 05:02:42.982776 | orchestrator | | 5fac9b45-0c9d-4010-af35-73dc7aa0a553 | nova-compute | testbed-node-5 | nova | enabled | up | 2026-02-08T05:02:33.000000 | 2026-02-08 05:02:42.982788 | orchestrator | +--------------------------------------+----------------+----------------+----------+---------+-------+----------------------------+ 2026-02-08 05:02:43.324215 | orchestrator | + openstack hypervisor list 2026-02-08 05:02:46.209111 | orchestrator | +--------------------------------------+---------------------+-----------------+---------------+-------+ 2026-02-08 05:02:46.209218 | orchestrator | | ID | Hypervisor Hostname | Hypervisor Type | Host IP | State | 2026-02-08 05:02:46.209236 | orchestrator | +--------------------------------------+---------------------+-----------------+---------------+-------+ 2026-02-08 05:02:46.209248 | orchestrator | | 90ce6f2a-f8d5-4cb3-9cb6-0651878e29d2 | testbed-node-4 | QEMU | 192.168.16.14 | up | 2026-02-08 05:02:46.209260 | orchestrator | | 035adbac-936e-490f-b723-5aee446f0fb4 | testbed-node-3 | QEMU | 192.168.16.13 | up | 2026-02-08 05:02:46.209271 | orchestrator | | 1bfc6c49-cde7-4452-8726-81363e4e3e42 | testbed-node-5 | QEMU | 192.168.16.15 | up | 2026-02-08 05:02:46.209282 | orchestrator | +--------------------------------------+---------------------+-----------------+---------------+-------+ 2026-02-08 05:02:46.523480 | orchestrator | 2026-02-08 05:02:46.523612 | orchestrator | # Run OpenStack test play 2026-02-08 05:02:46.523627 | orchestrator | 2026-02-08 
05:02:46.523639 | orchestrator | + echo 2026-02-08 05:02:46.523648 | orchestrator | + echo '# Run OpenStack test play' 2026-02-08 05:02:46.523658 | orchestrator | + echo 2026-02-08 05:02:46.523667 | orchestrator | + osism apply --environment openstack test 2026-02-08 05:02:48.663752 | orchestrator | 2026-02-08 05:02:48 | INFO  | Trying to run play test in environment openstack 2026-02-08 05:02:58.906845 | orchestrator | 2026-02-08 05:02:58 | INFO  | Task 5cb65f87-c4fd-424a-b259-209649cb3116 (test) was prepared for execution. 2026-02-08 05:02:58.906989 | orchestrator | 2026-02-08 05:02:58 | INFO  | It takes a moment until task 5cb65f87-c4fd-424a-b259-209649cb3116 (test) has been started and output is visible here. 2026-02-08 05:05:39.341356 | orchestrator | 2026-02-08 05:05:39.341471 | orchestrator | PLAY [Create test project] ***************************************************** 2026-02-08 05:05:39.341486 | orchestrator | 2026-02-08 05:05:39.341495 | orchestrator | TASK [Create test domain] ****************************************************** 2026-02-08 05:05:39.341503 | orchestrator | Sunday 08 February 2026 05:03:03 +0000 (0:00:00.087) 0:00:00.087 ******* 2026-02-08 05:05:39.341510 | orchestrator | changed: [localhost] 2026-02-08 05:05:39.341518 | orchestrator | 2026-02-08 05:05:39.341526 | orchestrator | TASK [Create test-admin user] ************************************************** 2026-02-08 05:05:39.341533 | orchestrator | Sunday 08 February 2026 05:03:07 +0000 (0:00:03.826) 0:00:03.913 ******* 2026-02-08 05:05:39.341540 | orchestrator | changed: [localhost] 2026-02-08 05:05:39.341547 | orchestrator | 2026-02-08 05:05:39.341574 | orchestrator | TASK [Add manager role to user test-admin] ************************************* 2026-02-08 05:05:39.341581 | orchestrator | Sunday 08 February 2026 05:03:11 +0000 (0:00:04.272) 0:00:08.186 ******* 2026-02-08 05:05:39.341588 | orchestrator | changed: [localhost] 2026-02-08 05:05:39.341595 | orchestrator | 2026-02-08 
05:05:39.341602 | orchestrator | TASK [Create test project] ***************************************************** 2026-02-08 05:05:39.341609 | orchestrator | Sunday 08 February 2026 05:03:18 +0000 (0:00:07.238) 0:00:15.425 ******* 2026-02-08 05:05:39.341616 | orchestrator | changed: [localhost] 2026-02-08 05:05:39.341623 | orchestrator | 2026-02-08 05:05:39.341630 | orchestrator | TASK [Create test user] ******************************************************** 2026-02-08 05:05:39.341637 | orchestrator | Sunday 08 February 2026 05:03:23 +0000 (0:00:04.273) 0:00:19.699 ******* 2026-02-08 05:05:39.341644 | orchestrator | changed: [localhost] 2026-02-08 05:05:39.341651 | orchestrator | 2026-02-08 05:05:39.341657 | orchestrator | TASK [Add member roles to user test] ******************************************* 2026-02-08 05:05:39.341664 | orchestrator | Sunday 08 February 2026 05:03:27 +0000 (0:00:04.464) 0:00:24.163 ******* 2026-02-08 05:05:39.341671 | orchestrator | changed: [localhost] => (item=load-balancer_member) 2026-02-08 05:05:39.341679 | orchestrator | changed: [localhost] => (item=member) 2026-02-08 05:05:39.341687 | orchestrator | changed: [localhost] => (item=creator) 2026-02-08 05:05:39.341694 | orchestrator | 2026-02-08 05:05:39.341701 | orchestrator | TASK [Create test server group] ************************************************ 2026-02-08 05:05:39.341707 | orchestrator | Sunday 08 February 2026 05:03:39 +0000 (0:00:12.102) 0:00:36.265 ******* 2026-02-08 05:05:39.341714 | orchestrator | changed: [localhost] 2026-02-08 05:05:39.341721 | orchestrator | 2026-02-08 05:05:39.341728 | orchestrator | TASK [Create ssh security group] *********************************************** 2026-02-08 05:05:39.341735 | orchestrator | Sunday 08 February 2026 05:03:44 +0000 (0:00:04.458) 0:00:40.724 ******* 2026-02-08 05:05:39.341742 | orchestrator | changed: [localhost] 2026-02-08 05:05:39.341748 | orchestrator | 2026-02-08 05:05:39.341755 | orchestrator | TASK [Add rule 
to ssh security group] ****************************************** 2026-02-08 05:05:39.341762 | orchestrator | Sunday 08 February 2026 05:03:49 +0000 (0:00:05.024) 0:00:45.749 ******* 2026-02-08 05:05:39.341769 | orchestrator | changed: [localhost] 2026-02-08 05:05:39.341776 | orchestrator | 2026-02-08 05:05:39.341783 | orchestrator | TASK [Create icmp security group] ********************************************** 2026-02-08 05:05:39.341789 | orchestrator | Sunday 08 February 2026 05:03:53 +0000 (0:00:04.478) 0:00:50.227 ******* 2026-02-08 05:05:39.341796 | orchestrator | changed: [localhost] 2026-02-08 05:05:39.341803 | orchestrator | 2026-02-08 05:05:39.341810 | orchestrator | TASK [Add rule to icmp security group] ***************************************** 2026-02-08 05:05:39.341816 | orchestrator | Sunday 08 February 2026 05:03:58 +0000 (0:00:04.381) 0:00:54.608 ******* 2026-02-08 05:05:39.341823 | orchestrator | changed: [localhost] 2026-02-08 05:05:39.341830 | orchestrator | 2026-02-08 05:05:39.341837 | orchestrator | TASK [Create test keypair] ***************************************************** 2026-02-08 05:05:39.341844 | orchestrator | Sunday 08 February 2026 05:04:02 +0000 (0:00:04.192) 0:00:58.801 ******* 2026-02-08 05:05:39.341850 | orchestrator | changed: [localhost] 2026-02-08 05:05:39.341857 | orchestrator | 2026-02-08 05:05:39.341864 | orchestrator | TASK [Create test network] ***************************************************** 2026-02-08 05:05:39.341871 | orchestrator | Sunday 08 February 2026 05:04:06 +0000 (0:00:03.977) 0:01:02.778 ******* 2026-02-08 05:05:39.341878 | orchestrator | changed: [localhost] 2026-02-08 05:05:39.341884 | orchestrator | 2026-02-08 05:05:39.341892 | orchestrator | TASK [Create test subnet] ****************************************************** 2026-02-08 05:05:39.341899 | orchestrator | Sunday 08 February 2026 05:04:11 +0000 (0:00:04.857) 0:01:07.636 ******* 2026-02-08 05:05:39.341905 | orchestrator | changed: 
[localhost] 2026-02-08 05:05:39.341912 | orchestrator | 2026-02-08 05:05:39.341919 | orchestrator | TASK [Create test router] ****************************************************** 2026-02-08 05:05:39.341926 | orchestrator | Sunday 08 February 2026 05:04:16 +0000 (0:00:05.504) 0:01:13.140 ******* 2026-02-08 05:05:39.341939 | orchestrator | changed: [localhost] 2026-02-08 05:05:39.341946 | orchestrator | 2026-02-08 05:05:39.341953 | orchestrator | PLAY [Manage test instances and volumes] *************************************** 2026-02-08 05:05:39.341960 | orchestrator | 2026-02-08 05:05:39.341966 | orchestrator | TASK [Get test server group] *************************************************** 2026-02-08 05:05:39.341973 | orchestrator | Sunday 08 February 2026 05:04:26 +0000 (0:00:10.322) 0:01:23.463 ******* 2026-02-08 05:05:39.341980 | orchestrator | ok: [localhost] 2026-02-08 05:05:39.341987 | orchestrator | 2026-02-08 05:05:39.341994 | orchestrator | TASK [Detach test volume] ****************************************************** 2026-02-08 05:05:39.342001 | orchestrator | Sunday 08 February 2026 05:04:31 +0000 (0:00:04.490) 0:01:27.954 ******* 2026-02-08 05:05:39.342008 | orchestrator | skipping: [localhost] 2026-02-08 05:05:39.342057 | orchestrator | 2026-02-08 05:05:39.342065 | orchestrator | TASK [Delete test volume] ****************************************************** 2026-02-08 05:05:39.342072 | orchestrator | Sunday 08 February 2026 05:04:31 +0000 (0:00:00.047) 0:01:28.001 ******* 2026-02-08 05:05:39.342079 | orchestrator | skipping: [localhost] 2026-02-08 05:05:39.342085 | orchestrator | 2026-02-08 05:05:39.342092 | orchestrator | TASK [Delete test instances] *************************************************** 2026-02-08 05:05:39.342099 | orchestrator | Sunday 08 February 2026 05:04:31 +0000 (0:00:00.042) 0:01:28.044 ******* 2026-02-08 05:05:39.342118 | orchestrator | skipping: [localhost] => (item=test-4)  2026-02-08 05:05:39.342126 | orchestrator | 
skipping: [localhost] => (item=test-3)  2026-02-08 05:05:39.342146 | orchestrator | skipping: [localhost] => (item=test-2)  2026-02-08 05:05:39.342154 | orchestrator | skipping: [localhost] => (item=test-1)  2026-02-08 05:05:39.342161 | orchestrator | skipping: [localhost] => (item=test)  2026-02-08 05:05:39.342168 | orchestrator | skipping: [localhost] 2026-02-08 05:05:39.342174 | orchestrator | 2026-02-08 05:05:39.342185 | orchestrator | TASK [Wait for instance deletion to complete] ********************************** 2026-02-08 05:05:39.342196 | orchestrator | Sunday 08 February 2026 05:04:31 +0000 (0:00:00.167) 0:01:28.212 ******* 2026-02-08 05:05:39.342206 | orchestrator | skipping: [localhost] 2026-02-08 05:05:39.342217 | orchestrator | 2026-02-08 05:05:39.342228 | orchestrator | TASK [Create test instances] *************************************************** 2026-02-08 05:05:39.342239 | orchestrator | Sunday 08 February 2026 05:04:31 +0000 (0:00:00.153) 0:01:28.366 ******* 2026-02-08 05:05:39.342250 | orchestrator | changed: [localhost] => (item=test) 2026-02-08 05:05:39.342260 | orchestrator | changed: [localhost] => (item=test-1) 2026-02-08 05:05:39.342267 | orchestrator | changed: [localhost] => (item=test-2) 2026-02-08 05:05:39.342274 | orchestrator | changed: [localhost] => (item=test-3) 2026-02-08 05:05:39.342281 | orchestrator | changed: [localhost] => (item=test-4) 2026-02-08 05:05:39.342288 | orchestrator | 2026-02-08 05:05:39.342295 | orchestrator | TASK [Wait for instance creation to complete] ********************************** 2026-02-08 05:05:39.342301 | orchestrator | Sunday 08 February 2026 05:04:37 +0000 (0:00:05.264) 0:01:33.630 ******* 2026-02-08 05:05:39.342308 | orchestrator | FAILED - RETRYING: [localhost]: Wait for instance creation to complete (60 retries left). 2026-02-08 05:05:39.342317 | orchestrator | FAILED - RETRYING: [localhost]: Wait for instance creation to complete (59 retries left). 
2026-02-08 05:05:39.342324 | orchestrator | FAILED - RETRYING: [localhost]: Wait for instance creation to complete (58 retries left). 2026-02-08 05:05:39.342331 | orchestrator | FAILED - RETRYING: [localhost]: Wait for instance creation to complete (57 retries left). 2026-02-08 05:05:39.342340 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j260945393858.3740', 'results_file': '/ansible/.ansible_async/j260945393858.3740', 'changed': True, 'item': 'test', 'ansible_loop_var': 'item'}) 2026-02-08 05:05:39.342349 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j546609265651.3765', 'results_file': '/ansible/.ansible_async/j546609265651.3765', 'changed': True, 'item': 'test-1', 'ansible_loop_var': 'item'}) 2026-02-08 05:05:39.342363 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j262051304020.3790', 'results_file': '/ansible/.ansible_async/j262051304020.3790', 'changed': True, 'item': 'test-2', 'ansible_loop_var': 'item'}) 2026-02-08 05:05:39.342370 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j274044062235.3815', 'results_file': '/ansible/.ansible_async/j274044062235.3815', 'changed': True, 'item': 'test-3', 'ansible_loop_var': 'item'}) 2026-02-08 05:05:39.342378 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j391288029511.3840', 'results_file': '/ansible/.ansible_async/j391288029511.3840', 'changed': True, 'item': 'test-4', 'ansible_loop_var': 'item'}) 2026-02-08 05:05:39.342385 | orchestrator | 2026-02-08 05:05:39.342392 | orchestrator | TASK [Add metadata to instances] *********************************************** 2026-02-08 05:05:39.342399 | orchestrator | Sunday 08 February 2026 05:05:24 +0000 (0:00:47.205) 0:02:20.836 ******* 2026-02-08 05:05:39.342406 | 
orchestrator | changed: [localhost] => (item=test) 2026-02-08 05:05:39.342413 | orchestrator | changed: [localhost] => (item=test-1) 2026-02-08 05:05:39.342420 | orchestrator | changed: [localhost] => (item=test-2) 2026-02-08 05:05:39.342427 | orchestrator | changed: [localhost] => (item=test-3) 2026-02-08 05:05:39.342434 | orchestrator | changed: [localhost] => (item=test-4) 2026-02-08 05:05:39.342457 | orchestrator | 2026-02-08 05:05:39.342465 | orchestrator | TASK [Wait for metadata to be added] ******************************************* 2026-02-08 05:05:39.342472 | orchestrator | Sunday 08 February 2026 05:05:29 +0000 (0:00:05.212) 0:02:26.048 ******* 2026-02-08 05:05:39.342478 | orchestrator | FAILED - RETRYING: [localhost]: Wait for metadata to be added (30 retries left). 2026-02-08 05:05:39.342486 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j460332949790.3944', 'results_file': '/ansible/.ansible_async/j460332949790.3944', 'changed': True, 'item': 'test', 'ansible_loop_var': 'item'}) 2026-02-08 05:05:39.342493 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j669201996774.3969', 'results_file': '/ansible/.ansible_async/j669201996774.3969', 'changed': True, 'item': 'test-1', 'ansible_loop_var': 'item'}) 2026-02-08 05:05:39.342500 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j469796773614.3994', 'results_file': '/ansible/.ansible_async/j469796773614.3994', 'changed': True, 'item': 'test-2', 'ansible_loop_var': 'item'}) 2026-02-08 05:05:39.342520 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j568262613444.4019', 'results_file': '/ansible/.ansible_async/j568262613444.4019', 'changed': True, 'item': 'test-3', 'ansible_loop_var': 'item'}) 2026-02-08 05:06:21.546866 | orchestrator | changed: [localhost] => 
(item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j777918063635.4044', 'results_file': '/ansible/.ansible_async/j777918063635.4044', 'changed': True, 'item': 'test-4', 'ansible_loop_var': 'item'}) 2026-02-08 05:06:21.546962 | orchestrator | 2026-02-08 05:06:21.546974 | orchestrator | TASK [Add tag to instances] **************************************************** 2026-02-08 05:06:21.546984 | orchestrator | Sunday 08 February 2026 05:05:39 +0000 (0:00:09.882) 0:02:35.931 ******* 2026-02-08 05:06:21.546992 | orchestrator | changed: [localhost] => (item=test) 2026-02-08 05:06:21.547001 | orchestrator | changed: [localhost] => (item=test-1) 2026-02-08 05:06:21.547009 | orchestrator | changed: [localhost] => (item=test-2) 2026-02-08 05:06:21.547016 | orchestrator | changed: [localhost] => (item=test-3) 2026-02-08 05:06:21.547024 | orchestrator | changed: [localhost] => (item=test-4) 2026-02-08 05:06:21.547031 | orchestrator | 2026-02-08 05:06:21.547039 | orchestrator | TASK [Wait for tags to be added] *********************************************** 2026-02-08 05:06:21.547067 | orchestrator | Sunday 08 February 2026 05:05:44 +0000 (0:00:05.271) 0:02:41.203 ******* 2026-02-08 05:06:21.547075 | orchestrator | FAILED - RETRYING: [localhost]: Wait for tags to be added (30 retries left). 
2026-02-08 05:06:21.547084 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j58203980957.4113', 'results_file': '/ansible/.ansible_async/j58203980957.4113', 'changed': True, 'item': 'test', 'ansible_loop_var': 'item'}) 2026-02-08 05:06:21.547092 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j181566092912.4138', 'results_file': '/ansible/.ansible_async/j181566092912.4138', 'changed': True, 'item': 'test-1', 'ansible_loop_var': 'item'}) 2026-02-08 05:06:21.547100 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j251120905051.4164', 'results_file': '/ansible/.ansible_async/j251120905051.4164', 'changed': True, 'item': 'test-2', 'ansible_loop_var': 'item'}) 2026-02-08 05:06:21.547108 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j569332253694.4190', 'results_file': '/ansible/.ansible_async/j569332253694.4190', 'changed': True, 'item': 'test-3', 'ansible_loop_var': 'item'}) 2026-02-08 05:06:21.547116 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j206759254253.4216', 'results_file': '/ansible/.ansible_async/j206759254253.4216', 'changed': True, 'item': 'test-4', 'ansible_loop_var': 'item'}) 2026-02-08 05:06:21.547123 | orchestrator | 2026-02-08 05:06:21.547131 | orchestrator | TASK [Create test volume] ****************************************************** 2026-02-08 05:06:21.547138 | orchestrator | Sunday 08 February 2026 05:05:55 +0000 (0:00:10.837) 0:02:52.041 ******* 2026-02-08 05:06:21.547146 | orchestrator | changed: [localhost] 2026-02-08 05:06:21.547154 | orchestrator | 2026-02-08 05:06:21.547161 | orchestrator | TASK [Attach test volume] ****************************************************** 2026-02-08 05:06:21.547168 | orchestrator | Sunday 08 February 2026 
05:06:01 +0000 (0:00:06.437) 0:02:58.478 ******* 2026-02-08 05:06:21.547176 | orchestrator | changed: [localhost] 2026-02-08 05:06:21.547183 | orchestrator | 2026-02-08 05:06:21.547191 | orchestrator | TASK [Create floating ip address] ********************************************** 2026-02-08 05:06:21.547198 | orchestrator | Sunday 08 February 2026 05:06:15 +0000 (0:00:13.704) 0:03:12.182 ******* 2026-02-08 05:06:21.547205 | orchestrator | ok: [localhost] 2026-02-08 05:06:21.547213 | orchestrator | 2026-02-08 05:06:21.547221 | orchestrator | TASK [Print floating ip address] *********************************************** 2026-02-08 05:06:21.547228 | orchestrator | Sunday 08 February 2026 05:06:21 +0000 (0:00:05.588) 0:03:17.771 ******* 2026-02-08 05:06:21.547235 | orchestrator | ok: [localhost] => { 2026-02-08 05:06:21.547243 | orchestrator |  "msg": "192.168.112.199" 2026-02-08 05:06:21.547251 | orchestrator | } 2026-02-08 05:06:21.547258 | orchestrator | 2026-02-08 05:06:21.547266 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-08 05:06:21.547274 | orchestrator | localhost : ok=26  changed=23  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-02-08 05:06:21.547283 | orchestrator | 2026-02-08 05:06:21.547290 | orchestrator | 2026-02-08 05:06:21.547298 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-08 05:06:21.547305 | orchestrator | Sunday 08 February 2026 05:06:21 +0000 (0:00:00.048) 0:03:17.819 ******* 2026-02-08 05:06:21.547312 | orchestrator | =============================================================================== 2026-02-08 05:06:21.547320 | orchestrator | Wait for instance creation to complete --------------------------------- 47.21s 2026-02-08 05:06:21.547327 | orchestrator | Attach test volume ----------------------------------------------------- 13.70s 2026-02-08 05:06:21.547334 | orchestrator | Add member roles to user 
test ------------------------------------------ 12.10s 2026-02-08 05:06:21.547342 | orchestrator | Wait for tags to be added ---------------------------------------------- 10.84s 2026-02-08 05:06:21.547368 | orchestrator | Create test router ----------------------------------------------------- 10.32s 2026-02-08 05:06:21.547376 | orchestrator | Wait for metadata to be added ------------------------------------------- 9.88s 2026-02-08 05:06:21.547383 | orchestrator | Add manager role to user test-admin ------------------------------------- 7.24s 2026-02-08 05:06:21.547404 | orchestrator | Create test volume ------------------------------------------------------ 6.44s 2026-02-08 05:06:21.547443 | orchestrator | Create floating ip address ---------------------------------------------- 5.59s 2026-02-08 05:06:21.547455 | orchestrator | Create test subnet ------------------------------------------------------ 5.50s 2026-02-08 05:06:21.547465 | orchestrator | Add tag to instances ---------------------------------------------------- 5.27s 2026-02-08 05:06:21.547475 | orchestrator | Create test instances --------------------------------------------------- 5.27s 2026-02-08 05:06:21.547484 | orchestrator | Add metadata to instances ----------------------------------------------- 5.21s 2026-02-08 05:06:21.547493 | orchestrator | Create ssh security group ----------------------------------------------- 5.02s 2026-02-08 05:06:21.547501 | orchestrator | Create test network ----------------------------------------------------- 4.86s 2026-02-08 05:06:21.547511 | orchestrator | Get test server group --------------------------------------------------- 4.49s 2026-02-08 05:06:21.547519 | orchestrator | Add rule to ssh security group ------------------------------------------ 4.48s 2026-02-08 05:06:21.547528 | orchestrator | Create test user -------------------------------------------------------- 4.46s 2026-02-08 05:06:21.547536 | orchestrator | Create test server group 
------------------------------------------------ 4.46s 2026-02-08 05:06:21.547556 | orchestrator | Create icmp security group ---------------------------------------------- 4.38s 2026-02-08 05:06:21.939323 | orchestrator | + server_list 2026-02-08 05:06:21.939390 | orchestrator | + openstack --os-cloud test server list 2026-02-08 05:06:25.821267 | orchestrator | +--------------------------------------+--------+--------+---------------------------------------+--------------------------+----------+ 2026-02-08 05:06:25.821350 | orchestrator | | ID | Name | Status | Networks | Image | Flavor | 2026-02-08 05:06:25.821358 | orchestrator | +--------------------------------------+--------+--------+---------------------------------------+--------------------------+----------+ 2026-02-08 05:06:25.821365 | orchestrator | | 5cdafc11-a89a-46bf-9baa-91ef8e1d55fa | test-4 | ACTIVE | test=192.168.112.127, 192.168.200.106 | N/A (booted from volume) | SCS-1L-1 | 2026-02-08 05:06:25.821371 | orchestrator | | 523b0a98-f11b-46e2-bea5-b2db93de683a | test-3 | ACTIVE | test=192.168.112.186, 192.168.200.58 | N/A (booted from volume) | SCS-1L-1 | 2026-02-08 05:06:25.821377 | orchestrator | | 89376a79-1959-4159-9b61-e058fd424e31 | test-2 | ACTIVE | test=192.168.112.156, 192.168.200.175 | N/A (booted from volume) | SCS-1L-1 | 2026-02-08 05:06:25.821383 | orchestrator | | 59b12d1d-5f43-4bd9-9879-87ea0d9fb02b | test-1 | ACTIVE | test=192.168.112.135, 192.168.200.20 | N/A (booted from volume) | SCS-1L-1 | 2026-02-08 05:06:25.821389 | orchestrator | | 2042c693-e0ee-431a-95d3-483fbefc2047 | test | ACTIVE | test=192.168.112.199, 192.168.200.121 | N/A (booted from volume) | SCS-1L-1 | 2026-02-08 05:06:25.821395 | orchestrator | +--------------------------------------+--------+--------+---------------------------------------+--------------------------+----------+ 2026-02-08 05:06:26.136750 | orchestrator | + openstack --os-cloud test server show test 2026-02-08 05:06:29.386360 | orchestrator | 
+-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-02-08 05:06:29.386527 | orchestrator | | Field | Value | 2026-02-08 05:06:29.386582 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-02-08 05:06:29.386602 | orchestrator | | OS-DCF:diskConfig | MANUAL | 2026-02-08 05:06:29.386612 | orchestrator | | OS-EXT-AZ:availability_zone | nova | 2026-02-08 05:06:29.386622 | orchestrator | | OS-EXT-SRV-ATTR:host | None | 2026-02-08 05:06:29.386630 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test | 2026-02-08 05:06:29.386638 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None | 2026-02-08 05:06:29.386646 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None | 2026-02-08 05:06:29.386670 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None | 2026-02-08 05:06:29.386679 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None | 2026-02-08 05:06:29.386704 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None | 2026-02-08 05:06:29.386713 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None | 2026-02-08 05:06:29.386725 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None | 2026-02-08 05:06:29.386734 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None | 2026-02-08 05:06:29.386742 | orchestrator | | OS-EXT-STS:power_state | 
Running |
2026-02-08 05:06:29.386751 | orchestrator | | OS-EXT-STS:task_state | None |
2026-02-08 05:06:29.386759 | orchestrator | | OS-EXT-STS:vm_state | active |
2026-02-08 05:06:29.386767 | orchestrator | | OS-SRV-USG:launched_at | 2026-02-08T05:05:06.000000 |
2026-02-08 05:06:29.386782 | orchestrator | | OS-SRV-USG:terminated_at | None |
2026-02-08 05:06:29.386802 | orchestrator | | accessIPv4 | |
2026-02-08 05:06:29.386807 | orchestrator | | accessIPv6 | |
2026-02-08 05:06:29.386813 | orchestrator | | addresses | test=192.168.112.199, 192.168.200.121 |
2026-02-08 05:06:29.386822 | orchestrator | | config_drive | |
2026-02-08 05:06:29.386828 | orchestrator | | created | 2026-02-08T05:04:40Z |
2026-02-08 05:06:29.386837 | orchestrator | | description | None |
2026-02-08 05:06:29.386845 | orchestrator | | flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='true', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' |
2026-02-08 05:06:29.386853 | orchestrator | | hostId | ecc2b7654219214f3da87982b8c9c168f0a4726f290131611181ad06 |
2026-02-08 05:06:29.386861 | orchestrator | | host_status | None |
2026-02-08 05:06:29.386876 | orchestrator | | id | 2042c693-e0ee-431a-95d3-483fbefc2047 |
2026-02-08 05:06:29.386895 | orchestrator | | image | N/A (booted from volume) |
2026-02-08 05:06:29.386930 | orchestrator | | key_name | test |
2026-02-08 05:06:29.386941 | orchestrator | | locked | False |
2026-02-08 05:06:29.386950 | orchestrator | | locked_reason | None |
2026-02-08 05:06:29.386959 | orchestrator | | name | test |
2026-02-08 05:06:29.386967 | orchestrator | | pinned_availability_zone | None |
2026-02-08 05:06:29.386976 | orchestrator | | progress | 0 |
2026-02-08 05:06:29.386985 | orchestrator | | project_id | cb6ce246106c45a59d318467712d7ab8 |
2026-02-08 05:06:29.386993 | orchestrator | | properties | hostname='test' |
2026-02-08 05:06:29.387021 | orchestrator | | security_groups | name='ssh' |
2026-02-08 05:06:29.387031 | orchestrator | | | name='icmp' |
2026-02-08 05:06:29.387040 | orchestrator | | server_groups | None |
2026-02-08 05:06:29.387049 | orchestrator | | status | ACTIVE |
2026-02-08 05:06:29.387061 | orchestrator | | tags | test |
2026-02-08 05:06:29.387070 | orchestrator | | trusted_image_certificates | None |
2026-02-08 05:06:29.387078 | orchestrator | | updated | 2026-02-08T05:05:30Z |
2026-02-08 05:06:29.387086 | orchestrator | | user_id | be88f638d7df4c5387728b39740f9063 |
2026-02-08 05:06:29.387092 | orchestrator | | volumes_attached | delete_on_termination='True', id='d32ac7b8-2bdd-4ee5-9ba5-238b2702d098' |
2026-02-08 05:06:29.387103 | orchestrator | | | delete_on_termination='False', id='fa465ace-53b4-48b5-8be1-eb6cddf48591' |
2026-02-08 05:06:29.391661 | orchestrator | +-------------------------------------+------------------------------------------------------------------------------+
2026-02-08 05:06:29.727215 | orchestrator | + openstack --os-cloud test server show test-1
2026-02-08 05:06:32.956642 | orchestrator | +-------------------------------------+------------------------------------------------------------------------------+
2026-02-08 05:06:32.956743 | orchestrator | | Field | Value |
2026-02-08 05:06:32.956760 | orchestrator | +-------------------------------------+------------------------------------------------------------------------------+
2026-02-08 05:06:32.956781 | orchestrator | | OS-DCF:diskConfig | MANUAL |
2026-02-08 05:06:32.956795 | orchestrator | | OS-EXT-AZ:availability_zone | nova |
2026-02-08 05:06:32.956807 | orchestrator | | OS-EXT-SRV-ATTR:host | None |
2026-02-08 05:06:32.956819 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-1 |
2026-02-08 05:06:32.956851 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None |
2026-02-08 05:06:32.956864 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None |
2026-02-08 05:06:32.956892 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None |
2026-02-08 05:06:32.956905 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None |
2026-02-08 05:06:32.956917 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None |
2026-02-08 05:06:32.956934 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None |
2026-02-08 05:06:32.956947 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None |
2026-02-08 05:06:32.956959 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None |
2026-02-08 05:06:32.956971 | orchestrator | | OS-EXT-STS:power_state | Running |
2026-02-08 05:06:32.956989 | orchestrator | | OS-EXT-STS:task_state | None |
2026-02-08 05:06:32.957001 | orchestrator | | OS-EXT-STS:vm_state | active |
2026-02-08 05:06:32.957013 | orchestrator | | OS-SRV-USG:launched_at | 2026-02-08T05:05:06.000000 |
2026-02-08 05:06:32.957032 | orchestrator | | OS-SRV-USG:terminated_at | None |
2026-02-08 05:06:32.957044 | orchestrator | | accessIPv4 | |
2026-02-08 05:06:32.957057 | orchestrator | | accessIPv6 | |
2026-02-08 05:06:32.957073 | orchestrator | | addresses | test=192.168.112.135, 192.168.200.20 |
2026-02-08 05:06:32.957086 | orchestrator | | config_drive | |
2026-02-08 05:06:32.957098 | orchestrator | | created | 2026-02-08T05:04:41Z |
2026-02-08 05:06:32.957110 | orchestrator | | description | None |
2026-02-08 05:06:32.957129 | orchestrator | | flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='true', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' |
2026-02-08 05:06:32.957141 | orchestrator | | hostId | ecc2b7654219214f3da87982b8c9c168f0a4726f290131611181ad06 |
2026-02-08 05:06:32.957153 | orchestrator | | host_status | None |
2026-02-08 05:06:32.957172 | orchestrator | | id | 59b12d1d-5f43-4bd9-9879-87ea0d9fb02b |
2026-02-08 05:06:32.957185 | orchestrator | | image | N/A (booted from volume) |
2026-02-08 05:06:32.957199 | orchestrator | | key_name | test |
2026-02-08 05:06:32.957216 | orchestrator | | locked | False |
2026-02-08 05:06:32.957230 | orchestrator | | locked_reason | None |
2026-02-08 05:06:32.957243 | orchestrator | | name | test-1 |
2026-02-08 05:06:32.957263 | orchestrator | | pinned_availability_zone | None |
2026-02-08 05:06:32.957275 | orchestrator | | progress | 0 |
2026-02-08 05:06:32.957289 | orchestrator | | project_id | cb6ce246106c45a59d318467712d7ab8 |
2026-02-08 05:06:32.957301 | orchestrator | | properties | hostname='test-1' |
2026-02-08 05:06:32.957318 | orchestrator | | security_groups | name='ssh' |
2026-02-08 05:06:32.957333 | orchestrator | | | name='icmp' |
2026-02-08 05:06:32.957347 | orchestrator | | server_groups | None |
2026-02-08 05:06:32.957359 | orchestrator | | status | ACTIVE |
2026-02-08 05:06:32.957371 | orchestrator | | tags | test |
2026-02-08 05:06:32.957389 | orchestrator | | trusted_image_certificates | None |
2026-02-08 05:06:32.957402 | orchestrator | | updated | 2026-02-08T05:05:31Z |
2026-02-08 05:06:32.957435 | orchestrator | | user_id | be88f638d7df4c5387728b39740f9063 |
2026-02-08 05:06:32.957447 | orchestrator | | volumes_attached | delete_on_termination='True', id='7dda07bc-cf6e-451a-ba14-434ca3cda874' |
2026-02-08 05:06:32.962841 | orchestrator | +-------------------------------------+------------------------------------------------------------------------------+
2026-02-08 05:06:33.333331 | orchestrator | + openstack --os-cloud test server show test-2
2026-02-08 05:06:36.346786 | orchestrator | +-------------------------------------+------------------------------------------------------------------------------+
2026-02-08 05:06:36.346883 | orchestrator | | Field | Value |
2026-02-08 05:06:36.346914 | orchestrator | +-------------------------------------+------------------------------------------------------------------------------+
2026-02-08 05:06:36.346928 | orchestrator | | OS-DCF:diskConfig | MANUAL |
2026-02-08 05:06:36.346958 | orchestrator | | OS-EXT-AZ:availability_zone | nova |
2026-02-08 05:06:36.346968 | orchestrator | | OS-EXT-SRV-ATTR:host | None |
2026-02-08 05:06:36.346976 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-2 |
2026-02-08 05:06:36.346985 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None |
2026-02-08 05:06:36.346993 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None |
2026-02-08 05:06:36.347016 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None |
2026-02-08 05:06:36.347026 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None |
2026-02-08 05:06:36.347034 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None |
2026-02-08 05:06:36.347043 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None |
2026-02-08 05:06:36.347055 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None |
2026-02-08 05:06:36.347070 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None |
2026-02-08 05:06:36.347078 | orchestrator | | OS-EXT-STS:power_state | Running |
2026-02-08 05:06:36.347087 | orchestrator | | OS-EXT-STS:task_state | None |
2026-02-08 05:06:36.347095 | orchestrator | | OS-EXT-STS:vm_state | active |
2026-02-08 05:06:36.347103 | orchestrator | | OS-SRV-USG:launched_at | 2026-02-08T05:05:09.000000 |
2026-02-08 05:06:36.347117 | orchestrator | | OS-SRV-USG:terminated_at | None |
2026-02-08 05:06:36.347126 | orchestrator | | accessIPv4 | |
2026-02-08 05:06:36.347135 | orchestrator | | accessIPv6 | |
2026-02-08 05:06:36.347143 | orchestrator | | addresses | test=192.168.112.156, 192.168.200.175 |
2026-02-08 05:06:36.347164 | orchestrator | | config_drive | |
2026-02-08 05:06:36.347173 | orchestrator | | created | 2026-02-08T05:04:42Z |
2026-02-08 05:06:36.347181 | orchestrator | | description | None |
2026-02-08 05:06:36.347190 | orchestrator | | flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='true', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' |
2026-02-08 05:06:36.347198 | orchestrator | | hostId | ecc2b7654219214f3da87982b8c9c168f0a4726f290131611181ad06 |
2026-02-08 05:06:36.347206 | orchestrator | | host_status | None |
2026-02-08 05:06:36.347220 | orchestrator | | id | 89376a79-1959-4159-9b61-e058fd424e31 |
2026-02-08 05:06:36.347228 | orchestrator | | image | N/A (booted from volume) |
2026-02-08 05:06:36.347237 | orchestrator | | key_name | test |
2026-02-08 05:06:36.347250 | orchestrator | | locked | False |
2026-02-08 05:06:36.347263 | orchestrator | | locked_reason | None |
2026-02-08 05:06:36.347271 | orchestrator | | name | test-2 |
2026-02-08 05:06:36.347280 | orchestrator | | pinned_availability_zone | None |
2026-02-08 05:06:36.347288 | orchestrator | | progress | 0 |
2026-02-08 05:06:36.347296 | orchestrator | | project_id | cb6ce246106c45a59d318467712d7ab8 |
2026-02-08 05:06:36.347305 | orchestrator | | properties | hostname='test-2' |
2026-02-08 05:06:36.347319 | orchestrator | | security_groups | name='ssh' |
2026-02-08 05:06:36.347328 | orchestrator | | | name='icmp' |
2026-02-08 05:06:36.347342 | orchestrator | | server_groups | None |
2026-02-08 05:06:36.347361 | orchestrator | | status | ACTIVE |
2026-02-08 05:06:36.347370 | orchestrator | | tags | test |
2026-02-08 05:06:36.347379 | orchestrator | | trusted_image_certificates | None |
2026-02-08 05:06:36.347387 | orchestrator | | updated | 2026-02-08T05:05:32Z |
2026-02-08 05:06:36.347396 | orchestrator | | user_id | be88f638d7df4c5387728b39740f9063 |
2026-02-08 05:06:36.347454 | orchestrator | | volumes_attached | delete_on_termination='True', id='a7f7c71a-6bbc-4082-a180-f32337ce062b' |
2026-02-08 05:06:36.350231 | orchestrator | +-------------------------------------+------------------------------------------------------------------------------+
2026-02-08 05:06:36.694895 | orchestrator | + openstack --os-cloud test server show test-3
2026-02-08 05:06:39.797504 | orchestrator | +-------------------------------------+------------------------------------------------------------------------------+
2026-02-08 05:06:39.797667 | orchestrator | | Field | Value |
2026-02-08 05:06:39.797692 | orchestrator | +-------------------------------------+------------------------------------------------------------------------------+
2026-02-08 05:06:39.797730 | orchestrator | | OS-DCF:diskConfig | MANUAL |
2026-02-08 05:06:39.797752 | orchestrator | | OS-EXT-AZ:availability_zone | nova |
2026-02-08 05:06:39.797771 | orchestrator | | OS-EXT-SRV-ATTR:host | None |
2026-02-08 05:06:39.797791 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-3 |
2026-02-08 05:06:39.797812 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None |
2026-02-08 05:06:39.797825 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None |
2026-02-08 05:06:39.797856 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None |
2026-02-08 05:06:39.797869 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None |
2026-02-08 05:06:39.797891 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None |
2026-02-08 05:06:39.797902 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None |
2026-02-08 05:06:39.797914 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None |
2026-02-08 05:06:39.797926 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None |
2026-02-08 05:06:39.797938 | orchestrator | | OS-EXT-STS:power_state | Running |
2026-02-08 05:06:39.797949 | orchestrator | | OS-EXT-STS:task_state | None |
2026-02-08 05:06:39.797961 | orchestrator | | OS-EXT-STS:vm_state | active |
2026-02-08 05:06:39.797972 | orchestrator | | OS-SRV-USG:launched_at | 2026-02-08T05:05:09.000000 |
2026-02-08 05:06:39.797990 | orchestrator | | OS-SRV-USG:terminated_at | None |
2026-02-08 05:06:39.798009 | orchestrator | | accessIPv4 | |
2026-02-08 05:06:39.798091 | orchestrator | | accessIPv6 | |
2026-02-08 05:06:39.798104 | orchestrator | | addresses | test=192.168.112.186, 192.168.200.58 |
2026-02-08 05:06:39.798571 | orchestrator | | config_drive | |
2026-02-08 05:06:39.798591 | orchestrator | | created | 2026-02-08T05:04:43Z |
2026-02-08 05:06:39.798603 | orchestrator | | description | None |
2026-02-08 05:06:39.798614 | orchestrator | | flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='true', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' |
2026-02-08 05:06:39.798626 | orchestrator | | hostId | ecc2b7654219214f3da87982b8c9c168f0a4726f290131611181ad06 |
2026-02-08 05:06:39.798637 | orchestrator | | host_status | None |
2026-02-08 05:06:39.798680 | orchestrator | | id | 523b0a98-f11b-46e2-bea5-b2db93de683a |
2026-02-08 05:06:39.798707 | orchestrator | | image | N/A (booted from volume) |
2026-02-08 05:06:39.798727 | orchestrator | | key_name | test |
2026-02-08 05:06:39.798745 | orchestrator | | locked | False |
2026-02-08 05:06:39.798763 | orchestrator | | locked_reason | None |
2026-02-08 05:06:39.798783 | orchestrator | | name | test-3 |
2026-02-08 05:06:39.798803 | orchestrator | | pinned_availability_zone | None |
2026-02-08 05:06:39.798822 | orchestrator | | progress | 0 |
2026-02-08 05:06:39.798841 | orchestrator | | project_id | cb6ce246106c45a59d318467712d7ab8 |
2026-02-08 05:06:39.798867 | orchestrator | | properties | hostname='test-3' |
2026-02-08 05:06:39.798889 | orchestrator | | security_groups | name='ssh' |
2026-02-08 05:06:39.798907 | orchestrator | | | name='icmp' |
2026-02-08 05:06:39.798920 | orchestrator | | server_groups | None |
2026-02-08 05:06:39.798931 | orchestrator | | status | ACTIVE |
2026-02-08 05:06:39.798943 | orchestrator | | tags | test |
2026-02-08 05:06:39.798954 | orchestrator | | trusted_image_certificates | None |
2026-02-08 05:06:39.798966 | orchestrator | | updated | 2026-02-08T05:05:33Z |
2026-02-08 05:06:39.798977 | orchestrator | | user_id | be88f638d7df4c5387728b39740f9063 |
2026-02-08 05:06:39.798989 | orchestrator | | volumes_attached | delete_on_termination='True', id='5a4c08a6-3186-4ce9-a962-3774bf5662ac' |
2026-02-08 05:06:39.802544 | orchestrator | +-------------------------------------+------------------------------------------------------------------------------+
2026-02-08 05:06:40.129983 | orchestrator | + openstack --os-cloud test server show test-4
2026-02-08 05:06:43.237490 | orchestrator | +-------------------------------------+------------------------------------------------------------------------------+
2026-02-08 05:06:43.237579 | orchestrator | | Field | Value |
2026-02-08 05:06:43.237589 | orchestrator | +-------------------------------------+------------------------------------------------------------------------------+
2026-02-08 05:06:43.237594 | orchestrator | | OS-DCF:diskConfig | MANUAL |
2026-02-08 05:06:43.237599 | orchestrator | | OS-EXT-AZ:availability_zone | nova |
2026-02-08 05:06:43.237603 | orchestrator | | OS-EXT-SRV-ATTR:host | None |
2026-02-08 05:06:43.237607 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-4 |
2026-02-08 05:06:43.237611 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None |
2026-02-08 05:06:43.237629 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None |
2026-02-08 05:06:43.237643 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None |
2026-02-08 05:06:43.237648 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None |
2026-02-08 05:06:43.237655 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None |
2026-02-08 05:06:43.237659 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None |
2026-02-08 05:06:43.237663 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None |
2026-02-08 05:06:43.237667 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None |
2026-02-08 05:06:43.237671 | orchestrator | | OS-EXT-STS:power_state | Running |
2026-02-08 05:06:43.237675 | orchestrator | | OS-EXT-STS:task_state | None |
2026-02-08 05:06:43.237683 | orchestrator | | OS-EXT-STS:vm_state | active |
2026-02-08 05:06:43.237687 | orchestrator | | OS-SRV-USG:launched_at | 2026-02-08T05:05:09.000000 |
2026-02-08 05:06:43.237694 | orchestrator | | OS-SRV-USG:terminated_at | None |
2026-02-08 05:06:43.237698 | orchestrator | | accessIPv4 | |
2026-02-08 05:06:43.237705 | orchestrator | | accessIPv6 | |
2026-02-08 05:06:43.237709 | orchestrator | | addresses | test=192.168.112.127, 192.168.200.106 |
2026-02-08 05:06:43.237713 | orchestrator | | config_drive | |
2026-02-08 05:06:43.237718 | orchestrator | | created | 2026-02-08T05:04:44Z |
2026-02-08 05:06:43.237722 | orchestrator | | description | None |
2026-02-08 05:06:43.237730 | orchestrator | | flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='true', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' |
2026-02-08 05:06:43.237734 | orchestrator | | hostId | 751ce4efea8fb418830e009dd9308c8d1be77e35f33dcc6e4d2c984f |
2026-02-08 05:06:43.237738 | orchestrator | | host_status | None |
2026-02-08 05:06:43.237746 | orchestrator | | id | 5cdafc11-a89a-46bf-9baa-91ef8e1d55fa |
2026-02-08 05:06:43.237750 | orchestrator | | image | N/A (booted from volume) |
2026-02-08 05:06:43.237757 | orchestrator | | key_name | test |
2026-02-08 05:06:43.237761 | orchestrator | | locked | False |
2026-02-08 05:06:43.237765 | orchestrator | | locked_reason | None |
2026-02-08 05:06:43.237769 | orchestrator | | name | test-4 |
2026-02-08 05:06:43.237776 | orchestrator | | pinned_availability_zone | None |
2026-02-08 05:06:43.237780 | orchestrator | | progress | 0 |
2026-02-08 05:06:43.237784 | orchestrator | | project_id | cb6ce246106c45a59d318467712d7ab8 |
2026-02-08 05:06:43.237788 | orchestrator | | properties | hostname='test-4' |
2026-02-08 05:06:43.237795 | orchestrator | | security_groups | name='ssh' |
2026-02-08 05:06:43.237802 | orchestrator | | | name='icmp' |
2026-02-08 05:06:43.237806 | orchestrator | | server_groups | None |
2026-02-08 05:06:43.237811 | orchestrator | | status | ACTIVE |
2026-02-08 05:06:43.237815 | orchestrator | | tags | test |
2026-02-08 05:06:43.237819 | orchestrator | | trusted_image_certificates | None |
2026-02-08 05:06:43.237826 | orchestrator | | updated | 2026-02-08T05:05:33Z |
2026-02-08 05:06:43.237830 | orchestrator | | user_id | be88f638d7df4c5387728b39740f9063 |
2026-02-08 05:06:43.237834 | orchestrator | | volumes_attached | delete_on_termination='True', id='327616b0-fb3d-458d-b4ee-c4f28a3a0d35' |
2026-02-08 05:06:43.243393 | orchestrator | +-------------------------------------+------------------------------------------------------------------------------+
2026-02-08 05:06:43.578998 | orchestrator | + server_ping
2026-02-08 05:06:43.579642 | orchestrator | ++ openstack --os-cloud test floating ip list --status ACTIVE -f value -c 'Floating IP Address'
2026-02-08 05:06:43.579827 | orchestrator | ++ tr -d '\r'
2026-02-08 05:06:46.566635 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-02-08 05:06:46.566966 | orchestrator | + ping -c3 192.168.112.186
2026-02-08 05:06:46.581057 | orchestrator | PING 192.168.112.186 (192.168.112.186) 56(84) bytes of data.
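The `server_ping` trace above iterates over every ACTIVE floating IP and pings it three times. The following is a hypothetical reconstruction of that helper based solely on the `+` trace lines (the function body and cloud name `test` come from the trace; the wrapper function `strip_cr` is an illustrative addition). The `tr -d '\r'` step is load-bearing: `-f value` output can carry carriage returns that would otherwise corrupt the `ping` argument.

```shell
#!/bin/sh
# strip_cr removes carriage returns from CLI output before word-splitting.
strip_cr() { tr -d '\r'; }

# Sketch of the server_ping helper seen in the trace (assumed body).
server_ping() {
    for address in $(openstack --os-cloud test floating ip list \
            --status ACTIVE -f value -c "Floating IP Address" | strip_cr); do
        ping -c3 "$address"
    done
}

# Demonstrate the normalization step without contacting a cloud:
printf '192.168.112.186\r\n192.168.112.135\r\n' | strip_cr
```

With five ACTIVE floating IPs and `-c3`, the loop accounts for the five three-packet ping runs that follow in the log.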
2026-02-08 05:06:46.581102 | orchestrator | 64 bytes from 192.168.112.186: icmp_seq=1 ttl=63 time=6.81 ms
2026-02-08 05:06:47.577670 | orchestrator | 64 bytes from 192.168.112.186: icmp_seq=2 ttl=63 time=2.41 ms
2026-02-08 05:06:48.578982 | orchestrator | 64 bytes from 192.168.112.186: icmp_seq=3 ttl=63 time=1.76 ms
2026-02-08 05:06:48.579509 | orchestrator |
2026-02-08 05:06:48.579545 | orchestrator | --- 192.168.112.186 ping statistics ---
2026-02-08 05:06:48.579555 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2026-02-08 05:06:48.579562 | orchestrator | rtt min/avg/max/mdev = 1.756/3.657/6.809/2.244 ms
2026-02-08 05:06:48.579746 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-02-08 05:06:48.579761 | orchestrator | + ping -c3 192.168.112.135
2026-02-08 05:06:48.593929 | orchestrator | PING 192.168.112.135 (192.168.112.135) 56(84) bytes of data.
2026-02-08 05:06:48.594002 | orchestrator | 64 bytes from 192.168.112.135: icmp_seq=1 ttl=63 time=7.88 ms
2026-02-08 05:06:49.589742 | orchestrator | 64 bytes from 192.168.112.135: icmp_seq=2 ttl=63 time=2.83 ms
2026-02-08 05:06:50.590098 | orchestrator | 64 bytes from 192.168.112.135: icmp_seq=3 ttl=63 time=1.98 ms
2026-02-08 05:06:50.590303 | orchestrator |
2026-02-08 05:06:50.590327 | orchestrator | --- 192.168.112.135 ping statistics ---
2026-02-08 05:06:50.590340 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2002ms
2026-02-08 05:06:50.590351 | orchestrator | rtt min/avg/max/mdev = 1.982/4.230/7.878/2.602 ms
2026-02-08 05:06:50.590373 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-02-08 05:06:50.590437 | orchestrator | + ping -c3 192.168.112.156
2026-02-08 05:06:50.602327 | orchestrator | PING 192.168.112.156 (192.168.112.156) 56(84) bytes of data.
2026-02-08 05:06:50.602360 | orchestrator | 64 bytes from 192.168.112.156: icmp_seq=1 ttl=63 time=6.83 ms
2026-02-08 05:06:51.599278 | orchestrator | 64 bytes from 192.168.112.156: icmp_seq=2 ttl=63 time=2.15 ms
2026-02-08 05:06:52.601169 | orchestrator | 64 bytes from 192.168.112.156: icmp_seq=3 ttl=63 time=2.14 ms
2026-02-08 05:06:52.601252 | orchestrator |
2026-02-08 05:06:52.601262 | orchestrator | --- 192.168.112.156 ping statistics ---
2026-02-08 05:06:52.601270 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2026-02-08 05:06:52.601339 | orchestrator | rtt min/avg/max/mdev = 2.135/3.706/6.834/2.211 ms
2026-02-08 05:06:52.601352 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-02-08 05:06:52.601359 | orchestrator | + ping -c3 192.168.112.199
2026-02-08 05:06:52.613010 | orchestrator | PING 192.168.112.199 (192.168.112.199) 56(84) bytes of data.
2026-02-08 05:06:52.613056 | orchestrator | 64 bytes from 192.168.112.199: icmp_seq=1 ttl=63 time=9.27 ms
2026-02-08 05:06:53.607660 | orchestrator | 64 bytes from 192.168.112.199: icmp_seq=2 ttl=63 time=2.20 ms
2026-02-08 05:06:54.609600 | orchestrator | 64 bytes from 192.168.112.199: icmp_seq=3 ttl=63 time=2.17 ms
2026-02-08 05:06:54.783310 | orchestrator |
2026-02-08 05:06:54.783381 | orchestrator | --- 192.168.112.199 ping statistics ---
2026-02-08 05:06:54.783422 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2026-02-08 05:06:54.783432 | orchestrator | rtt min/avg/max/mdev = 2.171/4.544/9.267/3.339 ms
2026-02-08 05:06:54.783442 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-02-08 05:06:54.783451 | orchestrator | + ping -c3 192.168.112.127
2026-02-08 05:06:54.783479 | orchestrator | PING 192.168.112.127 (192.168.112.127) 56(84) bytes of data.
2026-02-08 05:06:54.783488 | orchestrator | 64 bytes from 192.168.112.127: icmp_seq=1 ttl=63 time=9.97 ms
2026-02-08 05:06:55.618878 | orchestrator | 64 bytes from 192.168.112.127: icmp_seq=2 ttl=63 time=2.23 ms
2026-02-08 05:06:56.620636 | orchestrator | 64 bytes from 192.168.112.127: icmp_seq=3 ttl=63 time=1.91 ms
2026-02-08 05:06:56.620845 | orchestrator |
2026-02-08 05:06:56.620864 | orchestrator | --- 192.168.112.127 ping statistics ---
2026-02-08 05:06:56.620878 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2026-02-08 05:06:56.620890 | orchestrator | rtt min/avg/max/mdev = 1.911/4.703/9.969/3.725 ms
2026-02-08 05:06:56.620913 | orchestrator | + [[ 9.5.0 == \l\a\t\e\s\t ]]
2026-02-08 05:06:56.793822 | orchestrator | ok: Runtime: 0:09:12.493736
2026-02-08 05:06:56.849556 |
2026-02-08 05:06:56.849691 | TASK [Run tempest]
2026-02-08 05:06:57.385073 | orchestrator | skipping: Conditional result was False
2026-02-08 05:06:57.403553 |
2026-02-08 05:06:57.403781 | TASK [Check prometheus alert status]
2026-02-08 05:06:57.942246 | orchestrator | skipping: Conditional result was False
2026-02-08 05:06:57.955364 |
2026-02-08 05:06:57.955527 | PLAY [Upgrade testbed]
2026-02-08 05:06:57.966645 |
2026-02-08 05:06:57.966773 | TASK [Print next ceph version]
2026-02-08 05:06:58.048237 | orchestrator | ok
2026-02-08 05:06:58.058432 |
2026-02-08 05:06:58.058575 | TASK [Print next openstack version]
2026-02-08 05:06:58.129367 | orchestrator | ok
2026-02-08 05:06:58.142566 |
2026-02-08 05:06:58.142699 | TASK [Print next manager version]
2026-02-08 05:06:58.222036 | orchestrator | ok
2026-02-08 05:06:58.232030 |
2026-02-08 05:06:58.232145 | TASK [Set cloud fact (Zuul deployment)]
2026-02-08 05:06:58.302613 | orchestrator | ok
2026-02-08 05:06:58.315131 |
2026-02-08 05:06:58.315268 | TASK [Set cloud fact (local deployment)]
2026-02-08 05:06:58.350609 | orchestrator | skipping: Conditional result was False
2026-02-08 05:06:58.364356 |
2026-02-08 05:06:58.364518 | TASK [Fetch manager address]
2026-02-08 05:06:58.639273 | orchestrator | ok
2026-02-08 05:06:58.649571 |
2026-02-08 05:06:58.649704 | TASK [Set manager_host address]
2026-02-08 05:06:58.728738 | orchestrator | ok
2026-02-08 05:06:58.739577 |
2026-02-08 05:06:58.739695 | TASK [Run upgrade]
2026-02-08 05:06:59.422680 | orchestrator | + set -e
2026-02-08 05:06:59.422815 | orchestrator | + export MANAGER_VERSION=10.0.0-rc.1
2026-02-08 05:06:59.422833 | orchestrator | + MANAGER_VERSION=10.0.0-rc.1
2026-02-08 05:06:59.422849 | orchestrator | + CEPH_VERSION=reef
2026-02-08 05:06:59.422858 | orchestrator | + OPENSTACK_VERSION=2024.2
2026-02-08 05:06:59.422867 | orchestrator | + KOLLA_NAMESPACE=kolla/release
2026-02-08 05:06:59.422883 | orchestrator | + sh -c '/opt/configuration/scripts/upgrade-manager.sh 10.0.0-rc.1 reef 2024.2 kolla/release'
2026-02-08 05:06:59.431892 | orchestrator | + set -e
2026-02-08 05:06:59.432402 | orchestrator | + source /opt/configuration/scripts/include.sh
2026-02-08 05:06:59.432416 | orchestrator | ++ export INTERACTIVE=false
2026-02-08 05:06:59.432427 | orchestrator | ++ INTERACTIVE=false
2026-02-08 05:06:59.432434 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2026-02-08 05:06:59.432446 | orchestrator | ++ OSISM_APPLY_RETRY=1
2026-02-08 05:06:59.433157 | orchestrator | ++ docker inspect --format '{{ index .Config.Labels "org.opencontainers.image.version"}}' osism-ansible
2026-02-08 05:06:59.478971 | orchestrator | + OLD_MANAGER_VERSION=v0.20251130.0
2026-02-08 05:06:59.479277 | orchestrator | ++ docker inspect --format '{{ index .Config.Labels "de.osism.release.openstack"}}' kolla-ansible
2026-02-08 05:06:59.517021 | orchestrator |
2026-02-08 05:06:59.517109 | orchestrator | # UPGRADE MANAGER
2026-02-08 05:06:59.517125 | orchestrator |
2026-02-08 05:06:59.517132 | orchestrator | + OLD_OPENSTACK_VERSION=2024.2
2026-02-08 05:06:59.517142 | orchestrator | + echo
2026-02-08 05:06:59.517150 | orchestrator | + echo '# UPGRADE MANAGER'
2026-02-08 05:06:59.517160 | orchestrator | + echo
2026-02-08 05:06:59.517168 | orchestrator | + export MANAGER_VERSION=10.0.0-rc.1
2026-02-08 05:06:59.517177 | orchestrator | + MANAGER_VERSION=10.0.0-rc.1
2026-02-08 05:06:59.517185 | orchestrator | + CEPH_VERSION=reef
2026-02-08 05:06:59.517193 | orchestrator | + OPENSTACK_VERSION=2024.2
2026-02-08 05:06:59.517201 | orchestrator | + KOLLA_NAMESPACE=kolla/release
2026-02-08 05:06:59.517209 | orchestrator | + /opt/configuration/scripts/set-manager-version.sh 10.0.0-rc.1
2026-02-08 05:06:59.521863 | orchestrator | + set -e
2026-02-08 05:06:59.521966 | orchestrator | + VERSION=10.0.0-rc.1
2026-02-08 05:06:59.521991 | orchestrator | + sed -i 's/manager_version: .*/manager_version: 10.0.0-rc.1/g' /opt/configuration/environments/manager/configuration.yml
2026-02-08 05:06:59.529487 | orchestrator | + [[ 10.0.0-rc.1 != \l\a\t\e\s\t ]]
2026-02-08 05:06:59.529543 | orchestrator | + sed -i /ceph_version:/d /opt/configuration/environments/manager/configuration.yml
2026-02-08 05:06:59.535419 | orchestrator | + sed -i /openstack_version:/d /opt/configuration/environments/manager/configuration.yml
2026-02-08 05:06:59.540296 | orchestrator | + sh -c /opt/configuration/scripts/sync-configuration-repository.sh
2026-02-08 05:06:59.550298 | orchestrator | /opt/configuration ~
2026-02-08 05:06:59.550346 | orchestrator | + set -e
2026-02-08 05:06:59.550354 | orchestrator | + pushd /opt/configuration
2026-02-08 05:06:59.550361 | orchestrator | + [[ -e /opt/venv/bin/activate ]]
2026-02-08 05:06:59.550376 | orchestrator | + source /opt/venv/bin/activate
2026-02-08 05:06:59.551804 | orchestrator | ++ deactivate nondestructive
2026-02-08 05:06:59.551819 | orchestrator | ++ '[' -n '' ']'
2026-02-08 05:06:59.551825 | orchestrator | ++ '[' -n '' ']'
2026-02-08 05:06:59.551832 | orchestrator | ++ hash -r
2026-02-08 05:06:59.551880 | orchestrator | ++ '[' -n '' ']'
2026-02-08 05:06:59.551898 | orchestrator | ++ unset VIRTUAL_ENV
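The `set-manager-version.sh` trace above pins `manager_version` and, because the target is not `latest`, deletes any explicit `ceph_version` / `openstack_version` pins so they can be resolved elsewhere. A minimal, self-contained sketch of that logic, using the sed commands exactly as traced but operating on a temporary copy instead of `/opt/configuration/environments/manager/configuration.yml` (the sample old values `9.5.0` and `2024.2` come from the trace; `quincy` is a placeholder):

```shell
#!/bin/sh
set -e
VERSION="10.0.0-rc.1"

# Temporary stand-in for environments/manager/configuration.yml.
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
manager_version: 9.5.0
ceph_version: quincy
openstack_version: 2024.2
EOF

# Pin the new manager version.
sed -i "s/manager_version: .*/manager_version: ${VERSION}/g" "$cfg"

# For a pinned (non-latest) release, drop explicit component pins.
if [ "$VERSION" != "latest" ]; then
    sed -i '/ceph_version:/d' "$cfg"
    sed -i '/openstack_version:/d' "$cfg"
fi

cat "$cfg"
```

After running, only `manager_version: 10.0.0-rc.1` remains in the file, matching the three sed invocations in the trace.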
2026-02-08 05:06:59.551933 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT
2026-02-08 05:06:59.551944 | orchestrator | ++ '[' '!' nondestructive = nondestructive ']'
2026-02-08 05:06:59.552016 | orchestrator | ++ '[' linux-gnu = cygwin ']'
2026-02-08 05:06:59.552029 | orchestrator | ++ '[' linux-gnu = msys ']'
2026-02-08 05:06:59.552093 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv
2026-02-08 05:06:59.552107 | orchestrator | ++ VIRTUAL_ENV=/opt/venv
2026-02-08 05:06:59.552118 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2026-02-08 05:06:59.552157 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2026-02-08 05:06:59.552165 | orchestrator | ++ export PATH
2026-02-08 05:06:59.552173 | orchestrator | ++ '[' -n '' ']'
2026-02-08 05:06:59.552246 | orchestrator | ++ '[' -z '' ']'
2026-02-08 05:06:59.552259 | orchestrator | ++ _OLD_VIRTUAL_PS1=
2026-02-08 05:06:59.552287 | orchestrator | ++ PS1='(venv) '
2026-02-08 05:06:59.552295 | orchestrator | ++ export PS1
2026-02-08 05:06:59.552301 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) '
2026-02-08 05:06:59.552307 | orchestrator | ++ export VIRTUAL_ENV_PROMPT
2026-02-08 05:06:59.552667 | orchestrator | ++ hash -r
2026-02-08 05:06:59.552682 | orchestrator | + pip3 install --no-cache-dir python-gilt==1.2.3 requests Jinja2 PyYAML packaging
2026-02-08 05:07:00.730434 | orchestrator | Requirement already satisfied: python-gilt==1.2.3 in /opt/venv/lib/python3.12/site-packages (1.2.3)
2026-02-08 05:07:00.731335 | orchestrator | Requirement already satisfied: requests in /opt/venv/lib/python3.12/site-packages (2.32.5)
2026-02-08 05:07:00.732561 | orchestrator | Requirement already satisfied: Jinja2 in /opt/venv/lib/python3.12/site-packages (3.1.6)
2026-02-08 05:07:00.733900 | orchestrator | Requirement already satisfied: PyYAML in /opt/venv/lib/python3.12/site-packages (6.0.3)
2026-02-08 05:07:00.735125 | orchestrator | Requirement already satisfied: packaging in /opt/venv/lib/python3.12/site-packages (26.0)
2026-02-08 05:07:00.745219 | orchestrator | Requirement already satisfied: click in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (8.3.1)
2026-02-08 05:07:00.746975 | orchestrator | Requirement already satisfied: colorama in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (0.4.6)
2026-02-08 05:07:00.748069 | orchestrator | Requirement already satisfied: fasteners in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (0.20)
2026-02-08 05:07:00.750123 | orchestrator | Requirement already satisfied: sh in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (2.2.2)
2026-02-08 05:07:00.785372 | orchestrator | Requirement already satisfied: charset_normalizer<4,>=2 in /opt/venv/lib/python3.12/site-packages (from requests) (3.4.4)
2026-02-08 05:07:00.786842 | orchestrator | Requirement already satisfied: idna<4,>=2.5 in /opt/venv/lib/python3.12/site-packages (from requests) (3.11)
2026-02-08 05:07:00.788962 | orchestrator | Requirement already satisfied: urllib3<3,>=1.21.1 in /opt/venv/lib/python3.12/site-packages (from requests) (2.6.3)
2026-02-08 05:07:00.790352 | orchestrator | Requirement already satisfied: certifi>=2017.4.17 in /opt/venv/lib/python3.12/site-packages (from requests) (2026.1.4)
2026-02-08 05:07:00.794815 | orchestrator | Requirement already satisfied: MarkupSafe>=2.0 in /opt/venv/lib/python3.12/site-packages (from Jinja2) (3.0.3)
2026-02-08 05:07:01.045061 | orchestrator | ++ which gilt
2026-02-08 05:07:01.046122 | orchestrator | + GILT=/opt/venv/bin/gilt
2026-02-08 05:07:01.046135 | orchestrator | + /opt/venv/bin/gilt overlay
2026-02-08 05:07:01.304566 | orchestrator | osism.cfg-generics:
2026-02-08 05:07:01.415519 | orchestrator | - copied (v0.20251130.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/environments/manager/images.yml to /opt/configuration/environments/manager/
2026-02-08 05:07:01.416101 | orchestrator | - copied (v0.20251130.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/src/render-images.py to /opt/configuration/environments/manager/
2026-02-08 05:07:01.417283 | orchestrator | - copied (v0.20251130.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/src/set-versions.py to /opt/configuration/environments/
2026-02-08 05:07:01.417311 | orchestrator | - running `/opt/configuration/scripts/wrapper-gilt.sh render-images` in /opt/configuration/environments/manager/
2026-02-08 05:07:02.332211 | orchestrator | - running `rm render-images.py` in /opt/configuration/environments/manager/
2026-02-08 05:07:02.340293 | orchestrator | - running `/opt/configuration/scripts/wrapper-gilt.sh set-versions` in /opt/configuration/environments/
2026-02-08 05:07:02.784663 | orchestrator | - running `rm set-versions.py` in /opt/configuration/environments/
2026-02-08 05:07:02.835621 | orchestrator | ~
2026-02-08 05:07:02.835726 | orchestrator | + [[ -e /opt/venv/bin/activate ]]
2026-02-08 05:07:02.835739 | orchestrator | + deactivate
2026-02-08 05:07:02.835747 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']'
2026-02-08 05:07:02.835755 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2026-02-08 05:07:02.835761 | orchestrator | + export PATH
2026-02-08 05:07:02.835767 | orchestrator | + unset _OLD_VIRTUAL_PATH
2026-02-08 05:07:02.835773 | orchestrator | + '[' -n '' ']'
2026-02-08 05:07:02.835779 | orchestrator | + hash -r
2026-02-08 05:07:02.835785 | orchestrator | + '[' -n '' ']'
2026-02-08 05:07:02.835790 | orchestrator | + unset VIRTUAL_ENV
2026-02-08 05:07:02.835796 | orchestrator | + unset VIRTUAL_ENV_PROMPT
2026-02-08 05:07:02.835802 |
orchestrator | + '[' '!' '' = nondestructive ']' 2026-02-08 05:07:02.835807 | orchestrator | + unset -f deactivate 2026-02-08 05:07:02.835813 | orchestrator | + popd 2026-02-08 05:07:02.837526 | orchestrator | + [[ 10.0.0-rc.1 == \l\a\t\e\s\t ]] 2026-02-08 05:07:02.837582 | orchestrator | + /opt/configuration/scripts/set-kolla-namespace.sh kolla/release 2026-02-08 05:07:02.845564 | orchestrator | + set -e 2026-02-08 05:07:02.845658 | orchestrator | + NAMESPACE=kolla/release 2026-02-08 05:07:02.845688 | orchestrator | + sed -i 's#docker_namespace: .*#docker_namespace: kolla/release#g' /opt/configuration/inventory/group_vars/all/kolla.yml 2026-02-08 05:07:02.851042 | orchestrator | + sh -c /opt/configuration/scripts/sync-configuration-repository.sh 2026-02-08 05:07:02.854839 | orchestrator | /opt/configuration ~ 2026-02-08 05:07:02.855118 | orchestrator | + set -e 2026-02-08 05:07:02.855148 | orchestrator | + pushd /opt/configuration 2026-02-08 05:07:02.855161 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2026-02-08 05:07:02.855185 | orchestrator | + source /opt/venv/bin/activate 2026-02-08 05:07:02.855197 | orchestrator | ++ deactivate nondestructive 2026-02-08 05:07:02.855209 | orchestrator | ++ '[' -n '' ']' 2026-02-08 05:07:02.855220 | orchestrator | ++ '[' -n '' ']' 2026-02-08 05:07:02.855231 | orchestrator | ++ hash -r 2026-02-08 05:07:02.855242 | orchestrator | ++ '[' -n '' ']' 2026-02-08 05:07:02.855302 | orchestrator | ++ unset VIRTUAL_ENV 2026-02-08 05:07:02.855326 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT 2026-02-08 05:07:02.855346 | orchestrator | ++ '[' '!' 
nondestructive = nondestructive ']' 2026-02-08 05:07:02.855366 | orchestrator | ++ '[' linux-gnu = cygwin ']' 2026-02-08 05:07:02.855419 | orchestrator | ++ '[' linux-gnu = msys ']' 2026-02-08 05:07:02.855433 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv 2026-02-08 05:07:02.855450 | orchestrator | ++ VIRTUAL_ENV=/opt/venv 2026-02-08 05:07:02.855461 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-02-08 05:07:02.855476 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-02-08 05:07:02.855487 | orchestrator | ++ export PATH 2026-02-08 05:07:02.855498 | orchestrator | ++ '[' -n '' ']' 2026-02-08 05:07:02.855509 | orchestrator | ++ '[' -z '' ']' 2026-02-08 05:07:02.855520 | orchestrator | ++ _OLD_VIRTUAL_PS1= 2026-02-08 05:07:02.855530 | orchestrator | ++ PS1='(venv) ' 2026-02-08 05:07:02.855541 | orchestrator | ++ export PS1 2026-02-08 05:07:02.855552 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) ' 2026-02-08 05:07:02.855563 | orchestrator | ++ export VIRTUAL_ENV_PROMPT 2026-02-08 05:07:02.855574 | orchestrator | ++ hash -r 2026-02-08 05:07:02.855591 | orchestrator | + pip3 install --no-cache-dir python-gilt==1.2.3 requests Jinja2 PyYAML packaging 2026-02-08 05:07:03.381061 | orchestrator | Requirement already satisfied: python-gilt==1.2.3 in /opt/venv/lib/python3.12/site-packages (1.2.3) 2026-02-08 05:07:03.382226 | orchestrator | Requirement already satisfied: requests in /opt/venv/lib/python3.12/site-packages (2.32.5) 2026-02-08 05:07:03.383812 | orchestrator | Requirement already satisfied: Jinja2 in /opt/venv/lib/python3.12/site-packages (3.1.6) 2026-02-08 05:07:03.385017 | orchestrator | Requirement already satisfied: PyYAML in /opt/venv/lib/python3.12/site-packages (6.0.3) 2026-02-08 05:07:03.386195 | orchestrator | Requirement already satisfied: packaging in 
/opt/venv/lib/python3.12/site-packages (26.0) 2026-02-08 05:07:03.396437 | orchestrator | Requirement already satisfied: click in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (8.3.1) 2026-02-08 05:07:03.397959 | orchestrator | Requirement already satisfied: colorama in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (0.4.6) 2026-02-08 05:07:03.399057 | orchestrator | Requirement already satisfied: fasteners in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (0.20) 2026-02-08 05:07:03.400342 | orchestrator | Requirement already satisfied: sh in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (2.2.2) 2026-02-08 05:07:03.435236 | orchestrator | Requirement already satisfied: charset_normalizer<4,>=2 in /opt/venv/lib/python3.12/site-packages (from requests) (3.4.4) 2026-02-08 05:07:03.436694 | orchestrator | Requirement already satisfied: idna<4,>=2.5 in /opt/venv/lib/python3.12/site-packages (from requests) (3.11) 2026-02-08 05:07:03.438653 | orchestrator | Requirement already satisfied: urllib3<3,>=1.21.1 in /opt/venv/lib/python3.12/site-packages (from requests) (2.6.3) 2026-02-08 05:07:03.440054 | orchestrator | Requirement already satisfied: certifi>=2017.4.17 in /opt/venv/lib/python3.12/site-packages (from requests) (2026.1.4) 2026-02-08 05:07:03.444239 | orchestrator | Requirement already satisfied: MarkupSafe>=2.0 in /opt/venv/lib/python3.12/site-packages (from Jinja2) (3.0.3) 2026-02-08 05:07:03.671101 | orchestrator | ++ which gilt 2026-02-08 05:07:03.673972 | orchestrator | + GILT=/opt/venv/bin/gilt 2026-02-08 05:07:03.674043 | orchestrator | + /opt/venv/bin/gilt overlay 2026-02-08 05:07:03.866147 | orchestrator | osism.cfg-generics: 2026-02-08 05:07:03.943999 | orchestrator | - copied (v0.20251130.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/environments/manager/images.yml to /opt/configuration/environments/manager/ 2026-02-08 05:07:03.944098 | orchestrator | - copied 
(v0.20251130.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/src/render-images.py to /opt/configuration/environments/manager/ 2026-02-08 05:07:03.944300 | orchestrator | - copied (v0.20251130.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/src/set-versions.py to /opt/configuration/environments/ 2026-02-08 05:07:03.944367 | orchestrator | - running `/opt/configuration/scripts/wrapper-gilt.sh render-images` in /opt/configuration/environments/manager/ 2026-02-08 05:07:04.437812 | orchestrator | - running `rm render-images.py` in /opt/configuration/environments/manager/ 2026-02-08 05:07:04.447186 | orchestrator | - running `/opt/configuration/scripts/wrapper-gilt.sh set-versions` in /opt/configuration/environments/ 2026-02-08 05:07:04.756190 | orchestrator | - running `rm set-versions.py` in /opt/configuration/environments/ 2026-02-08 05:07:04.811736 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2026-02-08 05:07:04.811861 | orchestrator | + deactivate 2026-02-08 05:07:04.811906 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']' 2026-02-08 05:07:04.811922 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-02-08 05:07:04.811934 | orchestrator | + export PATH 2026-02-08 05:07:04.811946 | orchestrator | + unset _OLD_VIRTUAL_PATH 2026-02-08 05:07:04.811958 | orchestrator | + '[' -n '' ']' 2026-02-08 05:07:04.811969 | orchestrator | + hash -r 2026-02-08 05:07:04.811980 | orchestrator | + '[' -n '' ']' 2026-02-08 05:07:04.811992 | orchestrator | + unset VIRTUAL_ENV 2026-02-08 05:07:04.812004 | orchestrator | + unset VIRTUAL_ENV_PROMPT 2026-02-08 05:07:04.812029 | orchestrator | ~ 2026-02-08 05:07:04.812041 | orchestrator | + '[' '!' 
'' = nondestructive ']' 2026-02-08 05:07:04.812053 | orchestrator | + unset -f deactivate 2026-02-08 05:07:04.812064 | orchestrator | + popd 2026-02-08 05:07:04.814050 | orchestrator | ++ semver v0.20251130.0 6.0.0 2026-02-08 05:07:04.873099 | orchestrator | + [[ -1 -ge 0 ]] 2026-02-08 05:07:04.874354 | orchestrator | ++ semver 10.0.0-rc.1 10.0.0-0 2026-02-08 05:07:04.971190 | orchestrator | + [[ 1 -ge 0 ]] 2026-02-08 05:07:04.971273 | orchestrator | + sed -i '/^om_enable_rabbitmq_high_availability:/d' /opt/configuration/environments/kolla/configuration.yml 2026-02-08 05:07:04.976802 | orchestrator | + sed -i '/^om_enable_rabbitmq_quorum_queues:/d' /opt/configuration/environments/kolla/configuration.yml 2026-02-08 05:07:04.984486 | orchestrator | +++ semver v0.20251130.0 9.5.0 2026-02-08 05:07:05.043371 | orchestrator | ++ '[' -1 -le 0 ']' 2026-02-08 05:07:05.044251 | orchestrator | +++ semver 10.0.0-rc.1 10.0.0-0 2026-02-08 05:07:05.137210 | orchestrator | ++ '[' 1 -ge 0 ']' 2026-02-08 05:07:05.137315 | orchestrator | ++ echo true 2026-02-08 05:07:05.137813 | orchestrator | + MANAGER_UPGRADE_CROSSES_10=true 2026-02-08 05:07:05.138755 | orchestrator | +++ semver 2024.2 2024.2 2026-02-08 05:07:05.206081 | orchestrator | ++ '[' 0 -le 0 ']' 2026-02-08 05:07:05.207022 | orchestrator | +++ semver 2024.2 2025.1 2026-02-08 05:07:05.264314 | orchestrator | ++ '[' -1 -ge 0 ']' 2026-02-08 05:07:05.264417 | orchestrator | ++ echo false 2026-02-08 05:07:05.264954 | orchestrator | + OPENSTACK_UPGRADE_CROSSES_2025=false 2026-02-08 05:07:05.265010 | orchestrator | + [[ true == \t\r\u\e ]] 2026-02-08 05:07:05.265016 | orchestrator | + echo 'om_rpc_vhost: openstack' 2026-02-08 05:07:05.265028 | orchestrator | + echo 'om_notify_vhost: openstack' 2026-02-08 05:07:05.265034 | orchestrator | + sed -i 's#manager_listener_broker_vhost: .*#manager_listener_broker_vhost: /openstack#g' /opt/configuration/environments/manager/configuration.yml 2026-02-08 05:07:05.270442 | orchestrator | + 
echo 'export RABBITMQ3TO4=true' 2026-02-08 05:07:05.270502 | orchestrator | + sudo tee -a /opt/manager-vars.sh 2026-02-08 05:07:05.286847 | orchestrator | export RABBITMQ3TO4=true 2026-02-08 05:07:05.292155 | orchestrator | + osism update manager 2026-02-08 05:07:10.996254 | orchestrator | Collecting uv 2026-02-08 05:07:11.082746 | orchestrator | Downloading uv-0.10.0-py3-none-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (11 kB) 2026-02-08 05:07:11.101287 | orchestrator | Downloading uv-0.10.0-py3-none-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (22.8 MB) 2026-02-08 05:07:11.864860 | orchestrator | ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 22.8/22.8 MB 34.4 MB/s eta 0:00:00 2026-02-08 05:07:11.918808 | orchestrator | Installing collected packages: uv 2026-02-08 05:07:12.390638 | orchestrator | Successfully installed uv-0.10.0 2026-02-08 05:07:12.970860 | orchestrator | Resolved 11 packages in 265ms 2026-02-08 05:07:12.991917 | orchestrator | Downloading cryptography (4.2MiB) 2026-02-08 05:07:13.002817 | orchestrator | Downloading ansible-core (2.1MiB) 2026-02-08 05:07:13.006881 | orchestrator | Downloading netaddr (2.2MiB) 2026-02-08 05:07:13.008128 | orchestrator | Downloading ansible (54.5MiB) 2026-02-08 05:07:13.354007 | orchestrator | Downloaded netaddr 2026-02-08 05:07:13.494513 | orchestrator | Downloaded cryptography 2026-02-08 05:07:13.509180 | orchestrator | Downloaded ansible-core 2026-02-08 05:07:19.530822 | orchestrator | Downloaded ansible 2026-02-08 05:07:19.531885 | orchestrator | Prepared 11 packages in 6.55s 2026-02-08 05:07:19.991581 | orchestrator | Installed 11 packages in 459ms 2026-02-08 05:07:19.991684 | orchestrator | + ansible==11.11.0 2026-02-08 05:07:19.991692 | orchestrator | + ansible-core==2.18.13 2026-02-08 05:07:19.991699 | orchestrator | + cffi==2.0.0 2026-02-08 05:07:19.991705 | orchestrator | + cryptography==46.0.4 2026-02-08 05:07:19.991711 | orchestrator | + jinja2==3.1.6 2026-02-08 05:07:19.991716 | orchestrator | 
+ markupsafe==3.0.3 2026-02-08 05:07:19.991721 | orchestrator | + netaddr==1.3.0 2026-02-08 05:07:19.991726 | orchestrator | + packaging==26.0 2026-02-08 05:07:19.991731 | orchestrator | + pycparser==3.0 2026-02-08 05:07:19.991736 | orchestrator | + pyyaml==6.0.3 2026-02-08 05:07:19.991742 | orchestrator | + resolvelib==1.0.1 2026-02-08 05:07:21.084887 | orchestrator | Cloning into '/home/dragon/.ansible/tmp/ansible-local-200424dm21qntr/tmpdswm5pwq/ansible-collection-servicesmq7g73zn'... 2026-02-08 05:07:22.571470 | orchestrator | Your branch is up to date with 'origin/main'. 2026-02-08 05:07:22.571551 | orchestrator | Already on 'main' 2026-02-08 05:07:23.065301 | orchestrator | Starting galaxy collection install process 2026-02-08 05:07:23.065467 | orchestrator | Process install dependency map 2026-02-08 05:07:23.065486 | orchestrator | Starting collection install process 2026-02-08 05:07:23.065499 | orchestrator | Installing 'osism.services:999.0.0' to '/home/dragon/.ansible/collections/ansible_collections/osism/services' 2026-02-08 05:07:23.065515 | orchestrator | Created collection for osism.services:999.0.0 at /home/dragon/.ansible/collections/ansible_collections/osism/services 2026-02-08 05:07:23.065534 | orchestrator | osism.services:999.0.0 was installed successfully 2026-02-08 05:07:23.555271 | orchestrator | Cloning into '/home/dragon/.ansible/tmp/ansible-local-200442f7ji2fnl/tmpkw0pxdod/ansible-playbooks-managerwjmfhopl'... 2026-02-08 05:07:24.185239 | orchestrator | Your branch is up to date with 'origin/main'. 
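The `semver` checks traced a few records above (comparing `v0.20251130.0` against `9.5.0`, and `10.0.0-rc.1` against a `10.0.0-0` pre-release floor, to set `MANAGER_UPGRADE_CROSSES_10=true`, and `2024.2` against `2025.1` for `OPENSTACK_UPGRADE_CROSSES_2025=false`) amount to asking whether the upgrade crosses a major boundary. A minimal sketch, assuming a numeric-major comparison is sufficient here; the job's actual `semver` helper script is not shown in this log:

```python
def crosses_major(old: str, new: str, boundary: int) -> bool:
    """Does upgrading from `old` to `new` cross the given major version
    boundary? Simplified stand-in for the job's semver gating; ignores
    pre-release ordering, which the real helper handles."""
    def major(v: str) -> int:
        # "v0.20251130.0" -> 0, "10.0.0-rc.1" -> 10
        return int(v.lstrip("v").split(".")[0].split("-")[0])
    return major(old) < boundary and major(new) >= boundary
```

When the manager gate is true, the trace shows the job appending `om_rpc_vhost`/`om_notify_vhost` overrides, rewriting `manager_listener_broker_vhost`, and exporting `RABBITMQ3TO4=true` for the RabbitMQ 3-to-4 migration.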
2026-02-08 05:07:24.185337 | orchestrator | Already on 'main' 2026-02-08 05:07:24.456323 | orchestrator | Starting galaxy collection install process 2026-02-08 05:07:24.456495 | orchestrator | Process install dependency map 2026-02-08 05:07:24.456515 | orchestrator | Starting collection install process 2026-02-08 05:07:24.456529 | orchestrator | Installing 'osism.manager:999.0.0' to '/home/dragon/.ansible/collections/ansible_collections/osism/manager' 2026-02-08 05:07:24.456542 | orchestrator | Created collection for osism.manager:999.0.0 at /home/dragon/.ansible/collections/ansible_collections/osism/manager 2026-02-08 05:07:24.456554 | orchestrator | osism.manager:999.0.0 was installed successfully 2026-02-08 05:07:25.094177 | orchestrator | [WARNING]: Invalid characters were found in group names but not replaced, use 2026-02-08 05:07:25.094258 | orchestrator | -vvvv to see details 2026-02-08 05:07:25.544631 | orchestrator | 2026-02-08 05:07:25.544736 | orchestrator | PLAY [Apply role manager] ****************************************************** 2026-02-08 05:07:25.544754 | orchestrator | 2026-02-08 05:07:25.544766 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-02-08 05:07:29.311205 | orchestrator | ok: [testbed-manager] 2026-02-08 05:07:29.311296 | orchestrator | 2026-02-08 05:07:29.311306 | orchestrator | TASK [osism.services.manager : Include install tasks] ************************** 2026-02-08 05:07:29.377030 | orchestrator | included: /home/dragon/.ansible/collections/ansible_collections/osism/services/roles/manager/tasks/install-Debian-family.yml for testbed-manager 2026-02-08 05:07:29.377106 | orchestrator | 2026-02-08 05:07:29.377132 | orchestrator | TASK [osism.services.manager : Install required packages] ********************** 2026-02-08 05:07:31.207292 | orchestrator | ok: [testbed-manager] 2026-02-08 05:07:31.207474 | orchestrator | 2026-02-08 05:07:31.207496 | orchestrator | TASK 
[osism.services.manager : Gather variables for each operating system] ***** 2026-02-08 05:07:31.280013 | orchestrator | ok: [testbed-manager] 2026-02-08 05:07:31.280192 | orchestrator | 2026-02-08 05:07:31.280213 | orchestrator | TASK [osism.services.manager : Include config tasks] *************************** 2026-02-08 05:07:31.350311 | orchestrator | included: /home/dragon/.ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config.yml for testbed-manager 2026-02-08 05:07:31.350444 | orchestrator | 2026-02-08 05:07:31.350468 | orchestrator | TASK [osism.services.manager : Create required directories] ******************** 2026-02-08 05:07:35.423359 | orchestrator | ok: [testbed-manager] => (item=/opt/ansible) 2026-02-08 05:07:35.423477 | orchestrator | ok: [testbed-manager] => (item=/opt/archive) 2026-02-08 05:07:35.423487 | orchestrator | ok: [testbed-manager] => (item=/opt/manager/configuration) 2026-02-08 05:07:35.423504 | orchestrator | ok: [testbed-manager] => (item=/opt/manager/data) 2026-02-08 05:07:35.423511 | orchestrator | ok: [testbed-manager] => (item=/opt/manager) 2026-02-08 05:07:35.423517 | orchestrator | ok: [testbed-manager] => (item=/opt/manager/secrets) 2026-02-08 05:07:35.423524 | orchestrator | ok: [testbed-manager] => (item=/opt/ansible/secrets) 2026-02-08 05:07:35.423531 | orchestrator | ok: [testbed-manager] => (item=/opt/state) 2026-02-08 05:07:35.423537 | orchestrator | 2026-02-08 05:07:35.423545 | orchestrator | TASK [osism.services.manager : Copy all environment file] ********************** 2026-02-08 05:07:36.502962 | orchestrator | ok: [testbed-manager] 2026-02-08 05:07:36.504022 | orchestrator | 2026-02-08 05:07:36.504103 | orchestrator | TASK [osism.services.manager : Copy client environment file] ******************* 2026-02-08 05:07:37.427300 | orchestrator | ok: [testbed-manager] 2026-02-08 05:07:37.427450 | orchestrator | 2026-02-08 05:07:37.427469 | orchestrator | TASK [osism.services.manager : Include ara 
config tasks] *********************** 2026-02-08 05:07:37.524279 | orchestrator | included: /home/dragon/.ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ara.yml for testbed-manager 2026-02-08 05:07:37.524409 | orchestrator | 2026-02-08 05:07:37.524438 | orchestrator | TASK [osism.services.manager : Copy ARA environment files] ********************* 2026-02-08 05:07:39.331807 | orchestrator | ok: [testbed-manager] => (item=ara) 2026-02-08 05:07:39.331896 | orchestrator | ok: [testbed-manager] => (item=ara-server) 2026-02-08 05:07:39.331908 | orchestrator | 2026-02-08 05:07:39.331918 | orchestrator | TASK [osism.services.manager : Copy MariaDB environment file] ****************** 2026-02-08 05:07:40.259974 | orchestrator | ok: [testbed-manager] 2026-02-08 05:07:40.260046 | orchestrator | 2026-02-08 05:07:40.260054 | orchestrator | TASK [osism.services.manager : Include vault config tasks] ********************* 2026-02-08 05:07:40.314693 | orchestrator | skipping: [testbed-manager] 2026-02-08 05:07:40.314771 | orchestrator | 2026-02-08 05:07:40.314782 | orchestrator | TASK [osism.services.manager : Include frontend config tasks] ****************** 2026-02-08 05:07:40.403795 | orchestrator | included: /home/dragon/.ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-frontend.yml for testbed-manager 2026-02-08 05:07:40.403913 | orchestrator | 2026-02-08 05:07:40.403931 | orchestrator | TASK [osism.services.manager : Copy frontend environment file] ***************** 2026-02-08 05:07:41.243716 | orchestrator | ok: [testbed-manager] 2026-02-08 05:07:41.243789 | orchestrator | 2026-02-08 05:07:41.243797 | orchestrator | TASK [osism.services.manager : Include ansible config tasks] ******************* 2026-02-08 05:07:41.310450 | orchestrator | included: /home/dragon/.ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ansible.yml for testbed-manager 2026-02-08 05:07:41.310554 | 
orchestrator | 2026-02-08 05:07:41.310573 | orchestrator | TASK [osism.services.manager : Copy private ssh keys] ************************** 2026-02-08 05:07:43.132824 | orchestrator | ok: [testbed-manager] => (item=None) 2026-02-08 05:07:43.132939 | orchestrator | ok: [testbed-manager] => (item=None) 2026-02-08 05:07:43.132957 | orchestrator | ok: [testbed-manager] 2026-02-08 05:07:43.132972 | orchestrator | 2026-02-08 05:07:43.132984 | orchestrator | TASK [osism.services.manager : Copy ansible environment file] ****************** 2026-02-08 05:07:44.045798 | orchestrator | ok: [testbed-manager] 2026-02-08 05:07:44.045893 | orchestrator | 2026-02-08 05:07:44.045908 | orchestrator | TASK [osism.services.manager : Include netbox config tasks] ******************** 2026-02-08 05:07:44.106564 | orchestrator | skipping: [testbed-manager] 2026-02-08 05:07:44.106662 | orchestrator | 2026-02-08 05:07:44.106678 | orchestrator | TASK [osism.services.manager : Include celery config tasks] ******************** 2026-02-08 05:07:44.213525 | orchestrator | included: /home/dragon/.ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-celery.yml for testbed-manager 2026-02-08 05:07:44.213599 | orchestrator | 2026-02-08 05:07:44.213610 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_watches] **************** 2026-02-08 05:07:44.888770 | orchestrator | ok: [testbed-manager] 2026-02-08 05:07:44.888870 | orchestrator | 2026-02-08 05:07:44.888887 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_instances] ************** 2026-02-08 05:07:45.485322 | orchestrator | ok: [testbed-manager] 2026-02-08 05:07:45.485487 | orchestrator | 2026-02-08 05:07:45.485506 | orchestrator | TASK [osism.services.manager : Copy celery environment files] ****************** 2026-02-08 05:07:47.378714 | orchestrator | ok: [testbed-manager] => (item=conductor) 2026-02-08 05:07:47.378812 | orchestrator | ok: [testbed-manager] => 
(item=openstack) 2026-02-08 05:07:47.378826 | orchestrator | 2026-02-08 05:07:47.378839 | orchestrator | TASK [osism.services.manager : Copy listener environment file] ***************** 2026-02-08 05:07:48.522212 | orchestrator | changed: [testbed-manager] 2026-02-08 05:07:48.575744 | orchestrator | 2026-02-08 05:07:48.575831 | orchestrator | TASK [osism.services.manager : Check for conductor.yml] ************************ 2026-02-08 05:07:49.091311 | orchestrator | ok: [testbed-manager] 2026-02-08 05:07:49.091435 | orchestrator | 2026-02-08 05:07:49.091446 | orchestrator | TASK [osism.services.manager : Copy conductor configuration file] ************** 2026-02-08 05:07:49.660480 | orchestrator | ok: [testbed-manager] 2026-02-08 05:07:49.660580 | orchestrator | 2026-02-08 05:07:49.660618 | orchestrator | TASK [osism.services.manager : Copy empty conductor configuration file] ******** 2026-02-08 05:07:49.715153 | orchestrator | skipping: [testbed-manager] 2026-02-08 05:07:49.715261 | orchestrator | 2026-02-08 05:07:49.715284 | orchestrator | TASK [osism.services.manager : Include wrapper config tasks] ******************* 2026-02-08 05:07:49.801737 | orchestrator | included: /home/dragon/.ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-wrapper.yml for testbed-manager 2026-02-08 05:07:49.801836 | orchestrator | 2026-02-08 05:07:49.801851 | orchestrator | TASK [osism.services.manager : Include wrapper vars file] ********************** 2026-02-08 05:07:49.853444 | orchestrator | ok: [testbed-manager] 2026-02-08 05:07:49.853534 | orchestrator | 2026-02-08 05:07:49.853549 | orchestrator | TASK [osism.services.manager : Copy wrapper scripts] *************************** 2026-02-08 05:07:52.670429 | orchestrator | ok: [testbed-manager] => (item=osism) 2026-02-08 05:07:52.670534 | orchestrator | ok: [testbed-manager] => (item=osism-update-docker) 2026-02-08 05:07:52.670550 | orchestrator | ok: [testbed-manager] => (item=osism-update-manager) 
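The mix of `ok` and `changed` results in the config tasks above (for example, "Copy listener environment file: changed" next to many `ok` copies) reflects Ansible's idempotence: a file is rewritten, and reported `changed`, only when its rendered content differs from what is already on disk. A simplified stand-in for that behavior, not the actual `copy`/`template` module:

```python
from pathlib import Path

def copy_if_changed(src: Path, dest: Path) -> str:
    """Write dest from src only when the content differs; return the
    Ansible-style result string ('ok' or 'changed')."""
    data = src.read_bytes()
    if dest.exists() and dest.read_bytes() == data:
        return "ok"  # already up to date, nothing written
    dest.write_bytes(data)
    return "changed"
```

This is why rerunning the same play against an already-configured manager yields mostly `ok` lines, as seen throughout this section.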
2026-02-08 05:07:52.670563 | orchestrator | 2026-02-08 05:07:52.670576 | orchestrator | TASK [osism.services.manager : Copy cilium wrapper script] ********************* 2026-02-08 05:07:53.696999 | orchestrator | ok: [testbed-manager] 2026-02-08 05:07:53.697832 | orchestrator | 2026-02-08 05:07:53.697848 | orchestrator | TASK [osism.services.manager : Copy hubble wrapper script] ********************* 2026-02-08 05:07:54.712808 | orchestrator | ok: [testbed-manager] 2026-02-08 05:07:54.712914 | orchestrator | 2026-02-08 05:07:54.712933 | orchestrator | TASK [osism.services.manager : Copy flux wrapper script] *********************** 2026-02-08 05:07:55.614704 | orchestrator | ok: [testbed-manager] 2026-02-08 05:07:55.614811 | orchestrator | 2026-02-08 05:07:55.614829 | orchestrator | TASK [osism.services.manager : Include scripts config tasks] ******************* 2026-02-08 05:07:55.700109 | orchestrator | included: /home/dragon/.ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-scripts.yml for testbed-manager 2026-02-08 05:07:55.700204 | orchestrator | 2026-02-08 05:07:55.700221 | orchestrator | TASK [osism.services.manager : Include scripts vars file] ********************** 2026-02-08 05:07:55.763169 | orchestrator | ok: [testbed-manager] 2026-02-08 05:07:55.763271 | orchestrator | 2026-02-08 05:07:55.763286 | orchestrator | TASK [osism.services.manager : Copy scripts] *********************************** 2026-02-08 05:07:56.757855 | orchestrator | ok: [testbed-manager] => (item=osism-include) 2026-02-08 05:07:56.757948 | orchestrator | 2026-02-08 05:07:56.757963 | orchestrator | TASK [osism.services.manager : Include service tasks] ************************** 2026-02-08 05:07:56.838194 | orchestrator | included: /home/dragon/.ansible/collections/ansible_collections/osism/services/roles/manager/tasks/service.yml for testbed-manager 2026-02-08 05:07:56.838281 | orchestrator | 2026-02-08 05:07:56.838297 | orchestrator | TASK 
[osism.services.manager : Copy manager systemd unit file] ***************** 2026-02-08 05:07:57.836187 | orchestrator | ok: [testbed-manager] 2026-02-08 05:07:57.836286 | orchestrator | 2026-02-08 05:07:57.836304 | orchestrator | TASK [osism.services.manager : Create traefik external network] **************** 2026-02-08 05:07:58.927685 | orchestrator | ok: [testbed-manager] 2026-02-08 05:07:58.928859 | orchestrator | 2026-02-08 05:07:58.928902 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb < 11.0.0] *** 2026-02-08 05:07:59.016322 | orchestrator | skipping: [testbed-manager] 2026-02-08 05:07:59.016480 | orchestrator | 2026-02-08 05:07:59.016500 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb >= 11.0.0] *** 2026-02-08 05:07:59.069761 | orchestrator | ok: [testbed-manager] 2026-02-08 05:07:59.069898 | orchestrator | 2026-02-08 05:07:59.069931 | orchestrator | TASK [osism.services.manager : Copy docker-compose.yml file] ******************* 2026-02-08 05:08:00.291082 | orchestrator | changed: [testbed-manager] 2026-02-08 05:08:00.291171 | orchestrator | 2026-02-08 05:08:00.291183 | orchestrator | TASK [osism.services.manager : Pull container images] ************************** 2026-02-08 05:09:07.444613 | orchestrator | changed: [testbed-manager] 2026-02-08 05:09:07.444708 | orchestrator | 2026-02-08 05:09:07.444721 | orchestrator | TASK [osism.services.manager : Stop and disable old service docker-compose@manager] *** 2026-02-08 05:09:08.743633 | orchestrator | ok: [testbed-manager] 2026-02-08 05:09:08.743738 | orchestrator | 2026-02-08 05:09:08.743754 | orchestrator | TASK [osism.services.manager : Do a manual start of the manager service] ******* 2026-02-08 05:09:08.809349 | orchestrator | skipping: [testbed-manager] 2026-02-08 05:09:08.809440 | orchestrator | 2026-02-08 05:09:08.809454 | orchestrator | TASK [osism.services.manager : Manage manager service] ************************* 2026-02-08 
05:09:09.645760 | orchestrator | ok: [testbed-manager] 2026-02-08 05:09:09.645862 | orchestrator | 2026-02-08 05:09:09.645883 | orchestrator | TASK [osism.services.manager : Register that manager service was started] ****** 2026-02-08 05:09:09.700132 | orchestrator | skipping: [testbed-manager] 2026-02-08 05:09:09.700225 | orchestrator | 2026-02-08 05:09:09.700238 | orchestrator | TASK [osism.services.manager : Flush handlers] ********************************* 2026-02-08 05:09:09.700249 | orchestrator | 2026-02-08 05:09:09.700258 | orchestrator | RUNNING HANDLER [osism.services.manager : Restart manager service] ************* 2026-02-08 05:09:25.509510 | orchestrator | changed: [testbed-manager] 2026-02-08 05:09:25.509648 | orchestrator | 2026-02-08 05:09:25.509669 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for manager service to start] *** 2026-02-08 05:10:25.558925 | orchestrator | Pausing for 60 seconds 2026-02-08 05:10:25.559106 | orchestrator | changed: [testbed-manager] 2026-02-08 05:10:25.559132 | orchestrator | 2026-02-08 05:10:25.559155 | orchestrator | RUNNING HANDLER [osism.services.manager : Register that manager service was restarted] *** 2026-02-08 05:10:25.595858 | orchestrator | ok: [testbed-manager] 2026-02-08 05:10:25.595948 | orchestrator | 2026-02-08 05:10:25.595963 | orchestrator | RUNNING HANDLER [osism.services.manager : Ensure that all containers are up] *** 2026-02-08 05:10:29.151103 | orchestrator | changed: [testbed-manager] 2026-02-08 05:10:29.151223 | orchestrator | 2026-02-08 05:10:29.151240 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for an healthy manager service] *** 2026-02-08 05:11:31.744966 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (50 retries left). 2026-02-08 05:11:31.745084 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (49 retries left). 
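The "Wait for an healthy manager service" handler traced here retries its health check up to 50 times, emitting `FAILED - RETRYING` as the counter counts down until the check passes. The underlying pattern is a plain poll loop; the real task is an Ansible `retries`/`until` loop, and the delay value below is an assumption since the interval is not shown in the log:

```python
import time

def wait_until_healthy(check, retries: int = 50, delay: float = 5.0) -> bool:
    """Poll `check` up to `retries` times, sleeping `delay` seconds
    between attempts; return True as soon as it passes."""
    for _ in range(retries):
        if check():
            return True
        time.sleep(delay)
    return False
```

In the trace above, the check succeeds after three attempts (50, 49, 48 retries left), so the handler reports `changed` and the play continues.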
2026-02-08 05:11:31.745101 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (48 retries left). 2026-02-08 05:11:31.745114 | orchestrator | changed: [testbed-manager] 2026-02-08 05:11:31.745129 | orchestrator | 2026-02-08 05:11:31.745141 | orchestrator | RUNNING HANDLER [osism.services.manager : Copy osismclient bash completion script] *** 2026-02-08 05:11:42.885661 | orchestrator | changed: [testbed-manager] 2026-02-08 05:11:42.885771 | orchestrator | 2026-02-08 05:11:42.885786 | orchestrator | TASK [osism.services.manager : Include initialize tasks] *********************** 2026-02-08 05:11:42.977861 | orchestrator | included: /home/dragon/.ansible/collections/ansible_collections/osism/services/roles/manager/tasks/initialize.yml for testbed-manager 2026-02-08 05:11:42.977989 | orchestrator | 2026-02-08 05:11:42.978007 | orchestrator | TASK [osism.services.manager : Flush handlers] ********************************* 2026-02-08 05:11:42.978077 | orchestrator | 2026-02-08 05:11:42.978089 | orchestrator | TASK [osism.services.manager : Include vault initialize tasks] ***************** 2026-02-08 05:11:43.054432 | orchestrator | skipping: [testbed-manager] 2026-02-08 05:11:43.054560 | orchestrator | 2026-02-08 05:11:43.054587 | orchestrator | TASK [osism.services.manager : Include version verification tasks] ************* 2026-02-08 05:11:43.136789 | orchestrator | included: /home/dragon/.ansible/collections/ansible_collections/osism/services/roles/manager/tasks/verify-versions.yml for testbed-manager 2026-02-08 05:11:43.136886 | orchestrator | 2026-02-08 05:11:43.136923 | orchestrator | TASK [osism.services.manager : Deploy service manager version check script] **** 2026-02-08 05:11:44.230486 | orchestrator | changed: [testbed-manager] 2026-02-08 05:11:44.230575 | orchestrator | 2026-02-08 05:11:44.230589 | orchestrator | TASK [osism.services.manager : Execute service manager version check] ********** 2026-02-08 05:11:47.536961 
| orchestrator | ok: [testbed-manager] 2026-02-08 05:11:47.537080 | orchestrator | 2026-02-08 05:11:47.537098 | orchestrator | TASK [osism.services.manager : Display version check results] ****************** 2026-02-08 05:11:47.621625 | orchestrator | ok: [testbed-manager] => { 2026-02-08 05:11:47.621715 | orchestrator | "version_check_result.stdout_lines": [ 2026-02-08 05:11:47.621731 | orchestrator | "=== OSISM Container Version Check ===", 2026-02-08 05:11:47.621742 | orchestrator | "Checking running containers against expected versions...", 2026-02-08 05:11:47.621755 | orchestrator | "", 2026-02-08 05:11:47.621766 | orchestrator | "Checking service: inventory_reconciler (Inventory Reconciler Service)", 2026-02-08 05:11:47.621777 | orchestrator | " Expected: registry.osism.tech/osism/inventory-reconciler:0.20251208.0", 2026-02-08 05:11:47.621789 | orchestrator | " Enabled: true", 2026-02-08 05:11:47.621801 | orchestrator | " Running: registry.osism.tech/osism/inventory-reconciler:0.20251208.0", 2026-02-08 05:11:47.621812 | orchestrator | " Status: ✅ MATCH", 2026-02-08 05:11:47.621823 | orchestrator | "", 2026-02-08 05:11:47.621834 | orchestrator | "Checking service: osism-ansible (OSISM Ansible Service)", 2026-02-08 05:11:47.621846 | orchestrator | " Expected: registry.osism.tech/osism/osism-ansible:0.20251208.0", 2026-02-08 05:11:47.621857 | orchestrator | " Enabled: true", 2026-02-08 05:11:47.621868 | orchestrator | " Running: registry.osism.tech/osism/osism-ansible:0.20251208.0", 2026-02-08 05:11:47.621879 | orchestrator | " Status: ✅ MATCH", 2026-02-08 05:11:47.621889 | orchestrator | "", 2026-02-08 05:11:47.621900 | orchestrator | "Checking service: osism-kubernetes (Osism-Kubernetes Service)", 2026-02-08 05:11:47.621911 | orchestrator | " Expected: registry.osism.tech/osism/osism-kubernetes:0.20251208.0", 2026-02-08 05:11:47.621922 | orchestrator | " Enabled: true", 2026-02-08 05:11:47.621936 | orchestrator | " Running: 
registry.osism.tech/osism/osism-kubernetes:0.20251208.0", 2026-02-08 05:11:47.621954 | orchestrator | " Status: ✅ MATCH", 2026-02-08 05:11:47.621972 | orchestrator | "", 2026-02-08 05:11:47.621990 | orchestrator | "Checking service: ceph-ansible (Ceph-Ansible Service)", 2026-02-08 05:11:47.622009 | orchestrator | " Expected: registry.osism.tech/osism/ceph-ansible:0.20251208.0", 2026-02-08 05:11:47.622099 | orchestrator | " Enabled: true", 2026-02-08 05:11:47.622119 | orchestrator | " Running: registry.osism.tech/osism/ceph-ansible:0.20251208.0", 2026-02-08 05:11:47.622130 | orchestrator | " Status: ✅ MATCH", 2026-02-08 05:11:47.622141 | orchestrator | "", 2026-02-08 05:11:47.622152 | orchestrator | "Checking service: kolla-ansible (Kolla-Ansible Service)", 2026-02-08 05:11:47.622163 | orchestrator | " Expected: registry.osism.tech/osism/kolla-ansible:0.20251208.0", 2026-02-08 05:11:47.622174 | orchestrator | " Enabled: true", 2026-02-08 05:11:47.622185 | orchestrator | " Running: registry.osism.tech/osism/kolla-ansible:0.20251208.0", 2026-02-08 05:11:47.622196 | orchestrator | " Status: ✅ MATCH", 2026-02-08 05:11:47.622206 | orchestrator | "", 2026-02-08 05:11:47.622217 | orchestrator | "Checking service: osismclient (OSISM Client)", 2026-02-08 05:11:47.622253 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251208.0", 2026-02-08 05:11:47.622264 | orchestrator | " Enabled: true", 2026-02-08 05:11:47.622275 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251208.0", 2026-02-08 05:11:47.622286 | orchestrator | " Status: ✅ MATCH", 2026-02-08 05:11:47.622296 | orchestrator | "", 2026-02-08 05:11:47.622307 | orchestrator | "Checking service: ara-server (ARA Server)", 2026-02-08 05:11:47.622318 | orchestrator | " Expected: registry.osism.tech/osism/ara-server:1.7.3", 2026-02-08 05:11:47.622329 | orchestrator | " Enabled: true", 2026-02-08 05:11:47.622340 | orchestrator | " Running: registry.osism.tech/osism/ara-server:1.7.3", 2026-02-08 
05:11:47.622350 | orchestrator | " Status: ✅ MATCH", 2026-02-08 05:11:47.622361 | orchestrator | "", 2026-02-08 05:11:47.622372 | orchestrator | "Checking service: mariadb (MariaDB for ARA)", 2026-02-08 05:11:47.622425 | orchestrator | " Expected: registry.osism.tech/dockerhub/library/mariadb:11.8.4", 2026-02-08 05:11:47.622437 | orchestrator | " Enabled: true", 2026-02-08 05:11:47.622460 | orchestrator | " Running: registry.osism.tech/dockerhub/library/mariadb:11.8.4", 2026-02-08 05:11:47.622472 | orchestrator | " Status: ✅ MATCH", 2026-02-08 05:11:47.622483 | orchestrator | "", 2026-02-08 05:11:47.622493 | orchestrator | "Checking service: frontend (OSISM Frontend)", 2026-02-08 05:11:47.622504 | orchestrator | " Expected: registry.osism.tech/osism/osism-frontend:0.20251208.0", 2026-02-08 05:11:47.622515 | orchestrator | " Enabled: true", 2026-02-08 05:11:47.622526 | orchestrator | " Running: registry.osism.tech/osism/osism-frontend:0.20251208.0", 2026-02-08 05:11:47.622537 | orchestrator | " Status: ✅ MATCH", 2026-02-08 05:11:47.622548 | orchestrator | "", 2026-02-08 05:11:47.622563 | orchestrator | "Checking service: redis (Redis Cache)", 2026-02-08 05:11:47.622575 | orchestrator | " Expected: registry.osism.tech/dockerhub/library/redis:7.4.7-alpine", 2026-02-08 05:11:47.622586 | orchestrator | " Enabled: true", 2026-02-08 05:11:47.622597 | orchestrator | " Running: registry.osism.tech/dockerhub/library/redis:7.4.7-alpine", 2026-02-08 05:11:47.622608 | orchestrator | " Status: ✅ MATCH", 2026-02-08 05:11:47.622619 | orchestrator | "", 2026-02-08 05:11:47.622630 | orchestrator | "Checking service: api (OSISM API Service)", 2026-02-08 05:11:47.622641 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251208.0", 2026-02-08 05:11:47.622651 | orchestrator | " Enabled: true", 2026-02-08 05:11:47.622662 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251208.0", 2026-02-08 05:11:47.622673 | orchestrator | " Status: ✅ MATCH", 2026-02-08 
05:11:47.622684 | orchestrator | "", 2026-02-08 05:11:47.622695 | orchestrator | "Checking service: listener (OpenStack Event Listener)", 2026-02-08 05:11:47.622705 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251208.0", 2026-02-08 05:11:47.622716 | orchestrator | " Enabled: true", 2026-02-08 05:11:47.622727 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251208.0", 2026-02-08 05:11:47.622737 | orchestrator | " Status: ✅ MATCH", 2026-02-08 05:11:47.622748 | orchestrator | "", 2026-02-08 05:11:47.622759 | orchestrator | "Checking service: openstack (OpenStack Integration)", 2026-02-08 05:11:47.622770 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251208.0", 2026-02-08 05:11:47.622781 | orchestrator | " Enabled: true", 2026-02-08 05:11:47.622792 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251208.0", 2026-02-08 05:11:47.622802 | orchestrator | " Status: ✅ MATCH", 2026-02-08 05:11:47.622813 | orchestrator | "", 2026-02-08 05:11:47.622824 | orchestrator | "Checking service: beat (Celery Beat Scheduler)", 2026-02-08 05:11:47.622835 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251208.0", 2026-02-08 05:11:47.622846 | orchestrator | " Enabled: true", 2026-02-08 05:11:47.622857 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251208.0", 2026-02-08 05:11:47.622887 | orchestrator | " Status: ✅ MATCH", 2026-02-08 05:11:47.622899 | orchestrator | "", 2026-02-08 05:11:47.622910 | orchestrator | "Checking service: flower (Celery Flower Monitor)", 2026-02-08 05:11:47.622921 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251208.0", 2026-02-08 05:11:47.622940 | orchestrator | " Enabled: true", 2026-02-08 05:11:47.622951 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251208.0", 2026-02-08 05:11:47.622962 | orchestrator | " Status: ✅ MATCH", 2026-02-08 05:11:47.622973 | orchestrator | "", 2026-02-08 05:11:47.622984 | orchestrator | "=== Summary 
===", 2026-02-08 05:11:47.622994 | orchestrator | "Errors (version mismatches): 0", 2026-02-08 05:11:47.623005 | orchestrator | "Warnings (expected containers not running): 0", 2026-02-08 05:11:47.623016 | orchestrator | "", 2026-02-08 05:11:47.623027 | orchestrator | "✅ All running containers match expected versions!" 2026-02-08 05:11:47.623038 | orchestrator | ] 2026-02-08 05:11:47.623049 | orchestrator | } 2026-02-08 05:11:47.623062 | orchestrator | 2026-02-08 05:11:47.623073 | orchestrator | TASK [osism.services.manager : Skip version check due to service configuration] *** 2026-02-08 05:11:47.686570 | orchestrator | skipping: [testbed-manager] 2026-02-08 05:11:47.686671 | orchestrator | 2026-02-08 05:11:47.686688 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-08 05:11:47.686702 | orchestrator | testbed-manager : ok=51 changed=9 unreachable=0 failed=0 skipped=8 rescued=0 ignored=0 2026-02-08 05:11:47.686714 | orchestrator | 2026-02-08 05:12:00.202538 | orchestrator | 2026-02-08 05:12:00 | INFO  | Task 6aea6815-72d5-4c75-8969-7dc8410b4c07 (sync inventory) is running in background. Output coming soon. 
2026-02-08 05:12:28.909831 | orchestrator | 2026-02-08 05:12:01 | INFO  | Starting group_vars file reorganization 2026-02-08 05:12:28.909949 | orchestrator | 2026-02-08 05:12:01 | INFO  | Moved 0 file(s) to their respective directories 2026-02-08 05:12:28.909965 | orchestrator | 2026-02-08 05:12:01 | INFO  | Group_vars file reorganization completed 2026-02-08 05:12:28.909997 | orchestrator | 2026-02-08 05:12:04 | INFO  | Starting variable preparation from inventory 2026-02-08 05:12:28.910008 | orchestrator | 2026-02-08 05:12:07 | INFO  | Writing 050-kolla-ceph-rgw-hosts.yml with ceph_rgw_hosts 2026-02-08 05:12:28.910076 | orchestrator | 2026-02-08 05:12:07 | INFO  | Writing 050-infrastructure-cephclient-mons.yml with cephclient_mons 2026-02-08 05:12:28.910088 | orchestrator | 2026-02-08 05:12:07 | INFO  | Writing 050-ceph-cluster-fsid.yml with ceph_cluster_fsid 2026-02-08 05:12:28.910098 | orchestrator | 2026-02-08 05:12:07 | INFO  | 3 file(s) written, 6 host(s) processed 2026-02-08 05:12:28.910108 | orchestrator | 2026-02-08 05:12:07 | INFO  | Variable preparation completed 2026-02-08 05:12:28.910118 | orchestrator | 2026-02-08 05:12:09 | INFO  | Starting inventory overwrite handling 2026-02-08 05:12:28.910128 | orchestrator | 2026-02-08 05:12:09 | INFO  | Handling group overwrites in 99-overwrite 2026-02-08 05:12:28.910138 | orchestrator | 2026-02-08 05:12:09 | INFO  | Removing group frr:children from 60-generic 2026-02-08 05:12:28.910148 | orchestrator | 2026-02-08 05:12:09 | INFO  | Removing group netbird:children from 50-infrastructure 2026-02-08 05:12:28.910158 | orchestrator | 2026-02-08 05:12:09 | INFO  | Removing group ceph-rgw from 50-ceph 2026-02-08 05:12:28.910168 | orchestrator | 2026-02-08 05:12:09 | INFO  | Removing group ceph-mds from 50-ceph 2026-02-08 05:12:28.910177 | orchestrator | 2026-02-08 05:12:09 | INFO  | Handling group overwrites in 20-roles 2026-02-08 05:12:28.910187 | orchestrator | 2026-02-08 05:12:09 | INFO  | Removing group k3s_node 
from 50-infrastructure 2026-02-08 05:12:28.910197 | orchestrator | 2026-02-08 05:12:09 | INFO  | Removed 5 group(s) in total 2026-02-08 05:12:28.910207 | orchestrator | 2026-02-08 05:12:09 | INFO  | Inventory overwrite handling completed 2026-02-08 05:12:28.910217 | orchestrator | 2026-02-08 05:12:11 | INFO  | Starting merge of inventory files 2026-02-08 05:12:28.910226 | orchestrator | 2026-02-08 05:12:11 | INFO  | Inventory files merged successfully 2026-02-08 05:12:28.910258 | orchestrator | 2026-02-08 05:12:16 | INFO  | Generating ClusterShell configuration from Ansible inventory 2026-02-08 05:12:28.910269 | orchestrator | 2026-02-08 05:12:27 | INFO  | Successfully wrote ClusterShell configuration 2026-02-08 05:12:29.303582 | orchestrator | + [[ '' == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2026-02-08 05:12:29.303657 | orchestrator | + wait_for_container_healthy 60 kolla-ansible 2026-02-08 05:12:29.303665 | orchestrator | + local max_attempts=60 2026-02-08 05:12:29.303672 | orchestrator | + local name=kolla-ansible 2026-02-08 05:12:29.303677 | orchestrator | + local attempt_num=1 2026-02-08 05:12:29.303964 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible 2026-02-08 05:12:29.339904 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-02-08 05:12:29.339992 | orchestrator | + wait_for_container_healthy 60 osism-ansible 2026-02-08 05:12:29.340005 | orchestrator | + local max_attempts=60 2026-02-08 05:12:29.340060 | orchestrator | + local name=osism-ansible 2026-02-08 05:12:29.340070 | orchestrator | + local attempt_num=1 2026-02-08 05:12:29.340141 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible 2026-02-08 05:12:29.378689 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-02-08 05:12:29.378755 | orchestrator | + docker compose --project-directory /opt/manager ps 2026-02-08 05:12:29.562208 | orchestrator | NAME IMAGE COMMAND SERVICE CREATED STATUS PORTS 2026-02-08 05:12:29.562296 | 
orchestrator | ceph-ansible registry.osism.tech/osism/ceph-ansible:0.20251208.0 "/entrypoint.sh osis…" ceph-ansible 3 minutes ago Up 2 minutes (healthy) 2026-02-08 05:12:29.562314 | orchestrator | kolla-ansible registry.osism.tech/osism/kolla-ansible:0.20251208.0 "/entrypoint.sh osis…" kolla-ansible 3 minutes ago Up 2 minutes (healthy) 2026-02-08 05:12:29.562328 | orchestrator | manager-api-1 registry.osism.tech/osism/osism:0.20251208.0 "/sbin/tini -- osism…" api 3 minutes ago Up 3 minutes (healthy) 192.168.16.5:8000->8000/tcp 2026-02-08 05:12:29.562344 | orchestrator | manager-ara-server-1 registry.osism.tech/osism/ara-server:1.7.3 "sh -c '/wait && /ru…" ara-server 2 hours ago Up 2 minutes (healthy) 8000/tcp 2026-02-08 05:12:29.562356 | orchestrator | manager-beat-1 registry.osism.tech/osism/osism:0.20251208.0 "/sbin/tini -- osism…" beat 3 minutes ago Up 3 minutes (healthy) 2026-02-08 05:12:29.562367 | orchestrator | manager-flower-1 registry.osism.tech/osism/osism:0.20251208.0 "/sbin/tini -- osism…" flower 3 minutes ago Up 3 minutes (healthy) 2026-02-08 05:12:29.562378 | orchestrator | manager-inventory_reconciler-1 registry.osism.tech/osism/inventory-reconciler:0.20251208.0 "/sbin/tini -- /entr…" inventory_reconciler 3 minutes ago Up 2 minutes (healthy) 2026-02-08 05:12:29.562389 | orchestrator | manager-listener-1 registry.osism.tech/osism/osism:0.20251208.0 "/sbin/tini -- osism…" listener 3 minutes ago Restarting (0) 20 seconds ago 2026-02-08 05:12:29.562400 | orchestrator | manager-mariadb-1 registry.osism.tech/dockerhub/library/mariadb:11.8.4 "docker-entrypoint.s…" mariadb 2 hours ago Up 3 minutes (healthy) 3306/tcp 2026-02-08 05:12:29.562451 | orchestrator | manager-openstack-1 registry.osism.tech/osism/osism:0.20251208.0 "/sbin/tini -- osism…" openstack 3 minutes ago Up 3 minutes (healthy) 2026-02-08 05:12:29.562462 | orchestrator | manager-redis-1 registry.osism.tech/dockerhub/library/redis:7.4.7-alpine "docker-entrypoint.s…" redis 2 hours ago Up 3 
minutes (healthy) 6379/tcp 2026-02-08 05:12:29.562473 | orchestrator | osism-ansible registry.osism.tech/osism/osism-ansible:0.20251208.0 "/entrypoint.sh osis…" osism-ansible 3 minutes ago Up 2 minutes (healthy) 2026-02-08 05:12:29.562510 | orchestrator | osism-frontend registry.osism.tech/osism/osism-frontend:0.20251208.0 "docker-entrypoint.s…" frontend 3 minutes ago Up 3 minutes 192.168.16.5:3000->3000/tcp 2026-02-08 05:12:29.562522 | orchestrator | osism-kubernetes registry.osism.tech/osism/osism-kubernetes:0.20251208.0 "/entrypoint.sh osis…" osism-kubernetes 3 minutes ago Up 2 minutes (healthy) 2026-02-08 05:12:29.562533 | orchestrator | osismclient registry.osism.tech/osism/osism:0.20251208.0 "/sbin/tini -- sleep…" osismclient 3 minutes ago Up 3 minutes (healthy) 2026-02-08 05:12:29.567963 | orchestrator | + [[ '' == \t\r\u\e ]] 2026-02-08 05:12:29.567989 | orchestrator | + [[ '' == \f\a\l\s\e ]] 2026-02-08 05:12:29.568001 | orchestrator | + osism apply facts 2026-02-08 05:12:41.897051 | orchestrator | 2026-02-08 05:12:41 | INFO  | Task 0f58b294-9268-4478-b22c-653d4b9d865b (facts) was prepared for execution. 2026-02-08 05:12:41.897161 | orchestrator | 2026-02-08 05:12:41 | INFO  | It takes a moment until task 0f58b294-9268-4478-b22c-653d4b9d865b (facts) has been started and output is visible here. 
2026-02-08 05:13:00.868333 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_play_start) in callback plugin 2026-02-08 05:13:00.868517 | orchestrator | (): Expecting value: line 2 column 1 (char 1) 2026-02-08 05:13:00.868562 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_task_start) in callback plugin 2026-02-08 05:13:00.868581 | orchestrator | (): 'NoneType' object is not subscriptable 2026-02-08 05:13:00.868617 | orchestrator | 2026-02-08 05:13:00.868638 | orchestrator | PLAY [Apply role facts] ******************************************************** 2026-02-08 05:13:00.868656 | orchestrator | 2026-02-08 05:13:00.868673 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2026-02-08 05:13:00.868692 | orchestrator | Sunday 08 February 2026 05:12:48 +0000 (0:00:01.839) 0:00:01.839 ******* 2026-02-08 05:13:00.868709 | orchestrator | ok: [testbed-manager] 2026-02-08 05:13:00.868728 | orchestrator | ok: [testbed-node-0] 2026-02-08 05:13:00.868745 | orchestrator | ok: [testbed-node-1] 2026-02-08 05:13:00.868761 | orchestrator | ok: [testbed-node-2] 2026-02-08 05:13:00.868778 | orchestrator | ok: [testbed-node-3] 2026-02-08 05:13:00.868795 | orchestrator | ok: [testbed-node-4] 2026-02-08 05:13:00.868812 | orchestrator | ok: [testbed-node-5] 2026-02-08 05:13:00.868829 | orchestrator | 2026-02-08 05:13:00.868847 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2026-02-08 05:13:00.868865 | orchestrator | Sunday 08 February 2026 05:12:50 +0000 (0:00:02.187) 0:00:04.026 ******* 2026-02-08 05:13:00.868883 | orchestrator | skipping: [testbed-manager] 2026-02-08 05:13:00.868903 | orchestrator | skipping: [testbed-node-0] 2026-02-08 05:13:00.868946 | orchestrator | skipping: [testbed-node-1] 2026-02-08 05:13:00.868965 | orchestrator | skipping: [testbed-node-2] 2026-02-08 05:13:00.868990 | orchestrator | skipping: [testbed-node-3] 2026-02-08 
05:13:00.869013 | orchestrator | skipping: [testbed-node-4] 2026-02-08 05:13:00.869032 | orchestrator | skipping: [testbed-node-5] 2026-02-08 05:13:00.869053 | orchestrator | 2026-02-08 05:13:00.869074 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2026-02-08 05:13:00.869095 | orchestrator | 2026-02-08 05:13:00.869115 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2026-02-08 05:13:00.869136 | orchestrator | Sunday 08 February 2026 05:12:52 +0000 (0:00:01.833) 0:00:05.860 ******* 2026-02-08 05:13:00.869156 | orchestrator | ok: [testbed-node-1] 2026-02-08 05:13:00.869175 | orchestrator | ok: [testbed-node-0] 2026-02-08 05:13:00.869196 | orchestrator | ok: [testbed-manager] 2026-02-08 05:13:00.869215 | orchestrator | ok: [testbed-node-2] 2026-02-08 05:13:00.869264 | orchestrator | ok: [testbed-node-3] 2026-02-08 05:13:00.869282 | orchestrator | ok: [testbed-node-4] 2026-02-08 05:13:00.869299 | orchestrator | ok: [testbed-node-5] 2026-02-08 05:13:00.869317 | orchestrator | 2026-02-08 05:13:00.869337 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2026-02-08 05:13:00.869356 | orchestrator | 2026-02-08 05:13:00.869375 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2026-02-08 05:13:00.869394 | orchestrator | Sunday 08 February 2026 05:12:58 +0000 (0:00:06.094) 0:00:11.954 ******* 2026-02-08 05:13:00.869413 | orchestrator | skipping: [testbed-manager] 2026-02-08 05:13:00.869459 | orchestrator | skipping: [testbed-node-0] 2026-02-08 05:13:00.869477 | orchestrator | skipping: [testbed-node-1] 2026-02-08 05:13:00.869496 | orchestrator | skipping: [testbed-node-2] 2026-02-08 05:13:00.869515 | orchestrator | skipping: [testbed-node-3] 2026-02-08 05:13:00.869533 | orchestrator | skipping: [testbed-node-4] 2026-02-08 05:13:00.869550 | orchestrator | skipping: [testbed-node-5] 
2026-02-08 05:13:00.869567 | orchestrator | 2026-02-08 05:13:00.869586 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-08 05:13:00.869605 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-08 05:13:00.869625 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-08 05:13:00.869644 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-08 05:13:00.869665 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-08 05:13:00.869683 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-08 05:13:00.869702 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-08 05:13:00.869722 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-08 05:13:00.869743 | orchestrator | 2026-02-08 05:13:00.869763 | orchestrator | 2026-02-08 05:13:00.869782 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-08 05:13:00.869802 | orchestrator | Sunday 08 February 2026 05:13:00 +0000 (0:00:02.024) 0:00:13.978 ******* 2026-02-08 05:13:00.869822 | orchestrator | =============================================================================== 2026-02-08 05:13:00.869842 | orchestrator | Gathers facts about hosts ----------------------------------------------- 6.09s 2026-02-08 05:13:00.869861 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 2.19s 2026-02-08 05:13:00.869879 | orchestrator | Gather facts for all hosts ---------------------------------------------- 2.02s 2026-02-08 05:13:00.869899 | orchestrator | osism.commons.facts : Copy fact files 
----------------------------------- 1.83s 2026-02-08 05:13:01.401946 | orchestrator | ++ semver 10.0.0-rc.1 10.0.0-0 2026-02-08 05:13:01.510387 | orchestrator | + [[ 1 -ge 0 ]] 2026-02-08 05:13:01.511466 | orchestrator | ++ docker inspect --format '{{ index .Config.Labels "de.osism.release.openstack"}}' kolla-ansible 2026-02-08 05:13:01.552941 | orchestrator | + OPENSTACK_VERSION=2025.1 2026-02-08 05:13:01.553049 | orchestrator | + /opt/configuration/scripts/set-kolla-namespace.sh kolla/release/2025.1 2026-02-08 05:13:01.557792 | orchestrator | + set -e 2026-02-08 05:13:01.557860 | orchestrator | + NAMESPACE=kolla/release/2025.1 2026-02-08 05:13:01.557874 | orchestrator | + sed -i 's#docker_namespace: .*#docker_namespace: kolla/release/2025.1#g' /opt/configuration/inventory/group_vars/all/kolla.yml 2026-02-08 05:13:01.565790 | orchestrator | + sh -c /opt/configuration/scripts/upgrade-services.sh 2026-02-08 05:13:01.577471 | orchestrator | + set -e 2026-02-08 05:13:01.577542 | orchestrator | 2026-02-08 05:13:01.577593 | orchestrator | # UPGRADE SERVICES 2026-02-08 05:13:01.577611 | orchestrator | 2026-02-08 05:13:01.577629 | orchestrator | + echo 2026-02-08 05:13:01.577647 | orchestrator | + echo '# UPGRADE SERVICES' 2026-02-08 05:13:01.577667 | orchestrator | + echo 2026-02-08 05:13:01.577685 | orchestrator | + source /opt/manager-vars.sh 2026-02-08 05:13:01.579131 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-02-08 05:13:01.579167 | orchestrator | ++ NUMBER_OF_NODES=6 2026-02-08 05:13:01.579179 | orchestrator | ++ export CEPH_VERSION=reef 2026-02-08 05:13:01.579189 | orchestrator | ++ CEPH_VERSION=reef 2026-02-08 05:13:01.579200 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-02-08 05:13:01.579213 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-02-08 05:13:01.579223 | orchestrator | ++ export MANAGER_VERSION=9.5.0 2026-02-08 05:13:01.579234 | orchestrator | ++ MANAGER_VERSION=9.5.0 2026-02-08 05:13:01.579245 | orchestrator | ++ export 
OPENSTACK_VERSION=2024.2 2026-02-08 05:13:01.579256 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-02-08 05:13:01.579267 | orchestrator | ++ export ARA=false 2026-02-08 05:13:01.579278 | orchestrator | ++ ARA=false 2026-02-08 05:13:01.579289 | orchestrator | ++ export DEPLOY_MODE=manager 2026-02-08 05:13:01.579299 | orchestrator | ++ DEPLOY_MODE=manager 2026-02-08 05:13:01.579310 | orchestrator | ++ export TEMPEST=false 2026-02-08 05:13:01.579321 | orchestrator | ++ TEMPEST=false 2026-02-08 05:13:01.579331 | orchestrator | ++ export IS_ZUUL=true 2026-02-08 05:13:01.579342 | orchestrator | ++ IS_ZUUL=true 2026-02-08 05:13:01.579353 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.37 2026-02-08 05:13:01.579364 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.37 2026-02-08 05:13:01.579374 | orchestrator | ++ export EXTERNAL_API=false 2026-02-08 05:13:01.579385 | orchestrator | ++ EXTERNAL_API=false 2026-02-08 05:13:01.579395 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-02-08 05:13:01.579406 | orchestrator | ++ IMAGE_USER=ubuntu 2026-02-08 05:13:01.579417 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-02-08 05:13:01.579472 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-02-08 05:13:01.579484 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-02-08 05:13:01.579496 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-02-08 05:13:01.579506 | orchestrator | ++ export RABBITMQ3TO4=true 2026-02-08 05:13:01.579517 | orchestrator | ++ RABBITMQ3TO4=true 2026-02-08 05:13:01.579548 | orchestrator | + SKIP_OPENSTACK_UPGRADE=false 2026-02-08 05:13:01.579559 | orchestrator | + SKIP_CEPH_UPGRADE=false 2026-02-08 05:13:01.579570 | orchestrator | + sh -c /opt/configuration/scripts/pull-images.sh 2026-02-08 05:13:01.589078 | orchestrator | + set -e 2026-02-08 05:13:01.589105 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-02-08 05:13:01.589869 | orchestrator | ++ export INTERACTIVE=false 2026-02-08 05:13:01.589893 | 
orchestrator | ++ INTERACTIVE=false 2026-02-08 05:13:01.589904 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-02-08 05:13:01.589923 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-02-08 05:13:01.590555 | orchestrator | + source /opt/manager-vars.sh 2026-02-08 05:13:01.590575 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-02-08 05:13:01.590587 | orchestrator | ++ NUMBER_OF_NODES=6 2026-02-08 05:13:01.590597 | orchestrator | ++ export CEPH_VERSION=reef 2026-02-08 05:13:01.590608 | orchestrator | ++ CEPH_VERSION=reef 2026-02-08 05:13:01.590619 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-02-08 05:13:01.590631 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-02-08 05:13:01.590642 | orchestrator | ++ export MANAGER_VERSION=9.5.0 2026-02-08 05:13:01.590653 | orchestrator | ++ MANAGER_VERSION=9.5.0 2026-02-08 05:13:01.590664 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-02-08 05:13:01.590675 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-02-08 05:13:01.590686 | orchestrator | ++ export ARA=false 2026-02-08 05:13:01.590697 | orchestrator | ++ ARA=false 2026-02-08 05:13:01.590707 | orchestrator | ++ export DEPLOY_MODE=manager 2026-02-08 05:13:01.590718 | orchestrator | ++ DEPLOY_MODE=manager 2026-02-08 05:13:01.590729 | orchestrator | ++ export TEMPEST=false 2026-02-08 05:13:01.590739 | orchestrator | ++ TEMPEST=false 2026-02-08 05:13:01.590750 | orchestrator | ++ export IS_ZUUL=true 2026-02-08 05:13:01.590761 | orchestrator | ++ IS_ZUUL=true 2026-02-08 05:13:01.590773 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.37 2026-02-08 05:13:01.590784 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.37 2026-02-08 05:13:01.590795 | orchestrator | ++ export EXTERNAL_API=false 2026-02-08 05:13:01.590806 | orchestrator | ++ EXTERNAL_API=false 2026-02-08 05:13:01.590816 | orchestrator | 2026-02-08 05:13:01.590828 | orchestrator | # PULL IMAGES 2026-02-08 05:13:01.590838 | orchestrator | 2026-02-08 05:13:01.590849 | 
orchestrator | ++ export IMAGE_USER=ubuntu 2026-02-08 05:13:01.590860 | orchestrator | ++ IMAGE_USER=ubuntu 2026-02-08 05:13:01.590871 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-02-08 05:13:01.590882 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-02-08 05:13:01.590914 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-02-08 05:13:01.590925 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-02-08 05:13:01.590936 | orchestrator | ++ export RABBITMQ3TO4=true 2026-02-08 05:13:01.590946 | orchestrator | ++ RABBITMQ3TO4=true 2026-02-08 05:13:01.590957 | orchestrator | + echo 2026-02-08 05:13:01.590968 | orchestrator | + echo '# PULL IMAGES' 2026-02-08 05:13:01.590979 | orchestrator | + echo 2026-02-08 05:13:01.591798 | orchestrator | ++ semver 9.5.0 7.0.0 2026-02-08 05:13:01.648207 | orchestrator | + [[ 1 -ge 0 ]] 2026-02-08 05:13:01.648300 | orchestrator | + osism apply --no-wait -r 2 -e custom pull-images 2026-02-08 05:13:03.621362 | orchestrator | 2026-02-08 05:13:03 | INFO  | Trying to run play pull-images in environment custom 2026-02-08 05:13:13.765045 | orchestrator | 2026-02-08 05:13:13 | INFO  | Task 29ce956f-2745-4675-ab83-3579c07961e5 (pull-images) was prepared for execution. 2026-02-08 05:13:13.765153 | orchestrator | 2026-02-08 05:13:13 | INFO  | Task 29ce956f-2745-4675-ab83-3579c07961e5 is running in background. No more output. Check ARA for logs. 
2026-02-08 05:13:14.138243 | orchestrator | + sh -c /opt/configuration/scripts/upgrade/500-kubernetes.sh 2026-02-08 05:13:14.144310 | orchestrator | + set -e 2026-02-08 05:13:14.144384 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-02-08 05:13:14.144396 | orchestrator | ++ export INTERACTIVE=false 2026-02-08 05:13:14.144405 | orchestrator | ++ INTERACTIVE=false 2026-02-08 05:13:14.144413 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-02-08 05:13:14.144421 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-02-08 05:13:14.144428 | orchestrator | + source /opt/configuration/scripts/manager-version.sh 2026-02-08 05:13:14.146208 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml 2026-02-08 05:13:14.156803 | orchestrator | ++ export MANAGER_VERSION=10.0.0-rc.1 2026-02-08 05:13:14.156889 | orchestrator | ++ MANAGER_VERSION=10.0.0-rc.1 2026-02-08 05:13:14.157914 | orchestrator | ++ semver 10.0.0-rc.1 8.0.3 2026-02-08 05:13:14.203682 | orchestrator | + [[ 1 -ge 0 ]] 2026-02-08 05:13:14.203771 | orchestrator | + osism apply frr 2026-02-08 05:13:26.492291 | orchestrator | 2026-02-08 05:13:26 | INFO  | Task 024b7c14-054a-4014-aa10-5b629e728eb3 (frr) was prepared for execution. 2026-02-08 05:13:26.492406 | orchestrator | 2026-02-08 05:13:26 | INFO  | It takes a moment until task 024b7c14-054a-4014-aa10-5b629e728eb3 (frr) has been started and output is visible here. 
2026-02-08 05:13:57.818668 | orchestrator |
2026-02-08 05:13:57.818802 | orchestrator | PLAY [Apply role frr] **********************************************************
2026-02-08 05:13:57.818826 | orchestrator |
2026-02-08 05:13:57.818843 | orchestrator | TASK [osism.services.frr : Include distribution specific install tasks] ********
2026-02-08 05:13:57.818860 | orchestrator | Sunday 08 February 2026 05:13:34 +0000 (0:00:03.023) 0:00:03.023 *******
2026-02-08 05:13:57.818875 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/frr/tasks/install-Debian-family.yml for testbed-manager
2026-02-08 05:13:57.818892 | orchestrator |
2026-02-08 05:13:57.818907 | orchestrator | TASK [osism.services.frr : Pin frr package version] ****************************
2026-02-08 05:13:57.818923 | orchestrator | Sunday 08 February 2026 05:13:35 +0000 (0:00:01.698) 0:00:04.721 *******
2026-02-08 05:13:57.818938 | orchestrator | ok: [testbed-manager]
2026-02-08 05:13:57.818954 | orchestrator |
2026-02-08 05:13:57.818969 | orchestrator | TASK [osism.services.frr : Install frr package] ********************************
2026-02-08 05:13:57.818985 | orchestrator | Sunday 08 February 2026 05:13:38 +0000 (0:00:02.076) 0:00:06.798 *******
2026-02-08 05:13:57.819000 | orchestrator | ok: [testbed-manager]
2026-02-08 05:13:57.819014 | orchestrator |
2026-02-08 05:13:57.819033 | orchestrator | TASK [osism.services.frr : Copy file: /etc/frr/vtysh.conf] *********************
2026-02-08 05:13:57.819052 | orchestrator | Sunday 08 February 2026 05:13:40 +0000 (0:00:02.632) 0:00:09.430 *******
2026-02-08 05:13:57.819071 | orchestrator | ok: [testbed-manager]
2026-02-08 05:13:57.819089 | orchestrator |
2026-02-08 05:13:57.819108 | orchestrator | TASK [osism.services.frr : Copy file: /etc/frr/daemons] ************************
2026-02-08 05:13:57.819127 | orchestrator | Sunday 08 February 2026 05:13:42 +0000 (0:00:01.881) 0:00:11.312 *******
2026-02-08 05:13:57.819145 | orchestrator | ok: [testbed-manager]
2026-02-08 05:13:57.819192 | orchestrator |
2026-02-08 05:13:57.819213 | orchestrator | TASK [osism.services.frr : Set _frr_uplinks fact] ******************************
2026-02-08 05:13:57.819233 | orchestrator | Sunday 08 February 2026 05:13:44 +0000 (0:00:01.889) 0:00:13.201 *******
2026-02-08 05:13:57.819252 | orchestrator | ok: [testbed-manager]
2026-02-08 05:13:57.819270 | orchestrator |
2026-02-08 05:13:57.819289 | orchestrator | TASK [osism.services.frr : Check for frr.conf file in the configuration repository] ***
2026-02-08 05:13:57.819309 | orchestrator | Sunday 08 February 2026 05:13:47 +0000 (0:00:02.605) 0:00:15.807 *******
2026-02-08 05:13:57.819328 | orchestrator | skipping: [testbed-manager]
2026-02-08 05:13:57.819348 | orchestrator |
2026-02-08 05:13:57.819367 | orchestrator | TASK [osism.services.frr : Copy frr.conf file from the configuration repository] ***
2026-02-08 05:13:57.819385 | orchestrator | Sunday 08 February 2026 05:13:48 +0000 (0:00:01.132) 0:00:16.940 *******
2026-02-08 05:13:57.819404 | orchestrator | skipping: [testbed-manager]
2026-02-08 05:13:57.819422 | orchestrator |
2026-02-08 05:13:57.819441 | orchestrator | TASK [osism.services.frr : Copy default frr.conf file of type k3s_cilium] ******
2026-02-08 05:13:57.819489 | orchestrator | Sunday 08 February 2026 05:13:49 +0000 (0:00:01.230) 0:00:18.170 *******
2026-02-08 05:13:57.819508 | orchestrator | ok: [testbed-manager]
2026-02-08 05:13:57.819528 | orchestrator |
2026-02-08 05:13:57.819547 | orchestrator | TASK [osism.services.frr : Set sysctl parameters] ******************************
2026-02-08 05:13:57.819565 | orchestrator | Sunday 08 February 2026 05:13:51 +0000 (0:00:01.945) 0:00:20.116 *******
2026-02-08 05:13:57.819583 | orchestrator | ok: [testbed-manager] => (item={'name': 'net.ipv4.ip_forward', 'value': 1})
2026-02-08 05:13:57.819614 | orchestrator | ok: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.send_redirects', 'value': 0})
2026-02-08 05:13:57.819636 | orchestrator | ok: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.accept_redirects', 'value': 0})
2026-02-08 05:13:57.819654 | orchestrator | ok: [testbed-manager] => (item={'name': 'net.ipv4.fib_multipath_hash_policy', 'value': 1})
2026-02-08 05:13:57.819673 | orchestrator | ok: [testbed-manager] => (item={'name': 'net.ipv4.conf.default.ignore_routes_with_linkdown', 'value': 1})
2026-02-08 05:13:57.819691 | orchestrator | ok: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.rp_filter', 'value': 2})
2026-02-08 05:13:57.819711 | orchestrator |
2026-02-08 05:13:57.819729 | orchestrator | TASK [osism.services.frr : Manage frr service] *********************************
2026-02-08 05:13:57.819748 | orchestrator | Sunday 08 February 2026 05:13:54 +0000 (0:00:03.603) 0:00:23.719 *******
2026-02-08 05:13:57.819759 | orchestrator | ok: [testbed-manager]
2026-02-08 05:13:57.819770 | orchestrator |
2026-02-08 05:13:57.819781 | orchestrator | PLAY RECAP *********************************************************************
2026-02-08 05:13:57.819792 | orchestrator | testbed-manager : ok=9  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-08 05:13:57.819803 | orchestrator |
2026-02-08 05:13:57.819814 | orchestrator |
2026-02-08 05:13:57.819824 | orchestrator | TASKS RECAP ********************************************************************
2026-02-08 05:13:57.819835 | orchestrator | Sunday 08 February 2026 05:13:57 +0000 (0:00:02.510) 0:00:26.229 *******
2026-02-08 05:13:57.819869 | orchestrator | ===============================================================================
2026-02-08 05:13:57.819888 | orchestrator | osism.services.frr : Set sysctl parameters ------------------------------ 3.60s
2026-02-08 05:13:57.819906 | orchestrator | osism.services.frr : Install frr package -------------------------------- 2.63s
2026-02-08 05:13:57.819924 | orchestrator | osism.services.frr : Set _frr_uplinks fact ------------------------------ 2.61s
2026-02-08 05:13:57.819942 | orchestrator | osism.services.frr : Manage frr service --------------------------------- 2.51s
2026-02-08 05:13:57.819960 | orchestrator | osism.services.frr : Pin frr package version ---------------------------- 2.08s
2026-02-08 05:13:57.819976 | orchestrator | osism.services.frr : Copy default frr.conf file of type k3s_cilium ------ 1.95s
2026-02-08 05:13:57.819991 | orchestrator | osism.services.frr : Copy file: /etc/frr/daemons ------------------------ 1.89s
2026-02-08 05:13:57.820022 | orchestrator | osism.services.frr : Copy file: /etc/frr/vtysh.conf --------------------- 1.88s
2026-02-08 05:13:57.820066 | orchestrator | osism.services.frr : Include distribution specific install tasks -------- 1.70s
2026-02-08 05:13:57.820087 | orchestrator | osism.services.frr : Copy frr.conf file from the configuration repository --- 1.23s
2026-02-08 05:13:57.820105 | orchestrator | osism.services.frr : Check for frr.conf file in the configuration repository --- 1.13s
2026-02-08 05:13:58.199981 | orchestrator | + osism apply kubernetes
2026-02-08 05:14:00.453968 | orchestrator | 2026-02-08 05:14:00 | INFO  | Task 0501d58e-7488-458b-815d-16ea94fb5469 (kubernetes) was prepared for execution.
2026-02-08 05:14:00.454139 | orchestrator | 2026-02-08 05:14:00 | INFO  | It takes a moment until task 0501d58e-7488-458b-815d-16ea94fb5469 (kubernetes) has been started and output is visible here.
2026-02-08 05:14:44.068639 | orchestrator | 2026-02-08 05:14:44.068750 | orchestrator | PLAY [Prepare all k3s nodes] *************************************************** 2026-02-08 05:14:44.068766 | orchestrator | 2026-02-08 05:14:44.068781 | orchestrator | TASK [k3s_prereq : Validating arguments against arg spec 'main' - Prerequisites] *** 2026-02-08 05:14:44.068790 | orchestrator | Sunday 08 February 2026 05:14:06 +0000 (0:00:01.627) 0:00:01.627 ******* 2026-02-08 05:14:44.068800 | orchestrator | ok: [testbed-node-3] 2026-02-08 05:14:44.068810 | orchestrator | ok: [testbed-node-4] 2026-02-08 05:14:44.068817 | orchestrator | ok: [testbed-node-5] 2026-02-08 05:14:44.068826 | orchestrator | ok: [testbed-node-0] 2026-02-08 05:14:44.068836 | orchestrator | ok: [testbed-node-1] 2026-02-08 05:14:44.068847 | orchestrator | ok: [testbed-node-2] 2026-02-08 05:14:44.068856 | orchestrator | 2026-02-08 05:14:44.068866 | orchestrator | TASK [k3s_prereq : Set same timezone on every Server] ************************** 2026-02-08 05:14:44.068877 | orchestrator | Sunday 08 February 2026 05:14:10 +0000 (0:00:03.959) 0:00:05.586 ******* 2026-02-08 05:14:44.068890 | orchestrator | skipping: [testbed-node-3] 2026-02-08 05:14:44.068902 | orchestrator | skipping: [testbed-node-4] 2026-02-08 05:14:44.068917 | orchestrator | skipping: [testbed-node-5] 2026-02-08 05:14:44.068927 | orchestrator | skipping: [testbed-node-0] 2026-02-08 05:14:44.068946 | orchestrator | skipping: [testbed-node-1] 2026-02-08 05:14:44.068954 | orchestrator | skipping: [testbed-node-2] 2026-02-08 05:14:44.068965 | orchestrator | 2026-02-08 05:14:44.068988 | orchestrator | TASK [k3s_prereq : Set SELinux to disabled state] ****************************** 2026-02-08 05:14:44.068998 | orchestrator | Sunday 08 February 2026 05:14:12 +0000 (0:00:02.016) 0:00:07.603 ******* 2026-02-08 05:14:44.069007 | orchestrator | skipping: [testbed-node-3] 2026-02-08 05:14:44.069014 | orchestrator | skipping: [testbed-node-4] 2026-02-08 
05:14:44.069023 | orchestrator | skipping: [testbed-node-5] 2026-02-08 05:14:44.069036 | orchestrator | skipping: [testbed-node-0] 2026-02-08 05:14:44.069063 | orchestrator | skipping: [testbed-node-1] 2026-02-08 05:14:44.069070 | orchestrator | skipping: [testbed-node-2] 2026-02-08 05:14:44.069091 | orchestrator | 2026-02-08 05:14:44.069098 | orchestrator | TASK [k3s_prereq : Enable IPv4 forwarding] ************************************* 2026-02-08 05:14:44.069106 | orchestrator | Sunday 08 February 2026 05:14:14 +0000 (0:00:01.843) 0:00:09.447 ******* 2026-02-08 05:14:44.069113 | orchestrator | ok: [testbed-node-3] 2026-02-08 05:14:44.069122 | orchestrator | ok: [testbed-node-4] 2026-02-08 05:14:44.069130 | orchestrator | ok: [testbed-node-5] 2026-02-08 05:14:44.069140 | orchestrator | ok: [testbed-node-0] 2026-02-08 05:14:44.069147 | orchestrator | ok: [testbed-node-1] 2026-02-08 05:14:44.069165 | orchestrator | ok: [testbed-node-2] 2026-02-08 05:14:44.069220 | orchestrator | 2026-02-08 05:14:44.069230 | orchestrator | TASK [k3s_prereq : Enable IPv6 forwarding] ************************************* 2026-02-08 05:14:44.069238 | orchestrator | Sunday 08 February 2026 05:14:17 +0000 (0:00:02.737) 0:00:12.185 ******* 2026-02-08 05:14:44.069252 | orchestrator | ok: [testbed-node-3] 2026-02-08 05:14:44.069261 | orchestrator | ok: [testbed-node-4] 2026-02-08 05:14:44.069271 | orchestrator | ok: [testbed-node-5] 2026-02-08 05:14:44.069278 | orchestrator | ok: [testbed-node-0] 2026-02-08 05:14:44.069307 | orchestrator | ok: [testbed-node-1] 2026-02-08 05:14:44.069317 | orchestrator | ok: [testbed-node-2] 2026-02-08 05:14:44.069325 | orchestrator | 2026-02-08 05:14:44.069334 | orchestrator | TASK [k3s_prereq : Enable IPv6 router advertisements] ************************** 2026-02-08 05:14:44.069347 | orchestrator | Sunday 08 February 2026 05:14:19 +0000 (0:00:02.514) 0:00:14.699 ******* 2026-02-08 05:14:44.069355 | orchestrator | ok: [testbed-node-3] 2026-02-08 
05:14:44.069363 | orchestrator | ok: [testbed-node-4] 2026-02-08 05:14:44.069373 | orchestrator | ok: [testbed-node-5] 2026-02-08 05:14:44.069382 | orchestrator | ok: [testbed-node-0] 2026-02-08 05:14:44.069389 | orchestrator | ok: [testbed-node-1] 2026-02-08 05:14:44.069398 | orchestrator | ok: [testbed-node-2] 2026-02-08 05:14:44.069408 | orchestrator | 2026-02-08 05:14:44.069419 | orchestrator | TASK [k3s_prereq : Add br_netfilter to /etc/modules-load.d/] ******************* 2026-02-08 05:14:44.069432 | orchestrator | Sunday 08 February 2026 05:14:21 +0000 (0:00:02.334) 0:00:17.033 ******* 2026-02-08 05:14:44.069441 | orchestrator | skipping: [testbed-node-3] 2026-02-08 05:14:44.069457 | orchestrator | skipping: [testbed-node-4] 2026-02-08 05:14:44.069515 | orchestrator | skipping: [testbed-node-5] 2026-02-08 05:14:44.069527 | orchestrator | skipping: [testbed-node-0] 2026-02-08 05:14:44.069535 | orchestrator | skipping: [testbed-node-1] 2026-02-08 05:14:44.069545 | orchestrator | skipping: [testbed-node-2] 2026-02-08 05:14:44.069558 | orchestrator | 2026-02-08 05:14:44.069569 | orchestrator | TASK [k3s_prereq : Load br_netfilter] ****************************************** 2026-02-08 05:14:44.069580 | orchestrator | Sunday 08 February 2026 05:14:24 +0000 (0:00:02.064) 0:00:19.098 ******* 2026-02-08 05:14:44.069586 | orchestrator | skipping: [testbed-node-3] 2026-02-08 05:14:44.069596 | orchestrator | skipping: [testbed-node-4] 2026-02-08 05:14:44.069607 | orchestrator | skipping: [testbed-node-5] 2026-02-08 05:14:44.069617 | orchestrator | skipping: [testbed-node-0] 2026-02-08 05:14:44.069634 | orchestrator | skipping: [testbed-node-1] 2026-02-08 05:14:44.069647 | orchestrator | skipping: [testbed-node-2] 2026-02-08 05:14:44.069659 | orchestrator | 2026-02-08 05:14:44.069666 | orchestrator | TASK [k3s_prereq : Set bridge-nf-call-iptables (just to be sure)] ************** 2026-02-08 05:14:44.069672 | orchestrator | Sunday 08 February 2026 05:14:26 +0000 
(0:00:02.014) 0:00:21.112 ******* 2026-02-08 05:14:44.069678 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables)  2026-02-08 05:14:44.069684 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-02-08 05:14:44.069692 | orchestrator | skipping: [testbed-node-3] 2026-02-08 05:14:44.069701 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables)  2026-02-08 05:14:44.069710 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-02-08 05:14:44.069720 | orchestrator | skipping: [testbed-node-4] 2026-02-08 05:14:44.069729 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables)  2026-02-08 05:14:44.069741 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-02-08 05:14:44.069750 | orchestrator | skipping: [testbed-node-5] 2026-02-08 05:14:44.069758 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)  2026-02-08 05:14:44.069771 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-02-08 05:14:44.069788 | orchestrator | skipping: [testbed-node-0] 2026-02-08 05:14:44.069813 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)  2026-02-08 05:14:44.069822 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-02-08 05:14:44.069857 | orchestrator | skipping: [testbed-node-1] 2026-02-08 05:14:44.069871 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)  2026-02-08 05:14:44.069884 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-02-08 05:14:44.069894 | orchestrator | skipping: [testbed-node-2] 2026-02-08 05:14:44.069904 | orchestrator | 2026-02-08 05:14:44.069922 | orchestrator | TASK [k3s_prereq : Add /usr/local/bin to 
sudo secure_path] ********************* 2026-02-08 05:14:44.069939 | orchestrator | Sunday 08 February 2026 05:14:28 +0000 (0:00:02.241) 0:00:23.353 ******* 2026-02-08 05:14:44.069954 | orchestrator | skipping: [testbed-node-3] 2026-02-08 05:14:44.069963 | orchestrator | skipping: [testbed-node-4] 2026-02-08 05:14:44.069970 | orchestrator | skipping: [testbed-node-5] 2026-02-08 05:14:44.069977 | orchestrator | skipping: [testbed-node-0] 2026-02-08 05:14:44.069984 | orchestrator | skipping: [testbed-node-1] 2026-02-08 05:14:44.069998 | orchestrator | skipping: [testbed-node-2] 2026-02-08 05:14:44.070008 | orchestrator | 2026-02-08 05:14:44.070069 | orchestrator | TASK [k3s_download : Validating arguments against arg spec 'main' - Manage the downloading of K3S binaries] *** 2026-02-08 05:14:44.070078 | orchestrator | Sunday 08 February 2026 05:14:30 +0000 (0:00:02.629) 0:00:25.982 ******* 2026-02-08 05:14:44.070151 | orchestrator | ok: [testbed-node-3] 2026-02-08 05:14:44.070168 | orchestrator | ok: [testbed-node-4] 2026-02-08 05:14:44.070183 | orchestrator | ok: [testbed-node-5] 2026-02-08 05:14:44.070213 | orchestrator | ok: [testbed-node-0] 2026-02-08 05:14:44.070224 | orchestrator | ok: [testbed-node-1] 2026-02-08 05:14:44.070248 | orchestrator | ok: [testbed-node-2] 2026-02-08 05:14:44.070257 | orchestrator | 2026-02-08 05:14:44.070275 | orchestrator | TASK [k3s_download : Download k3s binary x64] ********************************** 2026-02-08 05:14:44.070292 | orchestrator | Sunday 08 February 2026 05:14:32 +0000 (0:00:01.950) 0:00:27.933 ******* 2026-02-08 05:14:44.070302 | orchestrator | ok: [testbed-node-4] 2026-02-08 05:14:44.070314 | orchestrator | ok: [testbed-node-5] 2026-02-08 05:14:44.070335 | orchestrator | ok: [testbed-node-3] 2026-02-08 05:14:44.070375 | orchestrator | ok: [testbed-node-0] 2026-02-08 05:14:44.070387 | orchestrator | ok: [testbed-node-1] 2026-02-08 05:14:44.070395 | orchestrator | ok: [testbed-node-2] 2026-02-08 05:14:44.070404 | 
orchestrator | 2026-02-08 05:14:44.070412 | orchestrator | TASK [k3s_download : Download k3s binary arm64] ******************************** 2026-02-08 05:14:44.070425 | orchestrator | Sunday 08 February 2026 05:14:35 +0000 (0:00:02.799) 0:00:30.733 ******* 2026-02-08 05:14:44.070446 | orchestrator | skipping: [testbed-node-3] 2026-02-08 05:14:44.070496 | orchestrator | skipping: [testbed-node-4] 2026-02-08 05:14:44.070513 | orchestrator | skipping: [testbed-node-5] 2026-02-08 05:14:44.070536 | orchestrator | skipping: [testbed-node-0] 2026-02-08 05:14:44.070554 | orchestrator | skipping: [testbed-node-1] 2026-02-08 05:14:44.070576 | orchestrator | skipping: [testbed-node-2] 2026-02-08 05:14:44.070586 | orchestrator | 2026-02-08 05:14:44.070611 | orchestrator | TASK [k3s_download : Download k3s binary armhf] ******************************** 2026-02-08 05:14:44.070622 | orchestrator | Sunday 08 February 2026 05:14:37 +0000 (0:00:01.892) 0:00:32.625 ******* 2026-02-08 05:14:44.070643 | orchestrator | skipping: [testbed-node-3] 2026-02-08 05:14:44.070657 | orchestrator | skipping: [testbed-node-4] 2026-02-08 05:14:44.070674 | orchestrator | skipping: [testbed-node-5] 2026-02-08 05:14:44.070711 | orchestrator | skipping: [testbed-node-0] 2026-02-08 05:14:44.070733 | orchestrator | skipping: [testbed-node-1] 2026-02-08 05:14:44.070743 | orchestrator | skipping: [testbed-node-2] 2026-02-08 05:14:44.070758 | orchestrator | 2026-02-08 05:14:44.070775 | orchestrator | TASK [k3s_custom_registries : Validating arguments against arg spec 'main' - Configure the use of a custom container registry] *** 2026-02-08 05:14:44.070819 | orchestrator | Sunday 08 February 2026 05:14:39 +0000 (0:00:02.188) 0:00:34.813 ******* 2026-02-08 05:14:44.070836 | orchestrator | skipping: [testbed-node-3] 2026-02-08 05:14:44.070866 | orchestrator | skipping: [testbed-node-4] 2026-02-08 05:14:44.070892 | orchestrator | skipping: [testbed-node-5] 2026-02-08 05:14:44.070897 | orchestrator | skipping: 
[testbed-node-0] 2026-02-08 05:14:44.070906 | orchestrator | skipping: [testbed-node-1] 2026-02-08 05:14:44.070913 | orchestrator | skipping: [testbed-node-2] 2026-02-08 05:14:44.070931 | orchestrator | 2026-02-08 05:14:44.070987 | orchestrator | TASK [k3s_custom_registries : Create directory /etc/rancher/k3s] *************** 2026-02-08 05:14:44.071024 | orchestrator | Sunday 08 February 2026 05:14:41 +0000 (0:00:01.831) 0:00:36.645 ******* 2026-02-08 05:14:44.071056 | orchestrator | skipping: [testbed-node-3] => (item=rancher)  2026-02-08 05:14:44.071068 | orchestrator | skipping: [testbed-node-3] => (item=rancher/k3s)  2026-02-08 05:14:44.071112 | orchestrator | skipping: [testbed-node-3] 2026-02-08 05:14:44.071129 | orchestrator | skipping: [testbed-node-4] => (item=rancher)  2026-02-08 05:14:44.071141 | orchestrator | skipping: [testbed-node-4] => (item=rancher/k3s)  2026-02-08 05:14:44.071153 | orchestrator | skipping: [testbed-node-4] 2026-02-08 05:14:44.071166 | orchestrator | skipping: [testbed-node-5] => (item=rancher)  2026-02-08 05:14:44.071189 | orchestrator | skipping: [testbed-node-5] => (item=rancher/k3s)  2026-02-08 05:14:44.071196 | orchestrator | skipping: [testbed-node-5] 2026-02-08 05:14:44.071202 | orchestrator | skipping: [testbed-node-0] => (item=rancher)  2026-02-08 05:14:44.071208 | orchestrator | skipping: [testbed-node-0] => (item=rancher/k3s)  2026-02-08 05:14:44.071220 | orchestrator | skipping: [testbed-node-0] 2026-02-08 05:14:44.071254 | orchestrator | skipping: [testbed-node-1] => (item=rancher)  2026-02-08 05:14:44.071260 | orchestrator | skipping: [testbed-node-1] => (item=rancher/k3s)  2026-02-08 05:14:44.071265 | orchestrator | skipping: [testbed-node-1] 2026-02-08 05:14:44.071271 | orchestrator | skipping: [testbed-node-2] => (item=rancher)  2026-02-08 05:14:44.071277 | orchestrator | skipping: [testbed-node-2] => (item=rancher/k3s)  2026-02-08 05:14:44.071282 | orchestrator | skipping: [testbed-node-2] 2026-02-08 
05:14:44.071289 | orchestrator | 2026-02-08 05:14:44.071294 | orchestrator | TASK [k3s_custom_registries : Insert registries into /etc/rancher/k3s/registries.yaml] *** 2026-02-08 05:14:44.071300 | orchestrator | Sunday 08 February 2026 05:14:43 +0000 (0:00:02.004) 0:00:38.649 ******* 2026-02-08 05:14:44.071307 | orchestrator | skipping: [testbed-node-3] 2026-02-08 05:14:44.071313 | orchestrator | skipping: [testbed-node-4] 2026-02-08 05:14:44.071325 | orchestrator | skipping: [testbed-node-5] 2026-02-08 05:16:37.241348 | orchestrator | skipping: [testbed-node-0] 2026-02-08 05:16:37.241480 | orchestrator | skipping: [testbed-node-1] 2026-02-08 05:16:37.241552 | orchestrator | skipping: [testbed-node-2] 2026-02-08 05:16:37.241573 | orchestrator | 2026-02-08 05:16:37.241592 | orchestrator | TASK [k3s_custom_registries : Remove /etc/rancher/k3s/registries.yaml when no registries configured] *** 2026-02-08 05:16:37.241608 | orchestrator | Sunday 08 February 2026 05:14:45 +0000 (0:00:01.824) 0:00:40.474 ******* 2026-02-08 05:16:37.241623 | orchestrator | skipping: [testbed-node-3] 2026-02-08 05:16:37.241638 | orchestrator | skipping: [testbed-node-4] 2026-02-08 05:16:37.241653 | orchestrator | skipping: [testbed-node-5] 2026-02-08 05:16:37.241667 | orchestrator | skipping: [testbed-node-0] 2026-02-08 05:16:37.241681 | orchestrator | skipping: [testbed-node-1] 2026-02-08 05:16:37.241696 | orchestrator | skipping: [testbed-node-2] 2026-02-08 05:16:37.241707 | orchestrator | 2026-02-08 05:16:37.241717 | orchestrator | PLAY [Deploy k3s master nodes] ************************************************* 2026-02-08 05:16:37.241726 | orchestrator | 2026-02-08 05:16:37.241735 | orchestrator | TASK [k3s_server : Validating arguments against arg spec 'main' - Setup k3s servers] *** 2026-02-08 05:16:37.241745 | orchestrator | Sunday 08 February 2026 05:14:48 +0000 (0:00:02.686) 0:00:43.161 ******* 2026-02-08 05:16:37.241754 | orchestrator | ok: [testbed-node-0] 2026-02-08 
05:16:37.241783 | orchestrator | ok: [testbed-node-1] 2026-02-08 05:16:37.241792 | orchestrator | ok: [testbed-node-2] 2026-02-08 05:16:37.241800 | orchestrator | 2026-02-08 05:16:37.241809 | orchestrator | TASK [k3s_server : Stop k3s-init] ********************************************** 2026-02-08 05:16:37.241823 | orchestrator | Sunday 08 February 2026 05:14:49 +0000 (0:00:01.734) 0:00:44.895 ******* 2026-02-08 05:16:37.241832 | orchestrator | ok: [testbed-node-1] 2026-02-08 05:16:37.241841 | orchestrator | ok: [testbed-node-2] 2026-02-08 05:16:37.241849 | orchestrator | ok: [testbed-node-0] 2026-02-08 05:16:37.241858 | orchestrator | 2026-02-08 05:16:37.241867 | orchestrator | TASK [k3s_server : Stop k3s] *************************************************** 2026-02-08 05:16:37.241876 | orchestrator | Sunday 08 February 2026 05:14:51 +0000 (0:00:02.046) 0:00:46.942 ******* 2026-02-08 05:16:37.241907 | orchestrator | changed: [testbed-node-0] 2026-02-08 05:16:37.241916 | orchestrator | changed: [testbed-node-1] 2026-02-08 05:16:37.241925 | orchestrator | changed: [testbed-node-2] 2026-02-08 05:16:37.241933 | orchestrator | 2026-02-08 05:16:37.241942 | orchestrator | TASK [k3s_server : Clean previous runs of k3s-init] **************************** 2026-02-08 05:16:37.241950 | orchestrator | Sunday 08 February 2026 05:14:54 +0000 (0:00:02.195) 0:00:49.137 ******* 2026-02-08 05:16:37.241959 | orchestrator | ok: [testbed-node-1] 2026-02-08 05:16:37.241967 | orchestrator | ok: [testbed-node-0] 2026-02-08 05:16:37.241976 | orchestrator | ok: [testbed-node-2] 2026-02-08 05:16:37.241984 | orchestrator | 2026-02-08 05:16:37.241993 | orchestrator | TASK [k3s_server : Deploy K3s http_proxy conf] ********************************* 2026-02-08 05:16:37.242001 | orchestrator | Sunday 08 February 2026 05:14:55 +0000 (0:00:01.903) 0:00:51.041 ******* 2026-02-08 05:16:37.242010 | orchestrator | skipping: [testbed-node-0] 2026-02-08 05:16:37.242073 | orchestrator | skipping: 
[testbed-node-1] 2026-02-08 05:16:37.242088 | orchestrator | skipping: [testbed-node-2] 2026-02-08 05:16:37.242103 | orchestrator | 2026-02-08 05:16:37.242117 | orchestrator | TASK [k3s_server : Create /etc/rancher/k3s directory] ************************** 2026-02-08 05:16:37.242131 | orchestrator | Sunday 08 February 2026 05:14:57 +0000 (0:00:01.378) 0:00:52.420 ******* 2026-02-08 05:16:37.242147 | orchestrator | ok: [testbed-node-0] 2026-02-08 05:16:37.242161 | orchestrator | ok: [testbed-node-1] 2026-02-08 05:16:37.242175 | orchestrator | ok: [testbed-node-2] 2026-02-08 05:16:37.242187 | orchestrator | 2026-02-08 05:16:37.242200 | orchestrator | TASK [k3s_server : Create custom resolv.conf for k3s] ************************** 2026-02-08 05:16:37.242215 | orchestrator | Sunday 08 February 2026 05:14:59 +0000 (0:00:01.711) 0:00:54.131 ******* 2026-02-08 05:16:37.242229 | orchestrator | ok: [testbed-node-1] 2026-02-08 05:16:37.242244 | orchestrator | ok: [testbed-node-0] 2026-02-08 05:16:37.242258 | orchestrator | ok: [testbed-node-2] 2026-02-08 05:16:37.242272 | orchestrator | 2026-02-08 05:16:37.242287 | orchestrator | TASK [k3s_server : Deploy vip manifest] **************************************** 2026-02-08 05:16:37.242300 | orchestrator | Sunday 08 February 2026 05:15:01 +0000 (0:00:02.150) 0:00:56.281 ******* 2026-02-08 05:16:37.242316 | orchestrator | included: /ansible/roles/k3s_server/tasks/vip.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-08 05:16:37.242331 | orchestrator | 2026-02-08 05:16:37.242346 | orchestrator | TASK [k3s_server : Set _kube_vip_bgp_peers fact] ******************************* 2026-02-08 05:16:37.242361 | orchestrator | Sunday 08 February 2026 05:15:03 +0000 (0:00:02.075) 0:00:58.356 ******* 2026-02-08 05:16:37.242376 | orchestrator | ok: [testbed-node-0] 2026-02-08 05:16:37.242391 | orchestrator | ok: [testbed-node-1] 2026-02-08 05:16:37.242406 | orchestrator | ok: [testbed-node-2] 2026-02-08 05:16:37.242421 | 
orchestrator | 2026-02-08 05:16:37.242437 | orchestrator | TASK [k3s_server : Create manifests directory on first master] ***************** 2026-02-08 05:16:37.242447 | orchestrator | Sunday 08 February 2026 05:15:05 +0000 (0:00:02.421) 0:01:00.778 ******* 2026-02-08 05:16:37.242456 | orchestrator | skipping: [testbed-node-1] 2026-02-08 05:16:37.242466 | orchestrator | ok: [testbed-node-0] 2026-02-08 05:16:37.242481 | orchestrator | skipping: [testbed-node-2] 2026-02-08 05:16:37.242495 | orchestrator | 2026-02-08 05:16:37.242534 | orchestrator | TASK [k3s_server : Download vip rbac manifest to first master] ***************** 2026-02-08 05:16:37.242549 | orchestrator | Sunday 08 February 2026 05:15:07 +0000 (0:00:01.632) 0:01:02.411 ******* 2026-02-08 05:16:37.242563 | orchestrator | skipping: [testbed-node-1] 2026-02-08 05:16:37.242577 | orchestrator | skipping: [testbed-node-2] 2026-02-08 05:16:37.242591 | orchestrator | changed: [testbed-node-0] 2026-02-08 05:16:37.242604 | orchestrator | 2026-02-08 05:16:37.242617 | orchestrator | TASK [k3s_server : Copy vip manifest to first master] ************************** 2026-02-08 05:16:37.242633 | orchestrator | Sunday 08 February 2026 05:15:09 +0000 (0:00:01.818) 0:01:04.229 ******* 2026-02-08 05:16:37.242648 | orchestrator | skipping: [testbed-node-1] 2026-02-08 05:16:37.242663 | orchestrator | skipping: [testbed-node-2] 2026-02-08 05:16:37.242678 | orchestrator | changed: [testbed-node-0] 2026-02-08 05:16:37.242709 | orchestrator | 2026-02-08 05:16:37.242723 | orchestrator | TASK [k3s_server : Deploy metallb manifest] ************************************ 2026-02-08 05:16:37.242733 | orchestrator | Sunday 08 February 2026 05:15:11 +0000 (0:00:02.453) 0:01:06.682 ******* 2026-02-08 05:16:37.242742 | orchestrator | skipping: [testbed-node-0] 2026-02-08 05:16:37.242751 | orchestrator | skipping: [testbed-node-1] 2026-02-08 05:16:37.242781 | orchestrator | skipping: [testbed-node-2] 2026-02-08 05:16:37.242790 | 
orchestrator | 2026-02-08 05:16:37.242799 | orchestrator | TASK [k3s_server : Deploy kube-vip manifest] *********************************** 2026-02-08 05:16:37.242808 | orchestrator | Sunday 08 February 2026 05:15:13 +0000 (0:00:01.429) 0:01:08.111 ******* 2026-02-08 05:16:37.242817 | orchestrator | skipping: [testbed-node-0] 2026-02-08 05:16:37.242825 | orchestrator | skipping: [testbed-node-1] 2026-02-08 05:16:37.242834 | orchestrator | skipping: [testbed-node-2] 2026-02-08 05:16:37.242843 | orchestrator | 2026-02-08 05:16:37.242852 | orchestrator | TASK [k3s_server : Init cluster inside the transient k3s-init service] ********* 2026-02-08 05:16:37.242860 | orchestrator | Sunday 08 February 2026 05:15:14 +0000 (0:00:01.588) 0:01:09.700 ******* 2026-02-08 05:16:37.242869 | orchestrator | changed: [testbed-node-0] 2026-02-08 05:16:37.242878 | orchestrator | changed: [testbed-node-1] 2026-02-08 05:16:37.242887 | orchestrator | changed: [testbed-node-2] 2026-02-08 05:16:37.242895 | orchestrator | 2026-02-08 05:16:37.242904 | orchestrator | TASK [k3s_server : Detect Kubernetes version for label compatibility] ********** 2026-02-08 05:16:37.242913 | orchestrator | Sunday 08 February 2026 05:15:16 +0000 (0:00:02.037) 0:01:11.737 ******* 2026-02-08 05:16:37.242922 | orchestrator | ok: [testbed-node-1] 2026-02-08 05:16:37.242930 | orchestrator | ok: [testbed-node-0] 2026-02-08 05:16:37.242939 | orchestrator | ok: [testbed-node-2] 2026-02-08 05:16:37.242947 | orchestrator | 2026-02-08 05:16:37.242956 | orchestrator | TASK [k3s_server : Set node role label selector based on Kubernetes version] *** 2026-02-08 05:16:37.242965 | orchestrator | Sunday 08 February 2026 05:15:18 +0000 (0:00:01.924) 0:01:13.662 ******* 2026-02-08 05:16:37.242974 | orchestrator | ok: [testbed-node-0] 2026-02-08 05:16:37.242983 | orchestrator | ok: [testbed-node-1] 2026-02-08 05:16:37.242991 | orchestrator | ok: [testbed-node-2] 2026-02-08 05:16:37.243000 | orchestrator | 2026-02-08 05:16:37.243008 
| orchestrator | TASK [k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails)] *** 2026-02-08 05:16:37.243017 | orchestrator | Sunday 08 February 2026 05:15:20 +0000 (0:00:01.519) 0:01:15.182 ******* 2026-02-08 05:16:37.243027 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 2026-02-08 05:16:37.243037 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 2026-02-08 05:16:37.243046 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 2026-02-08 05:16:37.243055 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 2026-02-08 05:16:37.243064 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 2026-02-08 05:16:37.243072 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 
2026-02-08 05:16:37.243081 | orchestrator | ok: [testbed-node-0]
2026-02-08 05:16:37.243090 | orchestrator | ok: [testbed-node-1]
2026-02-08 05:16:37.243098 | orchestrator | ok: [testbed-node-2]
2026-02-08 05:16:37.243107 | orchestrator |
2026-02-08 05:16:37.243116 | orchestrator | TASK [k3s_server : Save logs of k3s-init.service] ******************************
2026-02-08 05:16:37.243125 | orchestrator | Sunday 08 February 2026  05:15:43 +0000 (0:00:23.363)       0:01:38.545 *******
2026-02-08 05:16:37.243133 | orchestrator | skipping: [testbed-node-0]
2026-02-08 05:16:37.243142 | orchestrator | skipping: [testbed-node-1]
2026-02-08 05:16:37.243157 | orchestrator | skipping: [testbed-node-2]
2026-02-08 05:16:37.243166 | orchestrator |
2026-02-08 05:16:37.243175 | orchestrator | TASK [k3s_server : Kill the temporary service used for initialization] *********
2026-02-08 05:16:37.243184 | orchestrator | Sunday 08 February 2026  05:15:44 +0000 (0:00:01.391)       0:01:39.936 *******
2026-02-08 05:16:37.243193 | orchestrator | changed: [testbed-node-0]
2026-02-08 05:16:37.243201 | orchestrator | changed: [testbed-node-2]
2026-02-08 05:16:37.243210 | orchestrator | changed: [testbed-node-1]
2026-02-08 05:16:37.243219 | orchestrator |
2026-02-08 05:16:37.243227 | orchestrator | TASK [k3s_server : Copy K3s service file] **************************************
2026-02-08 05:16:37.243236 | orchestrator | Sunday 08 February 2026  05:15:47 +0000 (0:00:02.934)       0:01:42.870 *******
2026-02-08 05:16:37.243244 | orchestrator | ok: [testbed-node-1]
2026-02-08 05:16:37.243253 | orchestrator | ok: [testbed-node-0]
2026-02-08 05:16:37.243262 | orchestrator | ok: [testbed-node-2]
2026-02-08 05:16:37.243270 | orchestrator |
2026-02-08 05:16:37.243279 | orchestrator | TASK [k3s_server : Enable and check K3s service] *******************************
2026-02-08 05:16:37.243288 | orchestrator | Sunday 08 February 2026  05:15:50 +0000 (0:00:02.282)       0:01:45.153 *******
2026-02-08 05:16:37.243296 | orchestrator | changed: [testbed-node-1]
2026-02-08 05:16:37.243305 | orchestrator | changed: [testbed-node-0]
2026-02-08 05:16:37.243314 | orchestrator | changed: [testbed-node-2]
2026-02-08 05:16:37.243322 | orchestrator |
2026-02-08 05:16:37.243331 | orchestrator | TASK [k3s_server : Wait for node-token] ****************************************
2026-02-08 05:16:37.243340 | orchestrator | Sunday 08 February 2026  05:16:32 +0000 (0:00:42.159)       0:02:27.313 *******
2026-02-08 05:16:37.243348 | orchestrator | ok: [testbed-node-0]
2026-02-08 05:16:37.243357 | orchestrator | ok: [testbed-node-1]
2026-02-08 05:16:37.243373 | orchestrator | ok: [testbed-node-2]
2026-02-08 05:16:37.243382 | orchestrator |
2026-02-08 05:16:37.243391 | orchestrator | TASK [k3s_server : Register node-token file access mode] ***********************
2026-02-08 05:16:37.243400 | orchestrator | Sunday 08 February 2026  05:16:33 +0000 (0:00:01.642)       0:02:28.956 *******
2026-02-08 05:16:37.243408 | orchestrator | ok: [testbed-node-0]
2026-02-08 05:16:37.243417 | orchestrator | ok: [testbed-node-1]
2026-02-08 05:16:37.243426 | orchestrator | ok: [testbed-node-2]
2026-02-08 05:16:37.243434 | orchestrator |
2026-02-08 05:16:37.243443 | orchestrator | TASK [k3s_server : Change file access node-token] ******************************
2026-02-08 05:16:37.243452 | orchestrator | Sunday 08 February 2026  05:16:35 +0000 (0:00:01.684)       0:02:30.641 *******
2026-02-08 05:16:37.243460 | orchestrator | changed: [testbed-node-0]
2026-02-08 05:16:37.243469 | orchestrator | changed: [testbed-node-1]
2026-02-08 05:16:37.243478 | orchestrator | changed: [testbed-node-2]
2026-02-08 05:16:37.243487 | orchestrator |
2026-02-08 05:16:37.243526 | orchestrator | TASK [k3s_server : Read node-token from master] ********************************
2026-02-08 05:17:25.937124 | orchestrator | Sunday 08 February 2026  05:16:37 +0000 (0:00:01.644)       0:02:32.286 *******
2026-02-08 05:17:25.937262 | orchestrator | ok: [testbed-node-0]
2026-02-08 05:17:25.937290 | orchestrator | ok: [testbed-node-1]
2026-02-08 05:17:25.937309 | orchestrator | ok: [testbed-node-2]
2026-02-08 05:17:25.937327 | orchestrator |
2026-02-08 05:17:25.937348 | orchestrator | TASK [k3s_server : Store Master node-token] ************************************
2026-02-08 05:17:25.937368 | orchestrator | Sunday 08 February 2026  05:16:38 +0000 (0:00:01.652)       0:02:33.938 *******
2026-02-08 05:17:25.937387 | orchestrator | ok: [testbed-node-0]
2026-02-08 05:17:25.937406 | orchestrator | ok: [testbed-node-1]
2026-02-08 05:17:25.937417 | orchestrator | ok: [testbed-node-2]
2026-02-08 05:17:25.937428 | orchestrator |
2026-02-08 05:17:25.937440 | orchestrator | TASK [k3s_server : Restore node-token file access] *****************************
2026-02-08 05:17:25.937451 | orchestrator | Sunday 08 February 2026  05:16:40 +0000 (0:00:01.367)       0:02:35.305 *******
2026-02-08 05:17:25.937463 | orchestrator | changed: [testbed-node-0]
2026-02-08 05:17:25.937476 | orchestrator | changed: [testbed-node-1]
2026-02-08 05:17:25.937486 | orchestrator | changed: [testbed-node-2]
2026-02-08 05:17:25.937497 | orchestrator |
2026-02-08 05:17:25.937508 | orchestrator | TASK [k3s_server : Create directory .kube] *************************************
2026-02-08 05:17:25.937576 | orchestrator | Sunday 08 February 2026  05:16:41 +0000 (0:00:01.741)       0:02:37.047 *******
2026-02-08 05:17:25.937603 | orchestrator | ok: [testbed-node-0]
2026-02-08 05:17:25.937615 | orchestrator | ok: [testbed-node-1]
2026-02-08 05:17:25.937625 | orchestrator | ok: [testbed-node-2]
2026-02-08 05:17:25.937636 | orchestrator |
2026-02-08 05:17:25.937647 | orchestrator | TASK [k3s_server : Copy config file to user home directory] ********************
2026-02-08 05:17:25.937661 | orchestrator | Sunday 08 February 2026  05:16:43 +0000 (0:00:01.967)       0:02:39.015 *******
2026-02-08 05:17:25.937673 | orchestrator | changed: [testbed-node-0]
2026-02-08 05:17:25.937687 | orchestrator | changed: [testbed-node-1]
2026-02-08 05:17:25.937700 | orchestrator | changed: [testbed-node-2]
2026-02-08 05:17:25.937713 | orchestrator |
2026-02-08 05:17:25.937726 | orchestrator | TASK [k3s_server : Configure kubectl cluster to https://192.168.16.8:6443] *****
2026-02-08 05:17:25.937739 | orchestrator | Sunday 08 February 2026  05:16:45 +0000 (0:00:01.800)       0:02:40.816 *******
2026-02-08 05:17:25.937752 | orchestrator | changed: [testbed-node-0]
2026-02-08 05:17:25.937764 | orchestrator | changed: [testbed-node-1]
2026-02-08 05:17:25.937777 | orchestrator | changed: [testbed-node-2]
2026-02-08 05:17:25.937789 | orchestrator |
2026-02-08 05:17:25.937801 | orchestrator | TASK [k3s_server : Create kubectl symlink] *************************************
2026-02-08 05:17:25.937814 | orchestrator | Sunday 08 February 2026  05:16:47 +0000 (0:00:01.840)       0:02:42.656 *******
2026-02-08 05:17:25.937827 | orchestrator | skipping: [testbed-node-0]
2026-02-08 05:17:25.937840 | orchestrator | skipping: [testbed-node-1]
2026-02-08 05:17:25.937853 | orchestrator | skipping: [testbed-node-2]
2026-02-08 05:17:25.937865 | orchestrator |
2026-02-08 05:17:25.937878 | orchestrator | TASK [k3s_server : Create crictl symlink] **************************************
2026-02-08 05:17:25.937890 | orchestrator | Sunday 08 February 2026  05:16:48 +0000 (0:00:01.315)       0:02:43.972 *******
2026-02-08 05:17:25.937902 | orchestrator | skipping: [testbed-node-0]
2026-02-08 05:17:25.937915 | orchestrator | skipping: [testbed-node-1]
2026-02-08 05:17:25.937928 | orchestrator | skipping: [testbed-node-2]
2026-02-08 05:17:25.937940 | orchestrator |
2026-02-08 05:17:25.937953 | orchestrator | TASK [k3s_server : Get contents of manifests folder] ***************************
2026-02-08 05:17:25.937966 | orchestrator | Sunday 08 February 2026  05:16:50 +0000 (0:00:01.639)       0:02:45.612 *******
2026-02-08 05:17:25.937978 | orchestrator | ok: [testbed-node-1]
2026-02-08 05:17:25.937992 | orchestrator | ok: [testbed-node-0]
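The node-token sequence above (Register / Change / Read / Restore) follows a common pattern: record a file's current permissions, temporarily widen them so the token can be fetched, then put the original mode back. A hedged, self-contained sketch; the path, modes, and token value below are illustrative stand-ins, not the role's actual values:

```shell
# Sketch of the node-token mode save/restore pattern seen above.
# A temp file stands in for /var/lib/rancher/k3s/server/node-token.
token=$(mktemp)
printf 'dummy-token\n' > "$token"
chmod 0600 "$token"
orig_mode=$(stat -c '%a' "$token")   # "Register node-token file access mode"
chmod 0644 "$token"                  # "Change file access node-token"
node_token=$(cat "$token")           # "Read node-token from master"
chmod "$orig_mode" "$token"          # "Restore node-token file access"
echo "mode=$(stat -c '%a' "$token")"
```

Restoring the registered mode (rather than hard-coding one) is what makes the final "Restore node-token file access" task idempotent against whatever permissions k3s originally set.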
2026-02-08 05:17:25.938005 | orchestrator | ok: [testbed-node-2]
2026-02-08 05:17:25.938065 | orchestrator |
2026-02-08 05:17:25.938077 | orchestrator | TASK [k3s_server : Get sub dirs of manifests folder] ***************************
2026-02-08 05:17:25.938088 | orchestrator | Sunday 08 February 2026  05:16:52 +0000 (0:00:01.656)       0:02:47.269 *******
2026-02-08 05:17:25.938099 | orchestrator | ok: [testbed-node-0]
2026-02-08 05:17:25.938110 | orchestrator | ok: [testbed-node-1]
2026-02-08 05:17:25.938121 | orchestrator | ok: [testbed-node-2]
2026-02-08 05:17:25.938131 | orchestrator |
2026-02-08 05:17:25.938143 | orchestrator | TASK [k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start] ***
2026-02-08 05:17:25.938155 | orchestrator | Sunday 08 February 2026  05:16:53 +0000 (0:00:01.662)       0:02:48.931 *******
2026-02-08 05:17:25.938166 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml)
2026-02-08 05:17:25.938178 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml)
2026-02-08 05:17:25.938188 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml)
2026-02-08 05:17:25.938199 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml)
2026-02-08 05:17:25.938210 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml)
2026-02-08 05:17:25.938221 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml)
2026-02-08 05:17:25.938241 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml)
2026-02-08 05:17:25.938252 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml)
2026-02-08 05:17:25.938263 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml)
2026-02-08 05:17:25.938275 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip.yaml)
2026-02-08 05:17:25.938285 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml)
2026-02-08 05:17:25.938296 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml)
2026-02-08 05:17:25.938327 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml)
2026-02-08 05:17:25.938339 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip-rbac.yaml)
2026-02-08 05:17:25.938350 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml)
2026-02-08 05:17:25.938361 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server)
2026-02-08 05:17:25.938372 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml)
2026-02-08 05:17:25.938383 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server)
2026-02-08 05:17:25.938443 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml)
2026-02-08 05:17:25.938457 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server)
2026-02-08 05:17:25.938468 | orchestrator |
2026-02-08 05:17:25.938479 | orchestrator | PLAY [Deploy k3s worker nodes] *************************************************
2026-02-08 05:17:25.938490 | orchestrator |
2026-02-08 05:17:25.938501 | orchestrator | TASK [k3s_agent : Validating arguments against arg spec 'main' - Setup k3s agents] ***
2026-02-08 05:17:25.938536 | orchestrator | Sunday 08 February 2026  05:16:58 +0000 (0:00:04.613)       0:02:53.545 *******
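The manifest-removal task above deletes the bootstrap-only manifests so k3s does not re-apply them every time the service starts (k3s auto-applies anything left in its manifests directory). A minimal illustrative sketch of the loop; a temp directory stands in for `/var/lib/rancher/k3s/server/manifests`:

```shell
# Sketch of removing bootstrap-only manifests, as in the task above.
# Item names mirror the log; the directory is a disposable stand-in.
manifests=$(mktemp -d)
touch "$manifests/rolebindings.yaml" "$manifests/coredns.yaml" "$manifests/vip.yaml"
mkdir "$manifests/metrics-server"   # sub-directories are removed too
for item in rolebindings.yaml coredns.yaml vip.yaml metrics-server; do
  rm -rf "${manifests:?}/$item"     # :? guards against an empty variable
done
remaining=$(ls -A "$manifests" | wc -l)
echo "remaining=$remaining"
```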
2026-02-08 05:17:25.938547 | orchestrator | ok: [testbed-node-3]
2026-02-08 05:17:25.938558 | orchestrator | ok: [testbed-node-4]
2026-02-08 05:17:25.938569 | orchestrator | ok: [testbed-node-5]
2026-02-08 05:17:25.938580 | orchestrator |
2026-02-08 05:17:25.938591 | orchestrator | TASK [k3s_agent : Check if system is PXE-booted] *******************************
2026-02-08 05:17:25.938601 | orchestrator | Sunday 08 February 2026  05:16:59 +0000 (0:00:01.511)       0:02:55.056 *******
2026-02-08 05:17:25.938612 | orchestrator | ok: [testbed-node-3]
2026-02-08 05:17:25.938623 | orchestrator | ok: [testbed-node-4]
2026-02-08 05:17:25.938634 | orchestrator | ok: [testbed-node-5]
2026-02-08 05:17:25.938644 | orchestrator |
2026-02-08 05:17:25.938655 | orchestrator | TASK [k3s_agent : Set fact for PXE-booted system] ******************************
2026-02-08 05:17:25.938666 | orchestrator | Sunday 08 February 2026  05:17:01 +0000 (0:00:01.799)       0:02:56.856 *******
2026-02-08 05:17:25.938677 | orchestrator | ok: [testbed-node-3]
2026-02-08 05:17:25.938687 | orchestrator | ok: [testbed-node-4]
2026-02-08 05:17:25.938698 | orchestrator | ok: [testbed-node-5]
2026-02-08 05:17:25.938709 | orchestrator |
2026-02-08 05:17:25.938720 | orchestrator | TASK [k3s_agent : Include http_proxy configuration tasks] **********************
2026-02-08 05:17:25.938730 | orchestrator | Sunday 08 February 2026  05:17:03 +0000 (0:00:01.619)       0:02:58.475 *******
2026-02-08 05:17:25.938741 | orchestrator | included: /ansible/roles/k3s_agent/tasks/http_proxy.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-02-08 05:17:25.938752 | orchestrator |
2026-02-08 05:17:25.938763 | orchestrator | TASK [k3s_agent : Create k3s-node.service.d directory] *************************
2026-02-08 05:17:25.938774 | orchestrator | Sunday 08 February 2026  05:17:05 +0000 (0:00:01.635)       0:03:00.111 *******
2026-02-08 05:17:25.938785 | orchestrator | skipping: [testbed-node-3]
2026-02-08 05:17:25.938795 | orchestrator | skipping: [testbed-node-4]
2026-02-08 05:17:25.938806 | orchestrator | skipping: [testbed-node-5]
2026-02-08 05:17:25.938825 | orchestrator |
2026-02-08 05:17:25.938836 | orchestrator | TASK [k3s_agent : Copy K3s http_proxy conf file] *******************************
2026-02-08 05:17:25.938847 | orchestrator | Sunday 08 February 2026  05:17:06 +0000 (0:00:01.374)       0:03:01.486 *******
2026-02-08 05:17:25.938858 | orchestrator | skipping: [testbed-node-3]
2026-02-08 05:17:25.938868 | orchestrator | skipping: [testbed-node-4]
2026-02-08 05:17:25.938879 | orchestrator | skipping: [testbed-node-5]
2026-02-08 05:17:25.938890 | orchestrator |
2026-02-08 05:17:25.938901 | orchestrator | TASK [k3s_agent : Deploy K3s http_proxy conf] **********************************
2026-02-08 05:17:25.938911 | orchestrator | Sunday 08 February 2026  05:17:08 +0000 (0:00:01.607)       0:03:03.094 *******
2026-02-08 05:17:25.938922 | orchestrator | skipping: [testbed-node-3]
2026-02-08 05:17:25.938933 | orchestrator | skipping: [testbed-node-4]
2026-02-08 05:17:25.938944 | orchestrator | skipping: [testbed-node-5]
2026-02-08 05:17:25.938955 | orchestrator |
2026-02-08 05:17:25.938965 | orchestrator | TASK [k3s_agent : Create /etc/rancher/k3s directory] ***************************
2026-02-08 05:17:25.938976 | orchestrator | Sunday 08 February 2026  05:17:09 +0000 (0:00:01.378)       0:03:04.473 *******
2026-02-08 05:17:25.938987 | orchestrator | ok: [testbed-node-3]
2026-02-08 05:17:25.938998 | orchestrator | ok: [testbed-node-4]
2026-02-08 05:17:25.939018 | orchestrator | ok: [testbed-node-5]
2026-02-08 05:17:25.939029 | orchestrator |
2026-02-08 05:17:25.939040 | orchestrator | TASK [k3s_agent : Create custom resolv.conf for k3s] ***************************
2026-02-08 05:17:25.939051 | orchestrator | Sunday 08 February 2026  05:17:11 +0000 (0:00:01.679)       0:03:06.152 *******
2026-02-08 05:17:25.939062 | orchestrator | ok: [testbed-node-3]
2026-02-08 05:17:25.939073 | orchestrator | ok: [testbed-node-4]
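The "Create custom resolv.conf for k3s" step above writes a dedicated resolver file that the kubelet can be pointed at (kubelet's `--resolv-conf` option), instead of the host's systemd-resolved stub. A minimal sketch; the path and nameserver address are assumptions for illustration, not the testbed's actual values:

```shell
# Sketch of a dedicated resolver file for k3s, as in the task above.
# A temp file stands in for something like /etc/rancher/k3s/resolv.conf.
resolv=$(mktemp)
printf 'nameserver 8.8.8.8\n' > "$resolv"   # placeholder upstream DNS
nameservers=$(grep -c '^nameserver' "$resolv")
echo "nameservers=$nameservers"
```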
2026-02-08 05:17:25.939083 | orchestrator | ok: [testbed-node-5]
2026-02-08 05:17:25.939094 | orchestrator |
2026-02-08 05:17:25.939104 | orchestrator | TASK [k3s_agent : Configure the k3s service] ***********************************
2026-02-08 05:17:25.939115 | orchestrator | Sunday 08 February 2026  05:17:13 +0000 (0:00:02.106)       0:03:08.259 *******
2026-02-08 05:17:25.939126 | orchestrator | ok: [testbed-node-3]
2026-02-08 05:17:25.939137 | orchestrator | ok: [testbed-node-4]
2026-02-08 05:17:25.939147 | orchestrator | ok: [testbed-node-5]
2026-02-08 05:17:25.939158 | orchestrator |
2026-02-08 05:17:25.939169 | orchestrator | TASK [k3s_agent : Manage k3s service] ******************************************
2026-02-08 05:17:25.939180 | orchestrator | Sunday 08 February 2026  05:17:15 +0000 (0:00:02.423)       0:03:10.683 *******
2026-02-08 05:17:25.939190 | orchestrator | changed: [testbed-node-3]
2026-02-08 05:17:25.939201 | orchestrator | changed: [testbed-node-4]
2026-02-08 05:17:25.939212 | orchestrator | changed: [testbed-node-5]
2026-02-08 05:17:25.939223 | orchestrator |
2026-02-08 05:17:25.939234 | orchestrator | PLAY [Prepare kubeconfig file] *************************************************
2026-02-08 05:17:25.939245 | orchestrator |
2026-02-08 05:17:25.939255 | orchestrator | TASK [Get home directory of operator user] *************************************
2026-02-08 05:17:25.939266 | orchestrator | Sunday 08 February 2026  05:17:23 +0000 (0:00:08.070)       0:03:18.753 *******
2026-02-08 05:17:25.939277 | orchestrator | ok: [testbed-manager]
2026-02-08 05:17:25.939288 | orchestrator |
2026-02-08 05:17:25.939299 | orchestrator | TASK [Create .kube directory] **************************************************
2026-02-08 05:17:25.939326 | orchestrator | Sunday 08 February 2026  05:17:25 +0000 (0:00:02.236)       0:03:20.990 *******
2026-02-08 05:18:36.287977 | orchestrator | ok: [testbed-manager]
2026-02-08 05:18:36.288084 | orchestrator |
2026-02-08 05:18:36.288101 | orchestrator | TASK [Get kubeconfig file] *****************************************************
2026-02-08 05:18:36.288122 | orchestrator | Sunday 08 February 2026  05:17:27 +0000 (0:00:01.464)       0:03:22.454 *******
2026-02-08 05:18:36.288139 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)]
2026-02-08 05:18:36.288154 | orchestrator |
2026-02-08 05:18:36.288169 | orchestrator | TASK [Write kubeconfig file] ***************************************************
2026-02-08 05:18:36.288183 | orchestrator | Sunday 08 February 2026  05:17:29 +0000 (0:00:02.035)       0:03:24.490 *******
2026-02-08 05:18:36.288199 | orchestrator | changed: [testbed-manager]
2026-02-08 05:18:36.288217 | orchestrator |
2026-02-08 05:18:36.288258 | orchestrator | TASK [Change server address in the kubeconfig] *********************************
2026-02-08 05:18:36.288271 | orchestrator | Sunday 08 February 2026  05:17:31 +0000 (0:00:02.006)       0:03:26.496 *******
2026-02-08 05:18:36.288281 | orchestrator | changed: [testbed-manager]
2026-02-08 05:18:36.288291 | orchestrator |
2026-02-08 05:18:36.288300 | orchestrator | TASK [Make kubeconfig available for use inside the manager service] ************
2026-02-08 05:18:36.288324 | orchestrator | Sunday 08 February 2026  05:17:33 +0000 (0:00:01.570)       0:03:28.067 *******
2026-02-08 05:18:36.288334 | orchestrator | changed: [testbed-manager -> localhost]
2026-02-08 05:18:36.288344 | orchestrator |
2026-02-08 05:18:36.288353 | orchestrator | TASK [Change server address in the kubeconfig inside the manager service] ******
2026-02-08 05:18:36.288363 | orchestrator | Sunday 08 February 2026  05:17:36 +0000 (0:00:03.025)       0:03:31.093 *******
2026-02-08 05:18:36.288373 | orchestrator | changed: [testbed-manager -> localhost]
2026-02-08 05:18:36.288383 | orchestrator |
2026-02-08 05:18:36.288392 | orchestrator | TASK [Set KUBECONFIG environment variable] *************************************
2026-02-08 05:18:36.288402 | orchestrator | Sunday 08 February 2026  05:17:37 +0000 (0:00:01.861)       0:03:32.955 *******
2026-02-08 05:18:36.288412 | orchestrator | ok: [testbed-manager]
2026-02-08 05:18:36.288422 | orchestrator |
2026-02-08 05:18:36.288432 | orchestrator | TASK [Enable kubectl command line completion] **********************************
2026-02-08 05:18:36.288442 | orchestrator | Sunday 08 February 2026  05:17:39 +0000 (0:00:01.439)       0:03:34.394 *******
2026-02-08 05:18:36.288451 | orchestrator | ok: [testbed-manager]
2026-02-08 05:18:36.288461 | orchestrator |
2026-02-08 05:18:36.288471 | orchestrator | PLAY [Apply role kubectl] ******************************************************
2026-02-08 05:18:36.288480 | orchestrator |
2026-02-08 05:18:36.288490 | orchestrator | TASK [kubectl : Gather variables for each operating system] ********************
2026-02-08 05:18:36.288500 | orchestrator | Sunday 08 February 2026  05:17:40 +0000 (0:00:01.633)       0:03:36.028 *******
2026-02-08 05:18:36.288509 | orchestrator | ok: [testbed-manager]
2026-02-08 05:18:36.288545 | orchestrator |
2026-02-08 05:18:36.288558 | orchestrator | TASK [kubectl : Include distribution specific install tasks] *******************
2026-02-08 05:18:36.288590 | orchestrator | Sunday 08 February 2026  05:17:42 +0000 (0:00:01.137)       0:03:37.165 *******
2026-02-08 05:18:36.288606 | orchestrator | included: /ansible/roles/kubectl/tasks/install-Debian-family.yml for testbed-manager
2026-02-08 05:18:36.288622 | orchestrator |
2026-02-08 05:18:36.288638 | orchestrator | TASK [kubectl : Remove old architecture-dependent repository] ******************
2026-02-08 05:18:36.288653 | orchestrator | Sunday 08 February 2026  05:17:43 +0000 (0:00:01.452)       0:03:38.619 *******
2026-02-08 05:18:36.288686 | orchestrator | ok: [testbed-manager]
2026-02-08 05:18:36.288703 | orchestrator |
2026-02-08 05:18:36.288721 | orchestrator | TASK [kubectl : Install apt-transport-https package] ***************************
2026-02-08 05:18:36.288738 | orchestrator | Sunday 08 February 2026  05:17:45 +0000 (0:00:01.845)       0:03:40.464 *******
2026-02-08 05:18:36.288754 | orchestrator | ok: [testbed-manager]
2026-02-08 05:18:36.288768 | orchestrator |
2026-02-08 05:18:36.288778 | orchestrator | TASK [kubectl : Add repository gpg key] ****************************************
2026-02-08 05:18:36.288788 | orchestrator | Sunday 08 February 2026  05:17:48 +0000 (0:00:02.656)       0:03:43.121 *******
2026-02-08 05:18:36.288797 | orchestrator | ok: [testbed-manager]
2026-02-08 05:18:36.288807 | orchestrator |
2026-02-08 05:18:36.288817 | orchestrator | TASK [kubectl : Set permissions of gpg key] ************************************
2026-02-08 05:18:36.288826 | orchestrator | Sunday 08 February 2026  05:17:49 +0000 (0:00:01.411)       0:03:44.533 *******
2026-02-08 05:18:36.288836 | orchestrator | ok: [testbed-manager]
2026-02-08 05:18:36.288846 | orchestrator |
2026-02-08 05:18:36.288855 | orchestrator | TASK [kubectl : Add repository Debian] *****************************************
2026-02-08 05:18:36.288866 | orchestrator | Sunday 08 February 2026  05:17:50 +0000 (0:00:01.523)       0:03:46.056 *******
2026-02-08 05:18:36.288875 | orchestrator | ok: [testbed-manager]
2026-02-08 05:18:36.288885 | orchestrator |
2026-02-08 05:18:36.288895 | orchestrator | TASK [kubectl : Install required packages] *************************************
2026-02-08 05:18:36.288916 | orchestrator | Sunday 08 February 2026  05:17:52 +0000 (0:00:01.627)       0:03:47.684 *******
2026-02-08 05:18:36.288926 | orchestrator | ok: [testbed-manager]
2026-02-08 05:18:36.288935 | orchestrator |
2026-02-08 05:18:36.288945 | orchestrator | TASK [kubectl : Remove kubectl symlink] ****************************************
2026-02-08 05:18:36.288954 | orchestrator | Sunday 08 February 2026  05:17:55 +0000 (0:00:02.462)       0:03:50.146 *******
2026-02-08 05:18:36.288964 | orchestrator | ok: [testbed-manager]
2026-02-08 05:18:36.288973 | orchestrator |
2026-02-08 05:18:36.288983 | orchestrator | PLAY [Run post actions on master nodes] ****************************************
2026-02-08 05:18:36.288993 | orchestrator |
2026-02-08 05:18:36.289002 | orchestrator | TASK [k3s_server_post : Validating arguments against arg spec 'main' - Configure k3s cluster] ***
2026-02-08 05:18:36.289019 | orchestrator | Sunday 08 February 2026  05:17:56 +0000 (0:00:01.738)       0:03:51.885 *******
2026-02-08 05:18:36.289036 | orchestrator | ok: [testbed-node-0]
2026-02-08 05:18:36.289082 | orchestrator | ok: [testbed-node-1]
2026-02-08 05:18:36.289097 | orchestrator | ok: [testbed-node-2]
2026-02-08 05:18:36.289113 | orchestrator |
2026-02-08 05:18:36.289130 | orchestrator | TASK [k3s_server_post : Deploy calico] *****************************************
2026-02-08 05:18:36.289147 | orchestrator | Sunday 08 February 2026  05:17:58 +0000 (0:00:01.343)       0:03:53.229 *******
2026-02-08 05:18:36.289164 | orchestrator | skipping: [testbed-node-0]
2026-02-08 05:18:36.289175 | orchestrator | skipping: [testbed-node-1]
2026-02-08 05:18:36.289185 | orchestrator | skipping: [testbed-node-2]
2026-02-08 05:18:36.289194 | orchestrator |
2026-02-08 05:18:36.289223 | orchestrator | TASK [k3s_server_post : Deploy cilium] *****************************************
2026-02-08 05:18:36.289233 | orchestrator | Sunday 08 February 2026  05:17:59 +0000 (0:00:01.735)       0:03:54.964 *******
2026-02-08 05:18:36.289243 | orchestrator | included: /ansible/roles/k3s_server_post/tasks/cilium.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-08 05:18:36.289253 | orchestrator |
2026-02-08 05:18:36.289263 | orchestrator | TASK [k3s_server_post : Create tmp directory on first master] ******************
2026-02-08 05:18:36.289272 | orchestrator | Sunday 08 February 2026  05:18:01 +0000 (0:00:01.861)       0:03:56.825 *******
2026-02-08 05:18:36.289282 | orchestrator | changed: [testbed-node-0 -> localhost]
2026-02-08 05:18:36.289292 | orchestrator |
2026-02-08 05:18:36.289301 | orchestrator | TASK [k3s_server_post : Wait for connectivity to kube VIP] *********************
2026-02-08 05:18:36.289311 | orchestrator | Sunday 08 February 2026  05:18:03 +0000 (0:00:01.842)       0:03:58.668 *******
2026-02-08 05:18:36.289321 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-02-08 05:18:36.289330 | orchestrator |
2026-02-08 05:18:36.289340 | orchestrator | TASK [k3s_server_post : Fail if kube VIP not reachable] ************************
2026-02-08 05:18:36.289349 | orchestrator | Sunday 08 February 2026  05:18:05 +0000 (0:00:01.907)       0:04:00.575 *******
2026-02-08 05:18:36.289359 | orchestrator | skipping: [testbed-node-0]
2026-02-08 05:18:36.289368 | orchestrator |
2026-02-08 05:18:36.289378 | orchestrator | TASK [k3s_server_post : Test for existing Cilium install] **********************
2026-02-08 05:18:36.289387 | orchestrator | Sunday 08 February 2026  05:18:06 +0000 (0:00:01.148)       0:04:01.724 *******
2026-02-08 05:18:36.289397 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-02-08 05:18:36.289407 | orchestrator |
2026-02-08 05:18:36.289416 | orchestrator | TASK [k3s_server_post : Check Cilium version] **********************************
2026-02-08 05:18:36.289426 | orchestrator | Sunday 08 February 2026  05:18:08 +0000 (0:00:02.082)       0:04:03.806 *******
2026-02-08 05:18:36.289435 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-02-08 05:18:36.289445 | orchestrator |
2026-02-08 05:18:36.289454 | orchestrator | TASK [k3s_server_post : Parse installed Cilium version] ************************
2026-02-08 05:18:36.289464 | orchestrator | Sunday 08 February 2026  05:18:11 +0000 (0:00:02.312)       0:04:06.119 *******
2026-02-08 05:18:36.289473 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-02-08 05:18:36.289483 | orchestrator |
2026-02-08 05:18:36.289493 | orchestrator | TASK [k3s_server_post : Determine if Cilium needs update] **********************
2026-02-08 05:18:36.289502 | orchestrator | Sunday 08 February 2026  05:18:12 +0000 (0:00:01.188)       0:04:07.307 *******
2026-02-08 05:18:36.289577 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-02-08 05:18:36.289589 | orchestrator |
2026-02-08 05:18:36.289599 | orchestrator | TASK [k3s_server_post : Log result] ********************************************
2026-02-08 05:18:36.289608 | orchestrator | Sunday 08 February 2026  05:18:13 +0000 (0:00:01.187)       0:04:08.494 *******
2026-02-08 05:18:36.289618 | orchestrator | ok: [testbed-node-0 -> localhost] => {
2026-02-08 05:18:36.289628 | orchestrator |     "msg": "Installed Cilium version: 1.18.2, Target Cilium version: v1.18.2, Update needed: False\n"
2026-02-08 05:18:36.289639 | orchestrator | }
2026-02-08 05:18:36.289649 | orchestrator |
2026-02-08 05:18:36.289658 | orchestrator | TASK [k3s_server_post : Install Cilium] ****************************************
2026-02-08 05:18:36.289668 | orchestrator | Sunday 08 February 2026  05:18:14 +0000 (0:00:01.173)       0:04:09.668 *******
2026-02-08 05:18:36.289677 | orchestrator | skipping: [testbed-node-0]
2026-02-08 05:18:36.289687 | orchestrator |
2026-02-08 05:18:36.289697 | orchestrator | TASK [k3s_server_post : Wait for Cilium resources] *****************************
2026-02-08 05:18:36.289706 | orchestrator | Sunday 08 February 2026  05:18:15 +0000 (0:00:01.156)       0:04:10.825 *******
2026-02-08 05:18:36.289716 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/cilium-operator)
2026-02-08 05:18:36.289726 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=daemonset/cilium)
2026-02-08 05:18:36.289735 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-relay)
2026-02-08 05:18:36.289745 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-ui)
2026-02-08 05:18:36.289754 | orchestrator |
2026-02-08 05:18:36.289764 | orchestrator | TASK [k3s_server_post : Set _cilium_bgp_neighbors fact] ************************
2026-02-08 05:18:36.289774 | orchestrator | Sunday 08 February 2026  05:18:21 +0000 (0:00:05.721)       0:04:16.547 *******
2026-02-08 05:18:36.289783 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-02-08 05:18:36.289793 | orchestrator |
2026-02-08 05:18:36.289803 | orchestrator | TASK [k3s_server_post : Copy BGP manifests to first master] ********************
2026-02-08 05:18:36.289812 | orchestrator | Sunday 08 February 2026  05:18:23 +0000 (0:00:02.449)       0:04:18.997 *******
2026-02-08 05:18:36.289822 | orchestrator | changed: [testbed-node-0 -> localhost]
2026-02-08 05:18:36.289831 | orchestrator |
2026-02-08 05:18:36.289841 | orchestrator | TASK [k3s_server_post : Apply BGP manifests] ***********************************
2026-02-08 05:18:36.289861 | orchestrator | Sunday 08 February 2026  05:18:26 +0000 (0:00:02.594)       0:04:21.591 *******
2026-02-08 05:18:36.289871 | orchestrator | changed: [testbed-node-0 -> localhost]
2026-02-08 05:18:36.289881 | orchestrator |
2026-02-08 05:18:36.289891 | orchestrator | TASK [k3s_server_post : Print error message if BGP manifests application fails] ***
2026-02-08 05:18:36.289900 | orchestrator | Sunday 08 February 2026  05:18:30 +0000 (0:00:04.280)       0:04:25.871 *******
2026-02-08 05:18:36.289910 | orchestrator | skipping: [testbed-node-0]
2026-02-08 05:18:36.289920 | orchestrator |
2026-02-08 05:18:36.289929 | orchestrator | TASK [k3s_server_post : Test for BGP config resources] *************************
2026-02-08 05:18:36.289945 | orchestrator | Sunday 08 February 2026  05:18:31 +0000 (0:00:01.116)       0:04:26.988 *******
2026-02-08 05:18:36.289962 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=kubectl get CiliumBGPPeeringPolicy.cilium.io)
2026-02-08 05:18:36.289978 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=kubectl get CiliumLoadBalancerIPPool.cilium.io)
2026-02-08 05:18:36.289994 | orchestrator |
2026-02-08 05:18:36.290010 | orchestrator | TASK [k3s_server_post : Deploy metallb pool] ***********************************
2026-02-08 05:18:36.290086 | orchestrator | Sunday 08 February 2026  05:18:34 +0000 (0:00:02.930)       0:04:29.919 *******
2026-02-08 05:18:36.290105 | orchestrator | skipping: [testbed-node-0]
2026-02-08 05:18:36.290135 | orchestrator | skipping: [testbed-node-1]
2026-02-08 05:19:01.596804 | orchestrator | skipping: [testbed-node-2]
2026-02-08 05:19:01.596924 | orchestrator |
2026-02-08 05:19:01.596943 | orchestrator | TASK [k3s_server_post : Remove tmp directory used for manifests] ***************
2026-02-08 05:19:01.596956 | orchestrator | Sunday 08 February 2026  05:18:36 +0000 (0:00:01.419)       0:04:31.338 *******
2026-02-08 05:19:01.596992 | orchestrator | ok: [testbed-node-0]
2026-02-08 05:19:01.597005 | orchestrator | ok: [testbed-node-1]
2026-02-08 05:19:01.597016 | orchestrator | ok: [testbed-node-2]
2026-02-08 05:19:01.597027 | orchestrator |
2026-02-08 05:19:01.597038 | orchestrator | PLAY [Apply role k9s] **********************************************************
2026-02-08 05:19:01.597049 | orchestrator |
2026-02-08 05:19:01.597064 | orchestrator | TASK [k9s : Gather variables for each operating system] ************************
2026-02-08 05:19:01.597083 | orchestrator | Sunday 08 February 2026  05:18:38 +0000 (0:00:02.127)       0:04:33.465 *******
2026-02-08 05:19:01.597102 | orchestrator | ok: [testbed-manager]
2026-02-08 05:19:01.597122 | orchestrator |
2026-02-08 05:19:01.597143 | orchestrator | TASK [k9s : Include distribution specific install tasks] ***********************
2026-02-08 05:19:01.597163 | orchestrator | Sunday 08 February 2026  05:18:39 +0000 (0:00:01.094)       0:04:34.560 *******
2026-02-08 05:19:01.597230 | orchestrator | included: /ansible/roles/k9s/tasks/install-Debian-family.yml for testbed-manager
2026-02-08 05:19:01.597244 | orchestrator |
2026-02-08 05:19:01.597255 | orchestrator | TASK [k9s : Install k9s packages] **********************************************
2026-02-08 05:19:01.597266 | orchestrator | Sunday 08 February 2026  05:18:41 +0000 (0:00:01.512)       0:04:36.072 *******
2026-02-08 05:19:01.597277 | orchestrator | ok: [testbed-manager]
2026-02-08 05:19:01.597288 | orchestrator |
2026-02-08 05:19:01.597299 | orchestrator | PLAY [Manage labels, annotations, and taints on all k3s nodes] *****************
2026-02-08 05:19:01.597310 | orchestrator |
2026-02-08 05:19:01.597321 | orchestrator | TASK [Merge labels, annotations, and taints] ***********************************
2026-02-08 05:19:01.597332 | orchestrator | Sunday 08 February 2026  05:18:45 +0000 (0:00:04.702)       0:04:40.775 *******
2026-02-08 05:19:01.597346 | orchestrator | ok: [testbed-node-3]
2026-02-08 05:19:01.597359 | orchestrator | ok: [testbed-node-4]
2026-02-08 05:19:01.597371 | orchestrator | ok: [testbed-node-5]
2026-02-08 05:19:01.597384 | orchestrator | ok: [testbed-node-0]
2026-02-08 05:19:01.597396 | orchestrator | ok: [testbed-node-1]
2026-02-08 05:19:01.597408 | orchestrator | ok: [testbed-node-2]
2026-02-08 05:19:01.597421 | orchestrator |
2026-02-08 05:19:01.597433 | orchestrator | TASK [Manage labels] ***********************************************************
2026-02-08 05:19:01.597446 | orchestrator | Sunday 08 February 2026  05:18:47 +0000 (0:00:01.911)       0:04:42.686 *******
2026-02-08 05:19:01.597458 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/compute-plane=true)
2026-02-08 05:19:01.597471 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/compute-plane=true)
2026-02-08 05:19:01.597484 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/compute-plane=true)
2026-02-08 05:19:01.597497 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/control-plane=true)
2026-02-08 05:19:01.597509 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/control-plane=true)
2026-02-08 05:19:01.597553 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/control-plane=true)
2026-02-08 05:19:01.597567 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.kubernetes.io/worker=worker)
2026-02-08 05:19:01.597580 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2026-02-08 05:19:01.597593 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2026-02-08 05:19:01.597606 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=openstack-control-plane=enabled) 2026-02-08 05:19:01.597618 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=openstack-control-plane=enabled) 2026-02-08 05:19:01.597631 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=openstack-control-plane=enabled) 2026-02-08 05:19:01.597644 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2026-02-08 05:19:01.597656 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2026-02-08 05:19:01.597669 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2026-02-08 05:19:01.597692 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2026-02-08 05:19:01.597706 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2026-02-08 05:19:01.597719 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2026-02-08 05:19:01.597730 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2026-02-08 05:19:01.597741 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2026-02-08 05:19:01.597752 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2026-02-08 05:19:01.597763 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2026-02-08 05:19:01.597773 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2026-02-08 
05:19:01.597784 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2026-02-08 05:19:01.597795 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2026-02-08 05:19:01.597806 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2026-02-08 05:19:01.597836 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2026-02-08 05:19:01.597848 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2026-02-08 05:19:01.597859 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2026-02-08 05:19:01.597870 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2026-02-08 05:19:01.597881 | orchestrator | 2026-02-08 05:19:01.597892 | orchestrator | TASK [Manage annotations] ****************************************************** 2026-02-08 05:19:01.597902 | orchestrator | Sunday 08 February 2026 05:18:56 +0000 (0:00:09.036) 0:04:51.723 ******* 2026-02-08 05:19:01.597913 | orchestrator | skipping: [testbed-node-3] 2026-02-08 05:19:01.597925 | orchestrator | skipping: [testbed-node-4] 2026-02-08 05:19:01.597936 | orchestrator | skipping: [testbed-node-5] 2026-02-08 05:19:01.597947 | orchestrator | skipping: [testbed-node-0] 2026-02-08 05:19:01.597958 | orchestrator | skipping: [testbed-node-1] 2026-02-08 05:19:01.597969 | orchestrator | skipping: [testbed-node-2] 2026-02-08 05:19:01.597980 | orchestrator | 2026-02-08 05:19:01.597991 | orchestrator | TASK [Manage taints] *********************************************************** 2026-02-08 05:19:01.598002 | orchestrator | Sunday 08 February 2026 05:18:58 +0000 (0:00:01.849) 0:04:53.572 ******* 2026-02-08 05:19:01.598013 | orchestrator | skipping: [testbed-node-3] 2026-02-08 05:19:01.598084 | orchestrator | skipping: [testbed-node-4] 
2026-02-08 05:19:01.598095 | orchestrator | skipping: [testbed-node-5]
2026-02-08 05:19:01.598140 | orchestrator | skipping: [testbed-node-0]
2026-02-08 05:19:01.598161 | orchestrator | skipping: [testbed-node-1]
2026-02-08 05:19:01.598181 | orchestrator | skipping: [testbed-node-2]
2026-02-08 05:19:01.598201 | orchestrator |
2026-02-08 05:19:01.598220 | orchestrator | PLAY RECAP *********************************************************************
2026-02-08 05:19:01.598239 | orchestrator | testbed-manager : ok=21  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-08 05:19:01.598305 | orchestrator | testbed-node-0 : ok=53  changed=14  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0
2026-02-08 05:19:01.598318 | orchestrator | testbed-node-1 : ok=38  changed=9  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0
2026-02-08 05:19:01.598330 | orchestrator | testbed-node-2 : ok=38  changed=9  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0
2026-02-08 05:19:01.598341 | orchestrator | testbed-node-3 : ok=16  changed=1  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0
2026-02-08 05:19:01.598362 | orchestrator | testbed-node-4 : ok=16  changed=1  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0
2026-02-08 05:19:01.598373 | orchestrator | testbed-node-5 : ok=16  changed=1  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0
2026-02-08 05:19:01.598384 | orchestrator |
2026-02-08 05:19:01.598395 | orchestrator |
2026-02-08 05:19:01.598406 | orchestrator | TASKS RECAP ********************************************************************
2026-02-08 05:19:01.598417 | orchestrator | Sunday 08 February 2026 05:19:01 +0000 (0:00:03.058) 0:04:56.630 *******
2026-02-08 05:19:01.598427 | orchestrator | ===============================================================================
2026-02-08 05:19:01.598438 | orchestrator | k3s_server : Enable and check K3s service ------------------------------ 42.16s
2026-02-08 05:19:01.598449 | orchestrator | k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails) -- 23.36s
2026-02-08 05:19:01.598461 | orchestrator | Manage labels ----------------------------------------------------------- 9.04s
2026-02-08 05:19:01.598472 | orchestrator | k3s_agent : Manage k3s service ------------------------------------------ 8.07s
2026-02-08 05:19:01.598483 | orchestrator | k3s_server_post : Wait for Cilium resources ----------------------------- 5.72s
2026-02-08 05:19:01.598493 | orchestrator | k9s : Install k9s packages ---------------------------------------------- 4.70s
2026-02-08 05:19:01.598504 | orchestrator | k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start --- 4.61s
2026-02-08 05:19:01.598515 | orchestrator | k3s_server_post : Apply BGP manifests ----------------------------------- 4.28s
2026-02-08 05:19:01.598559 | orchestrator | k3s_prereq : Validating arguments against arg spec 'main' - Prerequisites --- 3.96s
2026-02-08 05:19:01.598571 | orchestrator | Manage taints ----------------------------------------------------------- 3.06s
2026-02-08 05:19:01.598582 | orchestrator | Make kubeconfig available for use inside the manager service ------------ 3.03s
2026-02-08 05:19:01.598592 | orchestrator | k3s_server : Kill the temporary service used for initialization --------- 2.93s
2026-02-08 05:19:01.598603 | orchestrator | k3s_server_post : Test for BGP config resources ------------------------- 2.93s
2026-02-08 05:19:01.598614 | orchestrator | k3s_download : Download k3s binary x64 ---------------------------------- 2.80s
2026-02-08 05:19:01.598625 | orchestrator | k3s_prereq : Enable IPv4 forwarding ------------------------------------- 2.74s
2026-02-08 05:19:01.598636 | orchestrator | k3s_custom_registries : Remove /etc/rancher/k3s/registries.yaml when no registries configured --- 2.69s
2026-02-08 05:19:01.598647 | orchestrator | kubectl : Install apt-transport-https package --------------------------- 2.66s
2026-02-08 05:19:01.598658 | orchestrator | k3s_prereq : Add /usr/local/bin to sudo secure_path --------------------- 2.63s
2026-02-08 05:19:01.598679 | orchestrator | k3s_server_post : Copy BGP manifests to first master -------------------- 2.59s
2026-02-08 05:19:02.073569 | orchestrator | k3s_prereq : Enable IPv6 forwarding ------------------------------------- 2.51s
2026-02-08 05:19:02.585004 | orchestrator | + [[ false == \f\a\l\s\e ]]
2026-02-08 05:19:02.585184 | orchestrator | + sh -c /opt/configuration/scripts/upgrade/200-infrastructure.sh
2026-02-08 05:19:02.592933 | orchestrator | + set -e
2026-02-08 05:19:02.592998 | orchestrator | + source /opt/configuration/scripts/include.sh
2026-02-08 05:19:02.593012 | orchestrator | ++ export INTERACTIVE=false
2026-02-08 05:19:02.593025 | orchestrator | ++ INTERACTIVE=false
2026-02-08 05:19:02.593036 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2026-02-08 05:19:02.593047 | orchestrator | ++ OSISM_APPLY_RETRY=1
2026-02-08 05:19:02.593058 | orchestrator | + osism apply openstackclient
2026-02-08 05:19:14.729932 | orchestrator | 2026-02-08 05:19:14 | INFO  | Task 46a85201-5492-48f2-bef8-642e2a8f1cb2 (openstackclient) was prepared for execution.
2026-02-08 05:19:14.730116 | orchestrator | 2026-02-08 05:19:14 | INFO  | It takes a moment until task 46a85201-5492-48f2-bef8-642e2a8f1cb2 (openstackclient) has been started and output is visible here.
2026-02-08 05:19:47.917643 | orchestrator |
2026-02-08 05:19:47.917757 | orchestrator | PLAY [Apply role openstackclient] **********************************************
2026-02-08 05:19:47.917769 | orchestrator |
2026-02-08 05:19:47.917777 | orchestrator | TASK [osism.services.openstackclient : Include tasks] **************************
2026-02-08 05:19:47.917784 | orchestrator | Sunday 08 February 2026 05:19:21 +0000 (0:00:01.949) 0:00:01.949 *******
2026-02-08 05:19:47.917792 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/openstackclient/tasks/container-Debian-family.yml for testbed-manager
2026-02-08 05:19:47.917800 | orchestrator |
2026-02-08 05:19:47.917806 | orchestrator | TASK [osism.services.openstackclient : Create required directories] ************
2026-02-08 05:19:47.917813 | orchestrator | Sunday 08 February 2026 05:19:22 +0000 (0:00:01.676) 0:00:03.625 *******
2026-02-08 05:19:47.917820 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/openstack)
2026-02-08 05:19:47.917829 | orchestrator | ok: [testbed-manager] => (item=/opt/openstackclient/data)
2026-02-08 05:19:47.917835 | orchestrator | ok: [testbed-manager] => (item=/opt/openstackclient)
2026-02-08 05:19:47.917843 | orchestrator |
2026-02-08 05:19:47.917849 | orchestrator | TASK [osism.services.openstackclient : Copy docker-compose.yml file] ***********
2026-02-08 05:19:47.917856 | orchestrator | Sunday 08 February 2026 05:19:24 +0000 (0:00:01.866) 0:00:05.491 *******
2026-02-08 05:19:47.917862 | orchestrator | changed: [testbed-manager]
2026-02-08 05:19:47.917869 | orchestrator |
2026-02-08 05:19:47.917875 | orchestrator | TASK [osism.services.openstackclient : Manage openstackclient service] *********
2026-02-08 05:19:47.917882 | orchestrator | Sunday 08 February 2026 05:19:26 +0000 (0:00:02.111) 0:00:07.603 *******
2026-02-08 05:19:47.917888 | orchestrator | ok: [testbed-manager]
2026-02-08 05:19:47.917896 | orchestrator |
2026-02-08 05:19:47.917903 | orchestrator | TASK [osism.services.openstackclient : Copy openstack wrapper script] **********
2026-02-08 05:19:47.917910 | orchestrator | Sunday 08 February 2026 05:19:28 +0000 (0:00:02.060) 0:00:09.664 *******
2026-02-08 05:19:47.917916 | orchestrator | ok: [testbed-manager]
2026-02-08 05:19:47.917923 | orchestrator |
2026-02-08 05:19:47.917930 | orchestrator | TASK [osism.services.openstackclient : Remove ospurge wrapper script] **********
2026-02-08 05:19:47.917937 | orchestrator | Sunday 08 February 2026 05:19:30 +0000 (0:00:01.965) 0:00:11.629 *******
2026-02-08 05:19:47.917943 | orchestrator | ok: [testbed-manager]
2026-02-08 05:19:47.917950 | orchestrator |
2026-02-08 05:19:47.917956 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Restart openstackclient service] ***
2026-02-08 05:19:47.917962 | orchestrator | Sunday 08 February 2026 05:19:32 +0000 (0:00:01.445) 0:00:13.075 *******
2026-02-08 05:19:47.917969 | orchestrator | changed: [testbed-manager]
2026-02-08 05:19:47.917975 | orchestrator |
2026-02-08 05:19:47.917981 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Ensure that all containers are up] ***
2026-02-08 05:19:47.917987 | orchestrator | Sunday 08 February 2026 05:19:42 +0000 (0:00:09.855) 0:00:22.931 *******
2026-02-08 05:19:47.917993 | orchestrator | changed: [testbed-manager]
2026-02-08 05:19:47.918000 | orchestrator |
2026-02-08 05:19:47.918006 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Wait for an healthy service] ***
2026-02-08 05:19:47.918012 | orchestrator | Sunday 08 February 2026 05:19:44 +0000 (0:00:01.965) 0:00:24.897 *******
2026-02-08 05:19:47.918065 | orchestrator | changed: [testbed-manager]
2026-02-08 05:19:47.918073 | orchestrator |
2026-02-08 05:19:47.918079 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Copy bash completion script] ***
2026-02-08 05:19:47.918086 | orchestrator | Sunday 08 February 2026 05:19:45 +0000 (0:00:01.613) 0:00:26.510 *******
2026-02-08 05:19:47.918093 | orchestrator | ok: [testbed-manager]
2026-02-08 05:19:47.918099 | orchestrator |
2026-02-08 05:19:47.918105 | orchestrator | PLAY RECAP *********************************************************************
2026-02-08 05:19:47.918112 | orchestrator | testbed-manager : ok=10  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-08 05:19:47.918119 | orchestrator |
2026-02-08 05:19:47.918125 | orchestrator |
2026-02-08 05:19:47.918151 | orchestrator | TASKS RECAP ********************************************************************
2026-02-08 05:19:47.918159 | orchestrator | Sunday 08 February 2026 05:19:47 +0000 (0:00:01.878) 0:00:28.389 *******
2026-02-08 05:19:47.918165 | orchestrator | ===============================================================================
2026-02-08 05:19:47.918172 | orchestrator | osism.services.openstackclient : Restart openstackclient service -------- 9.86s
2026-02-08 05:19:47.918179 | orchestrator | osism.services.openstackclient : Copy docker-compose.yml file ----------- 2.11s
2026-02-08 05:19:47.918185 | orchestrator | osism.services.openstackclient : Manage openstackclient service --------- 2.06s
2026-02-08 05:19:47.918192 | orchestrator | osism.services.openstackclient : Ensure that all containers are up ------ 1.97s
2026-02-08 05:19:47.918197 | orchestrator | osism.services.openstackclient : Copy openstack wrapper script ---------- 1.97s
2026-02-08 05:19:47.918204 | orchestrator | osism.services.openstackclient : Copy bash completion script ------------ 1.88s
2026-02-08 05:19:47.918210 | orchestrator | osism.services.openstackclient : Create required directories ------------ 1.87s
2026-02-08 05:19:47.918217 | orchestrator | osism.services.openstackclient : Include tasks -------------------------- 1.68s
2026-02-08 05:19:47.918223 | orchestrator | osism.services.openstackclient : Wait for an healthy service ------------ 1.61s
2026-02-08 05:19:47.918229 | orchestrator | osism.services.openstackclient : Remove ospurge wrapper script ---------- 1.45s
2026-02-08 05:19:48.241169 | orchestrator | + osism apply -a upgrade common
2026-02-08 05:19:50.348259 | orchestrator | 2026-02-08 05:19:50 | INFO  | Task ad660914-95bf-4785-8826-6b9a6894ccf5 (common) was prepared for execution.
2026-02-08 05:19:50.348337 | orchestrator | 2026-02-08 05:19:50 | INFO  | It takes a moment until task ad660914-95bf-4785-8826-6b9a6894ccf5 (common) has been started and output is visible here.
2026-02-08 05:20:11.016683 | orchestrator |
2026-02-08 05:20:11.016803 | orchestrator | PLAY [Apply role common] *******************************************************
2026-02-08 05:20:11.016821 | orchestrator |
2026-02-08 05:20:11.016855 | orchestrator | TASK [common : include_tasks] **************************************************
2026-02-08 05:20:11.016875 | orchestrator | Sunday 08 February 2026 05:19:57 +0000 (0:00:02.631) 0:00:02.631 *******
2026-02-08 05:20:11.016895 | orchestrator | included: /ansible/roles/common/tasks/upgrade.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-02-08 05:20:11.016915 | orchestrator |
2026-02-08 05:20:11.016934 | orchestrator | TASK [common : Ensuring config directories exist] ******************************
2026-02-08 05:20:11.017001 | orchestrator | Sunday 08 February 2026 05:20:01 +0000 (0:00:04.127) 0:00:06.759 *******
2026-02-08 05:20:11.017016 | orchestrator | ok: [testbed-node-0] => (item=[{'service_name': 'cron'}, 'cron'])
2026-02-08 05:20:11.017027 | orchestrator | ok: [testbed-manager] => (item=[{'service_name': 'cron'}, 'cron'])
2026-02-08 05:20:11.017051 | orchestrator | ok: [testbed-node-1] => (item=[{'service_name': 'cron'}, 'cron'])
2026-02-08 05:20:11.017063 | orchestrator | ok: [testbed-node-0] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-02-08 05:20:11.017075 | orchestrator | ok:
[testbed-manager] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-02-08 05:20:11.017086 | orchestrator | ok: [testbed-node-2] => (item=[{'service_name': 'cron'}, 'cron']) 2026-02-08 05:20:11.017097 | orchestrator | ok: [testbed-node-3] => (item=[{'service_name': 'cron'}, 'cron']) 2026-02-08 05:20:11.017108 | orchestrator | ok: [testbed-node-5] => (item=[{'service_name': 'cron'}, 'cron']) 2026-02-08 05:20:11.017119 | orchestrator | ok: [testbed-node-1] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-02-08 05:20:11.017129 | orchestrator | ok: [testbed-node-4] => (item=[{'service_name': 'cron'}, 'cron']) 2026-02-08 05:20:11.017140 | orchestrator | ok: [testbed-node-0] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-02-08 05:20:11.017151 | orchestrator | ok: [testbed-manager] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-02-08 05:20:11.017162 | orchestrator | ok: [testbed-node-3] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-02-08 05:20:11.017217 | orchestrator | ok: [testbed-node-2] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-02-08 05:20:11.017238 | orchestrator | ok: [testbed-node-5] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-02-08 05:20:11.017257 | orchestrator | ok: [testbed-node-1] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-02-08 05:20:11.017275 | orchestrator | ok: [testbed-node-4] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-02-08 05:20:11.017292 | orchestrator | ok: [testbed-node-3] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-02-08 05:20:11.017313 | orchestrator | ok: [testbed-node-2] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-02-08 05:20:11.017331 | orchestrator | ok: [testbed-node-5] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-02-08 05:20:11.017343 | orchestrator | ok: [testbed-node-4] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-02-08 
05:20:11.017353 | orchestrator | 2026-02-08 05:20:11.017364 | orchestrator | TASK [common : include_tasks] ************************************************** 2026-02-08 05:20:11.017375 | orchestrator | Sunday 08 February 2026 05:20:05 +0000 (0:00:04.002) 0:00:10.761 ******* 2026-02-08 05:20:11.017386 | orchestrator | included: /ansible/roles/common/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-02-08 05:20:11.017399 | orchestrator | 2026-02-08 05:20:11.017410 | orchestrator | TASK [service-cert-copy : common | Copying over extra CA certificates] ********* 2026-02-08 05:20:11.017420 | orchestrator | Sunday 08 February 2026 05:20:08 +0000 (0:00:02.922) 0:00:13.684 ******* 2026-02-08 05:20:11.017437 | orchestrator | ok: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-08 05:20:11.017460 | orchestrator | ok: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-08 05:20:11.017504 | orchestrator | ok: [testbed-node-2] => (item={'key': 'fluentd', 
'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-08 05:20:11.017517 | orchestrator | ok: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-08 05:20:11.017559 | orchestrator | ok: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-08 05:20:11.017583 | orchestrator | ok: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-08 05:20:11.017595 | orchestrator | ok: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-08 05:20:11.017807 | orchestrator | ok: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-08 05:20:11.017825 | orchestrator | ok: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-08 05:20:11.017857 | orchestrator | ok: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-08 05:20:13.817658 | orchestrator | ok: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-08 05:20:13.817799 | orchestrator | ok: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}}) 2026-02-08 05:20:13.817816 | orchestrator | ok: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-08 05:20:13.817836 | orchestrator | ok: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-08 05:20:13.817849 | orchestrator | ok: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-08 05:20:13.817860 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-08 05:20:13.817871 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-08 05:20:13.817901 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-08 05:20:13.817912 | orchestrator | ok: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-08 05:20:13.817929 | orchestrator | ok: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': 
['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-08 05:20:13.817944 | orchestrator | ok: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-08 05:20:13.817954 | orchestrator | 2026-02-08 05:20:13.817966 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS certificate] *** 2026-02-08 05:20:13.817977 | orchestrator | Sunday 08 February 2026 05:20:12 +0000 (0:00:04.563) 0:00:18.248 ******* 2026-02-08 05:20:13.817989 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-08 05:20:13.818001 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-08 05:20:13.818011 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-08 05:20:13.818078 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-08 05:20:13.818098 | orchestrator | skipping: [testbed-node-0] 2026-02-08 05:20:13.818118 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-08 05:20:16.016254 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-08 05:20:16.016341 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-08 05:20:16.016357 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-08 05:20:16.016370 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-08 05:20:16.016427 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-08 05:20:16.016451 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-08 05:20:16.016474 | orchestrator | skipping: [testbed-manager] 2026-02-08 05:20:16.016498 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-08 05:20:16.016586 | orchestrator | skipping: [testbed-node-1] 2026-02-08 05:20:16.016627 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-08 05:20:16.016640 | orchestrator | skipping: [testbed-node-2] 2026-02-08 05:20:16.016651 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-08 05:20:16.016668 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-08 05:20:16.016680 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 
'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-08 05:20:16.016692 | orchestrator | skipping: [testbed-node-3] 2026-02-08 05:20:16.016703 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-08 05:20:16.016715 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-08 05:20:16.016727 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 
'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-08 05:20:16.016745 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-08 05:20:16.016763 | orchestrator | skipping: [testbed-node-4] 2026-02-08 05:20:17.270598 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-08 05:20:17.270693 | orchestrator | skipping: [testbed-node-5] 2026-02-08 05:20:17.270710 | orchestrator | 2026-02-08 05:20:17.270723 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS key] ****** 2026-02-08 05:20:17.270780 | orchestrator | Sunday 08 February 2026 05:20:15 +0000 (0:00:03.097) 0:00:21.345 ******* 2026-02-08 05:20:17.270819 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-08 05:20:17.270844 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-08 05:20:17.270864 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-08 05:20:17.270884 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}})  2026-02-08 05:20:17.270903 | orchestrator | skipping: [testbed-node-0] 2026-02-08 05:20:17.270924 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-08 05:20:17.270969 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-08 05:20:17.271060 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-08 05:20:17.271074 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': 
{'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-08 05:20:17.271086 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-08 05:20:17.271098 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-08 05:20:17.271110 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': 
'/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-08 05:20:17.271123 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-08 05:20:17.271145 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-08 05:20:17.271175 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  
2026-02-08 05:20:31.855663 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-08 05:20:31.855885 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-08 05:20:31.855910 | orchestrator | skipping: [testbed-manager] 2026-02-08 05:20:31.855923 | orchestrator | skipping: [testbed-node-3] 2026-02-08 05:20:31.855934 | orchestrator | skipping: [testbed-node-2] 2026-02-08 05:20:31.855945 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-08 05:20:31.855958 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-08 05:20:31.855969 | orchestrator | skipping: [testbed-node-1] 2026-02-08 05:20:31.855980 | orchestrator | skipping: [testbed-node-4] 2026-02-08 05:20:31.855993 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-08 05:20:31.856027 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-08 05:20:31.856040 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': 
['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-08 05:20:31.856051 | orchestrator | skipping: [testbed-node-5] 2026-02-08 05:20:31.856063 | orchestrator | 2026-02-08 05:20:31.856075 | orchestrator | TASK [common : Ensure /var/log/journal exists on EL10 systems] ***************** 2026-02-08 05:20:31.856087 | orchestrator | Sunday 08 February 2026 05:20:19 +0000 (0:00:03.096) 0:00:24.442 ******* 2026-02-08 05:20:31.856099 | orchestrator | skipping: [testbed-manager] 2026-02-08 05:20:31.856112 | orchestrator | skipping: [testbed-node-0] 2026-02-08 05:20:31.856125 | orchestrator | skipping: [testbed-node-1] 2026-02-08 05:20:31.856138 | orchestrator | skipping: [testbed-node-2] 2026-02-08 05:20:31.856169 | orchestrator | skipping: [testbed-node-3] 2026-02-08 05:20:31.856183 | orchestrator | skipping: [testbed-node-4] 2026-02-08 05:20:31.856196 | orchestrator | skipping: [testbed-node-5] 2026-02-08 05:20:31.856209 | orchestrator | 2026-02-08 05:20:31.856222 | orchestrator | TASK [common : Copying over /run subdirectories conf] ************************** 2026-02-08 05:20:31.856235 | orchestrator | Sunday 08 February 2026 05:20:21 +0000 (0:00:02.183) 0:00:26.625 ******* 2026-02-08 05:20:31.856248 | orchestrator | skipping: [testbed-manager] 2026-02-08 05:20:31.856261 | orchestrator | skipping: [testbed-node-0] 2026-02-08 05:20:31.856273 | orchestrator | skipping: [testbed-node-1] 2026-02-08 05:20:31.856285 | orchestrator | skipping: [testbed-node-2] 2026-02-08 05:20:31.856297 | orchestrator | skipping: [testbed-node-3] 2026-02-08 05:20:31.856310 | orchestrator | skipping: [testbed-node-4] 2026-02-08 05:20:31.856322 | orchestrator | skipping: [testbed-node-5] 2026-02-08 05:20:31.856334 | orchestrator | 2026-02-08 05:20:31.856347 | orchestrator | TASK [common : Restart systemd-tmpfiles] *************************************** 2026-02-08 
05:20:31.856365 | orchestrator | Sunday 08 February 2026 05:20:23 +0000 (0:00:02.224) 0:00:28.850 ******* 2026-02-08 05:20:31.856378 | orchestrator | skipping: [testbed-manager] 2026-02-08 05:20:31.856391 | orchestrator | skipping: [testbed-node-0] 2026-02-08 05:20:31.856404 | orchestrator | skipping: [testbed-node-1] 2026-02-08 05:20:31.856417 | orchestrator | skipping: [testbed-node-2] 2026-02-08 05:20:31.856429 | orchestrator | skipping: [testbed-node-3] 2026-02-08 05:20:31.856442 | orchestrator | skipping: [testbed-node-4] 2026-02-08 05:20:31.856454 | orchestrator | skipping: [testbed-node-5] 2026-02-08 05:20:31.856465 | orchestrator | 2026-02-08 05:20:31.856476 | orchestrator | TASK [common : Copying over kolla.target] ************************************** 2026-02-08 05:20:31.856487 | orchestrator | Sunday 08 February 2026 05:20:25 +0000 (0:00:02.280) 0:00:31.131 ******* 2026-02-08 05:20:31.856498 | orchestrator | changed: [testbed-manager] 2026-02-08 05:20:31.856516 | orchestrator | changed: [testbed-node-0] 2026-02-08 05:20:31.856556 | orchestrator | changed: [testbed-node-1] 2026-02-08 05:20:31.856569 | orchestrator | changed: [testbed-node-2] 2026-02-08 05:20:31.856579 | orchestrator | changed: [testbed-node-3] 2026-02-08 05:20:31.856590 | orchestrator | changed: [testbed-node-4] 2026-02-08 05:20:31.856600 | orchestrator | changed: [testbed-node-5] 2026-02-08 05:20:31.856611 | orchestrator | 2026-02-08 05:20:31.856622 | orchestrator | TASK [common : Copying over config.json files for services] ******************** 2026-02-08 05:20:31.856632 | orchestrator | Sunday 08 February 2026 05:20:28 +0000 (0:00:02.788) 0:00:33.919 ******* 2026-02-08 05:20:31.856644 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': 
['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-08 05:20:31.856656 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-08 05:20:31.856668 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-08 05:20:31.856679 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-08 05:20:31.856699 | orchestrator 
| changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-08 05:20:33.805738 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-08 05:20:33.805896 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-08 05:20:33.805924 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 
'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-08 05:20:33.805942 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-08 05:20:33.805959 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-08 05:20:33.805976 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 
'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-08 05:20:33.806082 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-08 05:20:33.806109 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-08 05:20:33.806144 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-08 05:20:33.806164 | orchestrator | changed: [testbed-node-4] => 
(item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-08 05:20:33.806185 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-08 05:20:33.806205 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-08 05:20:33.806224 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 
'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-08 05:20:33.806243 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-08 05:20:33.806277 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-08 05:20:33.806311 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-08 05:20:53.519009 | orchestrator | 2026-02-08 05:20:53.519096 | orchestrator | TASK [common : Find custom fluentd input config files] ************************* 2026-02-08 05:20:53.519107 | orchestrator | Sunday 08 February 2026 05:20:33 +0000 (0:00:05.230) 0:00:39.150 ******* 2026-02-08 05:20:53.519114 | orchestrator | [WARNING]: Skipped 2026-02-08 05:20:53.519134 | orchestrator | 
'/opt/configuration/environments/kolla/files/overlays/fluentd/input' path due 2026-02-08 05:20:53.519142 | orchestrator | to this access issue: 2026-02-08 05:20:53.519149 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' is not a 2026-02-08 05:20:53.519156 | orchestrator | directory 2026-02-08 05:20:53.519163 | orchestrator | ok: [testbed-manager -> localhost] 2026-02-08 05:20:53.519170 | orchestrator | 2026-02-08 05:20:53.519177 | orchestrator | TASK [common : Find custom fluentd filter config files] ************************ 2026-02-08 05:20:53.519183 | orchestrator | Sunday 08 February 2026 05:20:36 +0000 (0:00:02.368) 0:00:41.519 ******* 2026-02-08 05:20:53.519190 | orchestrator | [WARNING]: Skipped 2026-02-08 05:20:53.519196 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' path due 2026-02-08 05:20:53.519202 | orchestrator | to this access issue: 2026-02-08 05:20:53.519209 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' is not a 2026-02-08 05:20:53.519215 | orchestrator | directory 2026-02-08 05:20:53.519222 | orchestrator | ok: [testbed-manager -> localhost] 2026-02-08 05:20:53.519228 | orchestrator | 2026-02-08 05:20:53.519234 | orchestrator | TASK [common : Find custom fluentd format config files] ************************ 2026-02-08 05:20:53.519240 | orchestrator | Sunday 08 February 2026 05:20:37 +0000 (0:00:01.825) 0:00:43.345 ******* 2026-02-08 05:20:53.519247 | orchestrator | [WARNING]: Skipped 2026-02-08 05:20:53.519253 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' path due 2026-02-08 05:20:53.519259 | orchestrator | to this access issue: 2026-02-08 05:20:53.519266 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' is not a 2026-02-08 05:20:53.519272 | orchestrator | directory 2026-02-08 05:20:53.519278 | orchestrator | ok: [testbed-manager -> localhost] 2026-02-08 
05:20:53.519285 | orchestrator | 2026-02-08 05:20:53.519292 | orchestrator | TASK [common : Find custom fluentd output config files] ************************ 2026-02-08 05:20:53.519298 | orchestrator | Sunday 08 February 2026 05:20:39 +0000 (0:00:01.850) 0:00:45.195 ******* 2026-02-08 05:20:53.519305 | orchestrator | [WARNING]: Skipped 2026-02-08 05:20:53.519311 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' path due 2026-02-08 05:20:53.519317 | orchestrator | to this access issue: 2026-02-08 05:20:53.519324 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' is not a 2026-02-08 05:20:53.519330 | orchestrator | directory 2026-02-08 05:20:53.519336 | orchestrator | ok: [testbed-manager -> localhost] 2026-02-08 05:20:53.519342 | orchestrator | 2026-02-08 05:20:53.519349 | orchestrator | TASK [common : Copying over fluentd.conf] ************************************** 2026-02-08 05:20:53.519355 | orchestrator | Sunday 08 February 2026 05:20:41 +0000 (0:00:01.873) 0:00:47.069 ******* 2026-02-08 05:20:53.519361 | orchestrator | changed: [testbed-manager] 2026-02-08 05:20:53.519368 | orchestrator | changed: [testbed-node-0] 2026-02-08 05:20:53.519374 | orchestrator | changed: [testbed-node-1] 2026-02-08 05:20:53.519380 | orchestrator | changed: [testbed-node-3] 2026-02-08 05:20:53.519386 | orchestrator | changed: [testbed-node-2] 2026-02-08 05:20:53.519393 | orchestrator | changed: [testbed-node-4] 2026-02-08 05:20:53.519399 | orchestrator | changed: [testbed-node-5] 2026-02-08 05:20:53.519405 | orchestrator | 2026-02-08 05:20:53.519411 | orchestrator | TASK [common : Copying over cron logrotate config file] ************************ 2026-02-08 05:20:53.519418 | orchestrator | Sunday 08 February 2026 05:20:45 +0000 (0:00:04.127) 0:00:51.197 ******* 2026-02-08 05:20:53.519442 | orchestrator | ok: [testbed-manager] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-02-08 
05:20:53.519449 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-02-08 05:20:53.519456 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-02-08 05:20:53.519462 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-02-08 05:20:53.519468 | orchestrator | ok: [testbed-node-3] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-02-08 05:20:53.519474 | orchestrator | ok: [testbed-node-4] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-02-08 05:20:53.519481 | orchestrator | ok: [testbed-node-5] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-02-08 05:20:53.519487 | orchestrator | 2026-02-08 05:20:53.519493 | orchestrator | TASK [common : Ensure RabbitMQ Erlang cookie exists] *************************** 2026-02-08 05:20:53.519499 | orchestrator | Sunday 08 February 2026 05:20:49 +0000 (0:00:03.236) 0:00:54.433 ******* 2026-02-08 05:20:53.519506 | orchestrator | ok: [testbed-manager] 2026-02-08 05:20:53.519512 | orchestrator | ok: [testbed-node-0] 2026-02-08 05:20:53.519518 | orchestrator | ok: [testbed-node-1] 2026-02-08 05:20:53.519524 | orchestrator | ok: [testbed-node-2] 2026-02-08 05:20:53.519582 | orchestrator | ok: [testbed-node-3] 2026-02-08 05:20:53.519591 | orchestrator | ok: [testbed-node-4] 2026-02-08 05:20:53.519598 | orchestrator | ok: [testbed-node-5] 2026-02-08 05:20:53.519606 | orchestrator | 2026-02-08 05:20:53.519613 | orchestrator | TASK [common : Ensuring config directories have correct owner and permission] *** 2026-02-08 05:20:53.519620 | orchestrator | Sunday 08 February 2026 05:20:51 +0000 (0:00:02.725) 0:00:57.159 ******* 2026-02-08 05:20:53.519642 | orchestrator | ok: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-08 05:20:53.519659 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-08 05:20:53.519668 | orchestrator | ok: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-08 05:20:53.519678 | orchestrator | ok: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-08 05:20:53.519691 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-08 05:20:53.519699 | orchestrator | ok: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-08 05:20:53.519706 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-08 
05:20:53.519717 | orchestrator | ok: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-08 05:21:01.737802 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-08 05:21:01.737910 | orchestrator | ok: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-08 05:21:01.737926 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-08 05:21:01.737963 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-08 05:21:01.737987 | orchestrator | ok: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-08 05:21:01.738007 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-08 05:21:01.738099 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-08 05:21:01.738156 | orchestrator | ok: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-08 05:21:01.738179 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-08 05:21:01.738201 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-08 05:21:01.738223 | orchestrator | ok: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-08 05:21:01.738260 | orchestrator | ok: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-08 05:21:01.738282 | orchestrator | ok: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-08 05:21:01.738304 | orchestrator | 2026-02-08 05:21:01.738327 | orchestrator | TASK [common : Copy rabbitmq-env.conf to kolla toolbox] ************************ 2026-02-08 05:21:01.738350 | orchestrator | Sunday 08 February 2026 05:20:54 +0000 (0:00:02.948) 0:01:00.108 ******* 
2026-02-08 05:21:01.738371 | orchestrator | ok: [testbed-manager] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-02-08 05:21:01.738392 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-02-08 05:21:01.738411 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-02-08 05:21:01.738432 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-02-08 05:21:01.738452 | orchestrator | ok: [testbed-node-3] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-02-08 05:21:01.738472 | orchestrator | ok: [testbed-node-4] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-02-08 05:21:01.738493 | orchestrator | ok: [testbed-node-5] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-02-08 05:21:01.738513 | orchestrator | 2026-02-08 05:21:01.738564 | orchestrator | TASK [common : Copy rabbitmq erl_inetrc to kolla toolbox] ********************** 2026-02-08 05:21:01.738586 | orchestrator | Sunday 08 February 2026 05:20:58 +0000 (0:00:03.357) 0:01:03.466 ******* 2026-02-08 05:21:01.738608 | orchestrator | ok: [testbed-manager] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-02-08 05:21:01.738629 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-02-08 05:21:01.738648 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-02-08 05:21:01.738669 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-02-08 05:21:01.738689 | orchestrator | ok: [testbed-node-3] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-02-08 05:21:01.738718 | orchestrator | ok: [testbed-node-4] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-02-08 05:21:04.222519 | orchestrator | ok: [testbed-node-5] => 
(item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-02-08 05:21:04.222762 | orchestrator | 2026-02-08 05:21:04.222793 | orchestrator | TASK [service-check-containers : common | Check containers] ******************** 2026-02-08 05:21:04.222815 | orchestrator | Sunday 08 February 2026 05:21:01 +0000 (0:00:03.610) 0:01:07.076 ******* 2026-02-08 05:21:04.222838 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-08 05:21:04.222931 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-08 05:21:04.222957 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', 
'/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-08 05:21:04.222978 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-08 05:21:04.222998 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-08 05:21:04.223019 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-08 05:21:04.223038 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-08 05:21:04.223093 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-08 05:21:04.223130 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-08 05:21:04.223150 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': 
{'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-08 05:21:04.223169 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-08 05:21:04.223189 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-08 05:21:04.223209 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': 
{'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-08 05:21:04.223242 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-08 05:21:08.696641 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-08 05:21:08.696738 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-08 
05:21:08.696750 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-08 05:21:08.696760 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-08 05:21:08.696768 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-08 05:21:08.696780 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-08 05:21:08.696789 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 
'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-08 05:21:08.696797 | orchestrator | 2026-02-08 05:21:08.696807 | orchestrator | TASK [service-check-containers : common | Notify handlers to restart containers] *** 2026-02-08 05:21:08.696816 | orchestrator | Sunday 08 February 2026 05:21:06 +0000 (0:00:04.405) 0:01:11.483 ******* 2026-02-08 05:21:08.696825 | orchestrator | changed: [testbed-manager] => { 2026-02-08 05:21:08.696834 | orchestrator |  "msg": "Notifying handlers" 2026-02-08 05:21:08.696843 | orchestrator | } 2026-02-08 05:21:08.696851 | orchestrator | changed: [testbed-node-0] => { 2026-02-08 05:21:08.696859 | orchestrator |  "msg": "Notifying handlers" 2026-02-08 05:21:08.696867 | orchestrator | } 2026-02-08 05:21:08.696874 | orchestrator | changed: [testbed-node-1] => { 2026-02-08 05:21:08.696882 | orchestrator |  "msg": "Notifying handlers" 2026-02-08 05:21:08.696890 | orchestrator | } 2026-02-08 05:21:08.696898 | orchestrator | changed: [testbed-node-2] => { 2026-02-08 05:21:08.696923 | orchestrator |  "msg": "Notifying handlers" 2026-02-08 05:21:08.696932 | orchestrator | } 2026-02-08 05:21:08.696939 | orchestrator | changed: [testbed-node-3] => { 2026-02-08 05:21:08.696947 | orchestrator |  "msg": "Notifying handlers" 2026-02-08 05:21:08.696955 | orchestrator | } 2026-02-08 05:21:08.696963 | orchestrator | changed: [testbed-node-4] => { 2026-02-08 05:21:08.696971 | orchestrator |  "msg": "Notifying handlers" 2026-02-08 05:21:08.696978 | orchestrator | } 2026-02-08 05:21:08.696986 | orchestrator | changed: [testbed-node-5] => { 2026-02-08 05:21:08.696994 | orchestrator |  "msg": "Notifying handlers" 2026-02-08 05:21:08.697002 | orchestrator | } 2026-02-08 
05:21:08.697009 | orchestrator | 2026-02-08 05:21:08.697017 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-02-08 05:21:08.697026 | orchestrator | Sunday 08 February 2026 05:21:08 +0000 (0:00:02.078) 0:01:13.561 ******* 2026-02-08 05:21:08.697056 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-08 05:21:08.697067 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-08 05:21:08.697077 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-08 
05:21:08.697085 | orchestrator | skipping: [testbed-manager] 2026-02-08 05:21:08.697094 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-08 05:21:08.697103 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-08 05:21:08.697111 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-08 05:21:08.697127 | orchestrator | skipping: [testbed-node-0] 2026-02-08 05:21:08.697137 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-08 05:21:08.697164 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-08 05:21:15.156759 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-08 05:21:15.156850 | orchestrator | skipping: [testbed-node-1] 2026-02-08 05:21:15.156863 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-08 05:21:15.156874 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-08 05:21:15.156882 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-08 05:21:15.156890 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-08 05:21:15.156918 | orchestrator | skipping: [testbed-node-2] 2026-02-08 
05:21:15.156926 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-08 05:21:15.156934 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-08 05:21:15.156942 | orchestrator | skipping: [testbed-node-3] 2026-02-08 05:21:15.156963 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-08 05:21:15.156986 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-08 05:21:15.156994 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-08 05:21:15.157002 | orchestrator | skipping: [testbed-node-4] 2026-02-08 05:21:15.157010 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-08 05:21:15.157018 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': 
True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-08 05:21:15.157031 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-08 05:21:15.157039 | orchestrator | skipping: [testbed-node-5] 2026-02-08 05:21:15.157047 | orchestrator | 2026-02-08 05:21:15.157056 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-02-08 05:21:15.157064 | orchestrator | Sunday 08 February 2026 05:21:11 +0000 (0:00:03.024) 0:01:16.586 ******* 2026-02-08 05:21:15.157072 | orchestrator | 2026-02-08 05:21:15.157079 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-02-08 05:21:15.157087 | orchestrator | Sunday 08 February 2026 05:21:11 +0000 (0:00:00.451) 0:01:17.038 ******* 2026-02-08 05:21:15.157094 | orchestrator | 2026-02-08 05:21:15.157102 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-02-08 05:21:15.157109 | orchestrator | Sunday 08 February 2026 05:21:12 +0000 (0:00:00.468) 0:01:17.507 ******* 2026-02-08 05:21:15.157117 | orchestrator | 2026-02-08 05:21:15.157124 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-02-08 05:21:15.157131 | orchestrator | Sunday 08 February 2026 05:21:12 +0000 (0:00:00.433) 0:01:17.940 ******* 2026-02-08 05:21:15.157139 | orchestrator | 2026-02-08 05:21:15.157146 | 
orchestrator | TASK [common : Flush handlers] ************************************************* 2026-02-08 05:21:15.157153 | orchestrator | Sunday 08 February 2026 05:21:13 +0000 (0:00:00.433) 0:01:18.373 ******* 2026-02-08 05:21:15.157161 | orchestrator | 2026-02-08 05:21:15.157171 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-02-08 05:21:15.157179 | orchestrator | Sunday 08 February 2026 05:21:13 +0000 (0:00:00.752) 0:01:19.126 ******* 2026-02-08 05:21:15.157186 | orchestrator | 2026-02-08 05:21:15.157193 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-02-08 05:21:15.157201 | orchestrator | Sunday 08 February 2026 05:21:14 +0000 (0:00:00.483) 0:01:19.609 ******* 2026-02-08 05:21:15.157208 | orchestrator | 2026-02-08 05:21:15.157220 | orchestrator | RUNNING HANDLER [common : Restart fluentd container] *************************** 2026-02-08 05:21:17.622071 | orchestrator | Sunday 08 February 2026 05:21:15 +0000 (0:00:00.876) 0:01:20.486 ******* 2026-02-08 05:21:17.622205 | orchestrator | fatal: [testbed-manager]: FAILED! 
=> {"changed": true, "msg": "'Traceback (most recent call last):\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 275, in _raise_for_status\\n response.raise_for_status()\\n File \"/usr/lib/python3/dist-packages/requests/models.py\", line 1021, in raise_for_status\\n raise HTTPError(http_error_msg, response=self)\\nrequests.exceptions.HTTPError: 500 Server Error: Internal Server Error for url: http+docker://localhost/v1.47/images/create?tag=5.0.8.20251208&fromImage=registry.osism.tech%2Fkolla%2Frelease%2Ffluentd\\n\\nThe above exception was the direct cause of the following exception:\\n\\nTraceback (most recent call last):\\n File \"/tmp/ansible_kolla_container_payload_f3q28mm9/ansible_kolla_container_payload.zip/ansible/modules/kolla_container.py\", line 421, in main\\n result = bool(getattr(cw, module.params.get(\\'action\\'))())\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/tmp/ansible_kolla_container_payload_f3q28mm9/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 361, in recreate_or_restart_container\\n self.pull_image()\\n File \"/tmp/ansible_kolla_container_payload_f3q28mm9/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 202, in pull_image\\n json.loads(line.strip().decode(\\'utf-8\\')) for line in self.dc.pull(\\n ^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/api/image.py\", line 429, in pull\\n self._raise_for_status(response)\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 277, in _raise_for_status\\n raise create_api_error_from_http_exception(e) from e\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/errors.py\", line 39, in create_api_error_from_http_exception\\n raise cls(e, response=response, explanation=explanation) from e\\ndocker.errors.APIError: 500 Server Error for 
http+docker://localhost/v1.47/images/create?tag=5.0.8.20251208&fromImage=registry.osism.tech%2Fkolla%2Frelease%2Ffluentd: Internal Server Error (\"unknown: artifact kolla/release/fluentd:5.0.8.20251208 not found\")\\n'"} 2026-02-08 05:21:17.622302 | orchestrator | fatal: [testbed-node-0]: FAILED! => {"changed": true, "msg": "'Traceback (most recent call last):\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 275, in _raise_for_status\\n response.raise_for_status()\\n File \"/usr/lib/python3/dist-packages/requests/models.py\", line 1021, in raise_for_status\\n raise HTTPError(http_error_msg, response=self)\\nrequests.exceptions.HTTPError: 500 Server Error: Internal Server Error for url: http+docker://localhost/v1.47/images/create?tag=5.0.8.20251208&fromImage=registry.osism.tech%2Fkolla%2Frelease%2Ffluentd\\n\\nThe above exception was the direct cause of the following exception:\\n\\nTraceback (most recent call last):\\n File \"/tmp/ansible_kolla_container_payload_y7l8pqh3/ansible_kolla_container_payload.zip/ansible/modules/kolla_container.py\", line 421, in main\\n result = bool(getattr(cw, module.params.get(\\'action\\'))())\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/tmp/ansible_kolla_container_payload_y7l8pqh3/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 361, in recreate_or_restart_container\\n self.pull_image()\\n File \"/tmp/ansible_kolla_container_payload_y7l8pqh3/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 202, in pull_image\\n json.loads(line.strip().decode(\\'utf-8\\')) for line in self.dc.pull(\\n ^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/api/image.py\", line 429, in pull\\n self._raise_for_status(response)\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 277, in _raise_for_status\\n raise create_api_error_from_http_exception(e) from e\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File 
\"/usr/lib/python3/dist-packages/docker/errors.py\", line 39, in create_api_error_from_http_exception\\n raise cls(e, response=response, explanation=explanation) from e\\ndocker.errors.APIError: 500 Server Error for http+docker://localhost/v1.47/images/create?tag=5.0.8.20251208&fromImage=registry.osism.tech%2Fkolla%2Frelease%2Ffluentd: Internal Server Error (\"unknown: artifact kolla/release/fluentd:5.0.8.20251208 not found\")\\n'"} 2026-02-08 05:21:17.622330 | orchestrator | fatal: [testbed-node-3]: FAILED! => {"changed": true, "msg": "'Traceback (most recent call last):\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 275, in _raise_for_status\\n response.raise_for_status()\\n File \"/usr/lib/python3/dist-packages/requests/models.py\", line 1021, in raise_for_status\\n raise HTTPError(http_error_msg, response=self)\\nrequests.exceptions.HTTPError: 500 Server Error: Internal Server Error for url: http+docker://localhost/v1.47/images/create?tag=5.0.8.20251208&fromImage=registry.osism.tech%2Fkolla%2Frelease%2Ffluentd\\n\\nThe above exception was the direct cause of the following exception:\\n\\nTraceback (most recent call last):\\n File \"/tmp/ansible_kolla_container_payload_t8zu79f_/ansible_kolla_container_payload.zip/ansible/modules/kolla_container.py\", line 421, in main\\n result = bool(getattr(cw, module.params.get(\\'action\\'))())\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/tmp/ansible_kolla_container_payload_t8zu79f_/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 361, in recreate_or_restart_container\\n self.pull_image()\\n File \"/tmp/ansible_kolla_container_payload_t8zu79f_/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 202, in pull_image\\n json.loads(line.strip().decode(\\'utf-8\\')) for line in self.dc.pull(\\n ^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/api/image.py\", line 429, in pull\\n 
self._raise_for_status(response)\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 277, in _raise_for_status\\n raise create_api_error_from_http_exception(e) from e\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/errors.py\", line 39, in create_api_error_from_http_exception\\n raise cls(e, response=response, explanation=explanation) from e\\ndocker.errors.APIError: 500 Server Error for http+docker://localhost/v1.47/images/create?tag=5.0.8.20251208&fromImage=registry.osism.tech%2Fkolla%2Frelease%2Ffluentd: Internal Server Error (\"unknown: artifact kolla/release/fluentd:5.0.8.20251208 not found\")\\n'"} 2026-02-08 05:21:17.622383 | orchestrator | fatal: [testbed-node-2]: FAILED! => {"changed": true, "msg": "'Traceback (most recent call last):\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 275, in _raise_for_status\\n response.raise_for_status()\\n File \"/usr/lib/python3/dist-packages/requests/models.py\", line 1021, in raise_for_status\\n raise HTTPError(http_error_msg, response=self)\\nrequests.exceptions.HTTPError: 500 Server Error: Internal Server Error for url: http+docker://localhost/v1.47/images/create?tag=5.0.8.20251208&fromImage=registry.osism.tech%2Fkolla%2Frelease%2Ffluentd\\n\\nThe above exception was the direct cause of the following exception:\\n\\nTraceback (most recent call last):\\n File \"/tmp/ansible_kolla_container_payload_d656mzf6/ansible_kolla_container_payload.zip/ansible/modules/kolla_container.py\", line 421, in main\\n result = bool(getattr(cw, module.params.get(\\'action\\'))())\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/tmp/ansible_kolla_container_payload_d656mzf6/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 361, in recreate_or_restart_container\\n self.pull_image()\\n File 
\"/tmp/ansible_kolla_container_payload_d656mzf6/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 202, in pull_image\\n json.loads(line.strip().decode(\\'utf-8\\')) for line in self.dc.pull(\\n ^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/api/image.py\", line 429, in pull\\n self._raise_for_status(response)\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 277, in _raise_for_status\\n raise create_api_error_from_http_exception(e) from e\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/errors.py\", line 39, in create_api_error_from_http_exception\\n raise cls(e, response=response, explanation=explanation) from e\\ndocker.errors.APIError: 500 Server Error for http+docker://localhost/v1.47/images/create?tag=5.0.8.20251208&fromImage=registry.osism.tech%2Fkolla%2Frelease%2Ffluentd: Internal Server Error (\"unknown: artifact kolla/release/fluentd:5.0.8.20251208 not found\")\\n'"} 2026-02-08 05:21:21.103371 | orchestrator | fatal: [testbed-node-1]: FAILED! 
=> {"changed": true, "msg": "'Traceback (most recent call last):\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 275, in _raise_for_status\\n response.raise_for_status()\\n File \"/usr/lib/python3/dist-packages/requests/models.py\", line 1021, in raise_for_status\\n raise HTTPError(http_error_msg, response=self)\\nrequests.exceptions.HTTPError: 500 Server Error: Internal Server Error for url: http+docker://localhost/v1.47/images/create?tag=5.0.8.20251208&fromImage=registry.osism.tech%2Fkolla%2Frelease%2Ffluentd\\n\\nThe above exception was the direct cause of the following exception:\\n\\nTraceback (most recent call last):\\n File \"/tmp/ansible_kolla_container_payload_ixz0952z/ansible_kolla_container_payload.zip/ansible/modules/kolla_container.py\", line 421, in main\\n result = bool(getattr(cw, module.params.get(\\'action\\'))())\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/tmp/ansible_kolla_container_payload_ixz0952z/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 361, in recreate_or_restart_container\\n self.pull_image()\\n File \"/tmp/ansible_kolla_container_payload_ixz0952z/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 202, in pull_image\\n json.loads(line.strip().decode(\\'utf-8\\')) for line in self.dc.pull(\\n ^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/api/image.py\", line 429, in pull\\n self._raise_for_status(response)\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 277, in _raise_for_status\\n raise create_api_error_from_http_exception(e) from e\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/errors.py\", line 39, in create_api_error_from_http_exception\\n raise cls(e, response=response, explanation=explanation) from e\\ndocker.errors.APIError: 500 Server Error for 
http+docker://localhost/v1.47/images/create?tag=5.0.8.20251208&fromImage=registry.osism.tech%2Fkolla%2Frelease%2Ffluentd: Internal Server Error (\"unknown: artifact kolla/release/fluentd:5.0.8.20251208 not found\")\\n'"} 2026-02-08 05:21:21.103596 | orchestrator | fatal: [testbed-node-4]: FAILED! => {"changed": true, "msg": "'Traceback (most recent call last):\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 275, in _raise_for_status\\n response.raise_for_status()\\n File \"/usr/lib/python3/dist-packages/requests/models.py\", line 1021, in raise_for_status\\n raise HTTPError(http_error_msg, response=self)\\nrequests.exceptions.HTTPError: 500 Server Error: Internal Server Error for url: http+docker://localhost/v1.47/images/create?tag=5.0.8.20251208&fromImage=registry.osism.tech%2Fkolla%2Frelease%2Ffluentd\\n\\nThe above exception was the direct cause of the following exception:\\n\\nTraceback (most recent call last):\\n File \"/tmp/ansible_kolla_container_payload_x7n4bypm/ansible_kolla_container_payload.zip/ansible/modules/kolla_container.py\", line 421, in main\\n result = bool(getattr(cw, module.params.get(\\'action\\'))())\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/tmp/ansible_kolla_container_payload_x7n4bypm/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 361, in recreate_or_restart_container\\n self.pull_image()\\n File \"/tmp/ansible_kolla_container_payload_x7n4bypm/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 202, in pull_image\\n json.loads(line.strip().decode(\\'utf-8\\')) for line in self.dc.pull(\\n ^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/api/image.py\", line 429, in pull\\n self._raise_for_status(response)\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 277, in _raise_for_status\\n raise create_api_error_from_http_exception(e) from e\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File 
\"/usr/lib/python3/dist-packages/docker/errors.py\", line 39, in create_api_error_from_http_exception\\n raise cls(e, response=response, explanation=explanation) from e\\ndocker.errors.APIError: 500 Server Error for http+docker://localhost/v1.47/images/create?tag=5.0.8.20251208&fromImage=registry.osism.tech%2Fkolla%2Frelease%2Ffluentd: Internal Server Error (\"unknown: artifact kolla/release/fluentd:5.0.8.20251208 not found\")\\n'"} 2026-02-08 05:21:21.103633 | orchestrator | fatal: [testbed-node-5]: FAILED! => {"changed": true, "msg": "'Traceback (most recent call last):\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 275, in _raise_for_status\\n response.raise_for_status()\\n File \"/usr/lib/python3/dist-packages/requests/models.py\", line 1021, in raise_for_status\\n raise HTTPError(http_error_msg, response=self)\\nrequests.exceptions.HTTPError: 500 Server Error: Internal Server Error for url: http+docker://localhost/v1.47/images/create?tag=5.0.8.20251208&fromImage=registry.osism.tech%2Fkolla%2Frelease%2Ffluentd\\n\\nThe above exception was the direct cause of the following exception:\\n\\nTraceback (most recent call last):\\n File \"/tmp/ansible_kolla_container_payload_970azsbs/ansible_kolla_container_payload.zip/ansible/modules/kolla_container.py\", line 421, in main\\n result = bool(getattr(cw, module.params.get(\\'action\\'))())\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/tmp/ansible_kolla_container_payload_970azsbs/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 361, in recreate_or_restart_container\\n self.pull_image()\\n File \"/tmp/ansible_kolla_container_payload_970azsbs/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 202, in pull_image\\n json.loads(line.strip().decode(\\'utf-8\\')) for line in self.dc.pull(\\n ^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/api/image.py\", line 429, in pull\\n 
self._raise_for_status(response)\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 277, in _raise_for_status\\n raise create_api_error_from_http_exception(e) from e\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/errors.py\", line 39, in create_api_error_from_http_exception\\n raise cls(e, response=response, explanation=explanation) from e\\ndocker.errors.APIError: 500 Server Error for http+docker://localhost/v1.47/images/create?tag=5.0.8.20251208&fromImage=registry.osism.tech%2Fkolla%2Frelease%2Ffluentd: Internal Server Error (\"unknown: artifact kolla/release/fluentd:5.0.8.20251208 not found\")\\n'"} 2026-02-08 05:21:21.103647 | orchestrator | 2026-02-08 05:21:21.103661 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-08 05:21:21.103675 | orchestrator | testbed-manager : ok=18  changed=5  unreachable=0 failed=1  skipped=6  rescued=0 ignored=0 2026-02-08 05:21:21.103688 | orchestrator | testbed-node-0 : ok=14  changed=5  unreachable=0 failed=1  skipped=6  rescued=0 ignored=0 2026-02-08 05:21:21.103699 | orchestrator | testbed-node-1 : ok=14  changed=5  unreachable=0 failed=1  skipped=6  rescued=0 ignored=0 2026-02-08 05:21:21.103710 | orchestrator | testbed-node-2 : ok=14  changed=5  unreachable=0 failed=1  skipped=6  rescued=0 ignored=0 2026-02-08 05:21:21.103721 | orchestrator | testbed-node-3 : ok=14  changed=5  unreachable=0 failed=1  skipped=6  rescued=0 ignored=0 2026-02-08 05:21:21.103761 | orchestrator | testbed-node-4 : ok=14  changed=5  unreachable=0 failed=1  skipped=6  rescued=0 ignored=0 2026-02-08 05:21:21.103788 | orchestrator | testbed-node-5 : ok=14  changed=5  unreachable=0 failed=1  skipped=6  rescued=0 ignored=0 2026-02-08 05:21:21.103799 | orchestrator | 2026-02-08 05:21:21.103811 | orchestrator | 2026-02-08 05:21:21.103831 | orchestrator | TASKS RECAP ******************************************************************** 
2026-02-08 05:21:21.632908 | orchestrator | 2026-02-08 05:21:21 | INFO  | Task 5097d497-0132-429d-b893-549d5f5f3d20 (common) was prepared for execution. 2026-02-08 05:21:21.633007 | orchestrator | 2026-02-08 05:21:21 | INFO  | It takes a moment until task 5097d497-0132-429d-b893-549d5f5f3d20 (common) has been started and output is visible here. 2026-02-08 05:21:36.878200 | orchestrator | Sunday 08 February 2026 05:21:21 +0000 (0:00:05.958) 0:01:26.444 ******* 2026-02-08 05:21:36.878312 | orchestrator | =============================================================================== 2026-02-08 05:21:36.878328 | orchestrator | common : Restart fluentd container -------------------------------------- 5.96s 2026-02-08 05:21:36.878341 | orchestrator | common : Copying over config.json files for services -------------------- 5.23s 2026-02-08 05:21:36.878352 | orchestrator | service-cert-copy : common | Copying over extra CA certificates --------- 4.56s 2026-02-08 05:21:36.878363 | orchestrator | service-check-containers : common | Check containers -------------------- 4.41s 2026-02-08 05:21:36.878373 | orchestrator | common : include_tasks -------------------------------------------------- 4.13s 2026-02-08 05:21:36.878385 | orchestrator | common : Copying over fluentd.conf -------------------------------------- 4.13s 2026-02-08 05:21:36.878395 | orchestrator | common : Ensuring config directories exist ------------------------------ 4.00s 2026-02-08 05:21:36.878407 | orchestrator | common : Flush handlers ------------------------------------------------- 3.90s 2026-02-08 05:21:36.878418 | orchestrator | common : Copy rabbitmq erl_inetrc to kolla toolbox ---------------------- 3.61s 2026-02-08 05:21:36.878428 | orchestrator | common : Copy rabbitmq-env.conf to kolla toolbox ------------------------ 3.36s 2026-02-08 05:21:36.878439 | orchestrator | common : Copying over cron logrotate config file ------------------------ 3.24s 2026-02-08 05:21:36.878450 | orchestrator | 
service-cert-copy : common | Copying over backend internal TLS certificate --- 3.10s 2026-02-08 05:21:36.878462 | orchestrator | service-cert-copy : common | Copying over backend internal TLS key ------ 3.10s 2026-02-08 05:21:36.878472 | orchestrator | service-check-containers : Include tasks -------------------------------- 3.02s 2026-02-08 05:21:36.878501 | orchestrator | common : Ensuring config directories have correct owner and permission --- 2.95s 2026-02-08 05:21:36.878513 | orchestrator | common : include_tasks -------------------------------------------------- 2.92s 2026-02-08 05:21:36.878524 | orchestrator | common : Copying over kolla.target -------------------------------------- 2.79s 2026-02-08 05:21:36.878588 | orchestrator | common : Ensure RabbitMQ Erlang cookie exists --------------------------- 2.73s 2026-02-08 05:21:36.878602 | orchestrator | common : Find custom fluentd input config files ------------------------- 2.37s 2026-02-08 05:21:36.878613 | orchestrator | common : Restart systemd-tmpfiles --------------------------------------- 2.28s 2026-02-08 05:21:36.878624 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_play_start) in callback plugin 2026-02-08 05:21:36.878635 | orchestrator | (): Expecting value: line 2 column 1 (char 1) 2026-02-08 05:21:36.878657 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_task_start) in callback plugin 2026-02-08 05:21:36.878668 | orchestrator | (): 'NoneType' object is not subscriptable 2026-02-08 05:21:36.878689 | orchestrator | 2026-02-08 05:21:36.878701 | orchestrator | PLAY [Apply role common] ******************************************************* 2026-02-08 05:21:36.878711 | orchestrator | 2026-02-08 05:21:36.878727 | orchestrator | TASK [common : include_tasks] ************************************************** 2026-02-08 05:21:36.878763 | orchestrator | Sunday 08 February 2026 05:21:27 +0000 (0:00:01.642) 0:00:01.642 ******* 2026-02-08 05:21:36.878775 | orchestrator | 
included: /ansible/roles/common/tasks/upgrade.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-02-08 05:21:36.878787 | orchestrator | 2026-02-08 05:21:36.878798 | orchestrator | TASK [common : Ensuring config directories exist] ****************************** 2026-02-08 05:21:36.878809 | orchestrator | Sunday 08 February 2026 05:21:29 +0000 (0:00:02.154) 0:00:03.797 ******* 2026-02-08 05:21:36.878820 | orchestrator | ok: [testbed-node-0] => (item=[{'service_name': 'cron'}, 'cron']) 2026-02-08 05:21:36.878830 | orchestrator | ok: [testbed-manager] => (item=[{'service_name': 'cron'}, 'cron']) 2026-02-08 05:21:36.878842 | orchestrator | ok: [testbed-node-1] => (item=[{'service_name': 'cron'}, 'cron']) 2026-02-08 05:21:36.878853 | orchestrator | ok: [testbed-node-2] => (item=[{'service_name': 'cron'}, 'cron']) 2026-02-08 05:21:36.878863 | orchestrator | ok: [testbed-node-3] => (item=[{'service_name': 'cron'}, 'cron']) 2026-02-08 05:21:36.878874 | orchestrator | ok: [testbed-node-4] => (item=[{'service_name': 'cron'}, 'cron']) 2026-02-08 05:21:36.878885 | orchestrator | ok: [testbed-manager] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-02-08 05:21:36.878895 | orchestrator | ok: [testbed-node-0] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-02-08 05:21:36.878905 | orchestrator | ok: [testbed-node-1] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-02-08 05:21:36.878917 | orchestrator | ok: [testbed-node-5] => (item=[{'service_name': 'cron'}, 'cron']) 2026-02-08 05:21:36.878929 | orchestrator | ok: [testbed-node-2] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-02-08 05:21:36.878940 | orchestrator | ok: [testbed-node-3] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-02-08 05:21:36.878950 | orchestrator | ok: [testbed-node-4] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-02-08 05:21:36.878961 | orchestrator | ok: [testbed-node-1] => 
(item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-02-08 05:21:36.878971 | orchestrator | ok: [testbed-manager] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-02-08 05:21:36.878999 | orchestrator | ok: [testbed-node-0] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-02-08 05:21:36.879011 | orchestrator | ok: [testbed-node-5] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-02-08 05:21:36.879021 | orchestrator | ok: [testbed-node-2] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-02-08 05:21:36.879031 | orchestrator | ok: [testbed-node-3] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-02-08 05:21:36.879043 | orchestrator | ok: [testbed-node-4] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-02-08 05:21:36.879054 | orchestrator | ok: [testbed-node-5] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-02-08 05:21:36.879065 | orchestrator | 2026-02-08 05:21:36.879075 | orchestrator | TASK [common : include_tasks] ************************************************** 2026-02-08 05:21:36.879086 | orchestrator | Sunday 08 February 2026 05:21:32 +0000 (0:00:02.255) 0:00:06.052 ******* 2026-02-08 05:21:36.879096 | orchestrator | included: /ansible/roles/common/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-02-08 05:21:36.879109 | orchestrator | 2026-02-08 05:21:36.879120 | orchestrator | TASK [service-cert-copy : common | Copying over extra CA certificates] ********* 2026-02-08 05:21:36.879131 | orchestrator | Sunday 08 February 2026 05:21:34 +0000 (0:00:02.227) 0:00:08.279 ******* 2026-02-08 05:21:36.879142 | orchestrator | ok: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-08 05:21:36.879165 | orchestrator | ok: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-08 05:21:36.879183 | orchestrator | ok: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-08 05:21:36.879195 | orchestrator | ok: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 
'dimensions': {}}}) 2026-02-08 05:21:36.879207 | orchestrator | ok: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-08 05:21:36.879235 | orchestrator | ok: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-08 05:21:38.446923 | orchestrator | ok: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-08 05:21:38.447030 | orchestrator | ok: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 
'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-08 05:21:38.447070 | orchestrator | ok: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-08 05:21:38.447100 | orchestrator | ok: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-08 05:21:38.447112 | orchestrator | ok: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 
'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-08 05:21:38.447124 | orchestrator | ok: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-08 05:21:38.447155 | orchestrator | ok: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-08 05:21:38.447168 | orchestrator | ok: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 
'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-08 05:21:38.447181 | orchestrator | ok: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-08 05:21:38.447204 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-08 05:21:38.447215 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-08 05:21:38.447232 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-08 05:21:38.447243 | orchestrator | ok: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-08 05:21:38.447255 | orchestrator | ok: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-08 05:21:38.447266 | orchestrator | ok: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-08 05:21:38.447278 | orchestrator | 2026-02-08 05:21:38.447291 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS certificate] *** 2026-02-08 05:21:38.447304 | orchestrator | Sunday 08 February 2026 05:21:37 +0000 (0:00:03.382) 0:00:11.662 
******* 2026-02-08 05:21:38.447324 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-08 05:21:39.325116 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-08 05:21:39.325248 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-08 05:21:39.325280 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-08 05:21:39.325293 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-08 05:21:39.325305 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-08 05:21:39.325317 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-08 05:21:39.325331 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-08 05:21:39.325360 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-08 05:21:39.325382 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-08 05:21:39.325394 | orchestrator | skipping: [testbed-node-1]
2026-02-08 05:21:39.325408 | orchestrator | skipping: [testbed-node-0]
2026-02-08 05:21:39.325419 | orchestrator | skipping: [testbed-manager]
2026-02-08 05:21:39.325431 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-08 05:21:39.325443 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-08 05:21:39.325454 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-08 05:21:39.325466 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-08 05:21:39.325477 | orchestrator | skipping: [testbed-node-2]
2026-02-08 05:21:39.325488 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-08 05:21:39.325517 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-08 05:21:41.482565 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-08 05:21:41.482671 | orchestrator | skipping: [testbed-node-3]
2026-02-08 05:21:41.482690 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-08 05:21:41.482758 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-08 05:21:41.482772 | orchestrator | skipping: [testbed-node-4]
2026-02-08 05:21:41.482784 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-08 05:21:41.482796 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-08 05:21:41.482807 | orchestrator | skipping: [testbed-node-5]
2026-02-08 05:21:41.482818 | orchestrator |
2026-02-08 05:21:41.482831 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS key] ******
2026-02-08 05:21:41.482844 | orchestrator | Sunday 08 February 2026 05:21:39 +0000 (0:00:01.627) 0:00:13.289 *******
2026-02-08 05:21:41.482855 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-08 05:21:41.482889 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-08 05:21:41.482921 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-08 05:21:41.482934 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-08 05:21:41.482946 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-08 05:21:41.482957 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-08 05:21:41.482969 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-08 05:21:41.482981 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-08 05:21:41.483002 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-08 05:21:41.483013 | orchestrator | skipping: [testbed-node-0]
2026-02-08 05:21:41.483026 | orchestrator | skipping: [testbed-manager]
2026-02-08 05:21:41.483045 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-08 05:21:47.257046 | orchestrator | skipping: [testbed-node-1]
2026-02-08 05:21:47.257179 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-08 05:21:47.257214 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-08 05:21:47.257245 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-08 05:21:47.257258 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-08 05:21:47.257291 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-08 05:21:47.257306 | orchestrator | skipping: [testbed-node-2]
2026-02-08 05:21:47.257318 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-08 05:21:47.257330 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-08 05:21:47.257360 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-08 05:21:47.257373 | orchestrator | skipping: [testbed-node-4]
2026-02-08 05:21:47.257384 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-08 05:21:47.257395 | orchestrator | skipping: [testbed-node-3]
2026-02-08 05:21:47.257413 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-08 05:21:47.257425 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-08 05:21:47.257437 | orchestrator | skipping: [testbed-node-5]
2026-02-08 05:21:47.257448 | orchestrator |
2026-02-08 05:21:47.257470 | orchestrator | TASK [common : Ensure /var/log/journal exists on EL10 systems] *****************
2026-02-08 05:21:47.257482 | orchestrator | Sunday 08 February 2026 05:21:41 +0000 (0:00:02.166) 0:00:15.456 *******
2026-02-08 05:21:47.257493 | orchestrator | skipping: [testbed-manager]
2026-02-08 05:21:47.257504 | orchestrator | skipping: [testbed-node-0]
2026-02-08 05:21:47.257515 | orchestrator | skipping: [testbed-node-1]
2026-02-08 05:21:47.257526 | orchestrator | skipping: [testbed-node-2]
2026-02-08 05:21:47.257563 | orchestrator | skipping: [testbed-node-3]
2026-02-08 05:21:47.257575 | orchestrator | skipping: [testbed-node-4]
2026-02-08 05:21:47.257588 | orchestrator | skipping: [testbed-node-5]
2026-02-08 05:21:47.257601 | orchestrator |
2026-02-08 05:21:47.257614 | orchestrator | TASK [common : Copying over /run subdirectories conf] **************************
2026-02-08 05:21:47.257626 | orchestrator | Sunday 08 February 2026 05:21:42 +0000 (0:00:01.053) 0:00:16.510 *******
2026-02-08 05:21:47.257638 | orchestrator | skipping: [testbed-manager]
2026-02-08 05:21:47.257651 | orchestrator | skipping: [testbed-node-0]
2026-02-08 05:21:47.257663 | orchestrator | skipping: [testbed-node-1]
2026-02-08 05:21:47.257676 | orchestrator | skipping: [testbed-node-2]
2026-02-08 05:21:47.257689 | orchestrator | skipping: [testbed-node-3]
2026-02-08 05:21:47.257701 | orchestrator | skipping: [testbed-node-4]
2026-02-08 05:21:47.257714 | orchestrator | skipping: [testbed-node-5]
2026-02-08 05:21:47.257727 | orchestrator |
2026-02-08 05:21:47.257739 | orchestrator | TASK [common : Restart systemd-tmpfiles] ***************************************
2026-02-08 05:21:47.257752 | orchestrator | Sunday 08 February 2026 05:21:43 +0000 (0:00:01.048) 0:00:17.558 *******
2026-02-08 05:21:47.257764 | orchestrator | skipping: [testbed-manager]
2026-02-08 05:21:47.257776 | orchestrator | skipping: [testbed-node-0]
2026-02-08 05:21:47.257790 | orchestrator | skipping: [testbed-node-1]
2026-02-08 05:21:47.257803 | orchestrator | skipping: [testbed-node-2]
2026-02-08 05:21:47.257815 | orchestrator | skipping: [testbed-node-3]
2026-02-08 05:21:47.257828 | orchestrator | skipping: [testbed-node-4]
2026-02-08 05:21:47.257841 | orchestrator | skipping: [testbed-node-5]
2026-02-08 05:21:47.257854 | orchestrator |
2026-02-08 05:21:47.257867 | orchestrator | TASK [common : Copying over kolla.target] **************************************
2026-02-08 05:21:47.257881 | orchestrator | Sunday 08 February 2026 05:21:44 +0000 (0:00:00.797) 0:00:18.356 *******
2026-02-08 05:21:47.257894 | orchestrator | ok: [testbed-manager]
2026-02-08 05:21:47.257908 | orchestrator | ok: [testbed-node-0]
2026-02-08 05:21:47.257920 | orchestrator | ok: [testbed-node-1]
2026-02-08 05:21:47.257930 | orchestrator | ok: [testbed-node-2]
2026-02-08 05:21:47.257941 | orchestrator | ok: [testbed-node-3]
2026-02-08 05:21:47.257951 | orchestrator | ok: [testbed-node-4]
2026-02-08 05:21:47.257962 | orchestrator | ok: [testbed-node-5]
2026-02-08 05:21:47.257973 | orchestrator |
2026-02-08 05:21:47.257983 | orchestrator | TASK [common : Copying over config.json files for services] ********************
2026-02-08 05:21:47.257994 | orchestrator | Sunday 08 February 2026 05:21:46 +0000 (0:00:01.906) 0:00:20.263 *******
2026-02-08 05:21:47.258069 | orchestrator | ok: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-08 05:21:49.056932 | orchestrator | ok: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-08 05:21:49.057077 | orchestrator | ok: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-08 05:21:49.057094 | orchestrator | ok: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-08 05:21:49.057106 | orchestrator | ok: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-08 05:21:49.057118 | orchestrator | ok: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-08 05:21:49.057131 | orchestrator | ok: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-08 05:21:49.057143 | orchestrator | ok: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-08 05:21:49.057172 | orchestrator | ok: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-08 05:21:49.057199 | orchestrator | ok: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-08 05:21:49.057211 | orchestrator | ok: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-08 05:21:49.057223 | orchestrator | ok: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-08 05:21:49.057235 | orchestrator | ok: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-08 05:21:49.057247 | orchestrator | ok: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-08 05:21:49.057260 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-08 05:21:49.057281 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-08 05:22:02.007784 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-08 05:22:02.007874 | orchestrator | ok: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-08 05:22:02.007885 | orchestrator | ok: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-08 05:22:02.007892 | orchestrator | ok: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-08 05:22:02.007899 | orchestrator | ok: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-08 05:22:02.007906 | orchestrator |
2026-02-08 05:22:02.007914 | orchestrator | TASK [common : Find custom fluentd input config files] *************************
2026-02-08 05:22:02.007923 | orchestrator | Sunday 08 February 2026 05:21:49 +0000 (0:00:03.645) 0:00:23.908 *******
2026-02-08 05:22:02.007929 | orchestrator | [WARNING]: Skipped
2026-02-08 05:22:02.007937 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' path due
2026-02-08 05:22:02.007944 | orchestrator | to this access issue:
2026-02-08 05:22:02.007951 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' is not a
2026-02-08 05:22:02.007957 | orchestrator | directory
2026-02-08 05:22:02.007964 | orchestrator | ok: [testbed-manager -> localhost]
2026-02-08 05:22:02.007971 | orchestrator |
2026-02-08 05:22:02.007978 | orchestrator | TASK [common : Find custom fluentd filter config files] ************************
2026-02-08 05:22:02.007984 | orchestrator | Sunday 08 February 2026 05:21:51 +0000 (0:00:01.293) 0:00:25.202 *******
2026-02-08 05:22:02.007990 | orchestrator | [WARNING]: Skipped
2026-02-08 05:22:02.007997 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' path due
2026-02-08 05:22:02.008003 | orchestrator | to this access issue:
2026-02-08 05:22:02.008009 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' is not a
2026-02-08 05:22:02.008015 | orchestrator | directory
2026-02-08 05:22:02.008022 | orchestrator | ok: [testbed-manager -> localhost]
2026-02-08 05:22:02.008028 | orchestrator |
2026-02-08 05:22:02.008052 | orchestrator | TASK [common : Find custom fluentd format config files] ************************
2026-02-08 05:22:02.008059 | orchestrator | Sunday 08 February 2026 05:21:52 +0000 (0:00:00.924) 0:00:26.127 *******
2026-02-08 05:22:02.008065 | orchestrator | [WARNING]: Skipped
2026-02-08 05:22:02.008071 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' path due
2026-02-08 05:22:02.008078 | orchestrator | to this access issue:
2026-02-08 05:22:02.008084 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' is not a
2026-02-08 05:22:02.008090 | orchestrator | directory
2026-02-08 05:22:02.008097 | orchestrator | ok: [testbed-manager -> localhost]
2026-02-08 05:22:02.008103 | orchestrator |
2026-02-08 05:22:02.008109 | orchestrator | TASK [common : Find custom fluentd output config files] ************************
2026-02-08 05:22:02.008115 | orchestrator | Sunday 08 February 2026 05:21:53 +0000 (0:00:00.899) 0:00:27.026 *******
2026-02-08 05:22:02.008121 | orchestrator | [WARNING]: Skipped
2026-02-08 05:22:02.008128 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' path due
2026-02-08 05:22:02.008134 | orchestrator | to this access issue:
2026-02-08 05:22:02.008140 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' is not a
2026-02-08 05:22:02.008146 | orchestrator | directory
2026-02-08 05:22:02.008152 | orchestrator | ok: [testbed-manager -> localhost]
2026-02-08 05:22:02.008159 | orchestrator |
2026-02-08 05:22:02.008215 | orchestrator | TASK [common : Copying over fluentd.conf] **************************************
2026-02-08 05:22:02.008224 | orchestrator | Sunday 08 February 2026 05:21:53 +0000 (0:00:00.849) 0:00:27.876 *******
2026-02-08 05:22:02.008231 | orchestrator | ok: [testbed-manager]
2026-02-08 05:22:02.008237 | orchestrator | ok: [testbed-node-0]
2026-02-08 05:22:02.008243 | orchestrator | ok: [testbed-node-2]
2026-02-08 05:22:02.008249 | orchestrator | ok: [testbed-node-3]
2026-02-08 05:22:02.008256 | orchestrator | ok: [testbed-node-4]
2026-02-08 05:22:02.008262 | orchestrator | ok: [testbed-node-1]
2026-02-08 05:22:02.008268 | orchestrator | ok: [testbed-node-5]
2026-02-08 05:22:02.008274 | orchestrator |
2026-02-08 05:22:02.008280 | orchestrator | TASK [common : Copying over cron logrotate config file] ************************
2026-02-08 05:22:02.008287 | orchestrator | Sunday 08 February 2026 05:21:56 +0000 (0:00:02.903) 0:00:30.780 *******
2026-02-08 05:22:02.008307 | orchestrator | ok: [testbed-manager] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2026-02-08 05:22:02.008317 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2026-02-08 05:22:02.008324 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2026-02-08 05:22:02.008330 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2026-02-08 05:22:02.008337 | orchestrator | ok: [testbed-node-3] =>
(item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-02-08 05:22:02.008345 | orchestrator | ok: [testbed-node-4] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-02-08 05:22:02.008353 | orchestrator | ok: [testbed-node-5] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-02-08 05:22:02.008360 | orchestrator | 2026-02-08 05:22:02.008368 | orchestrator | TASK [common : Ensure RabbitMQ Erlang cookie exists] *************************** 2026-02-08 05:22:02.008376 | orchestrator | Sunday 08 February 2026 05:21:58 +0000 (0:00:02.201) 0:00:32.982 ******* 2026-02-08 05:22:02.008384 | orchestrator | ok: [testbed-manager] 2026-02-08 05:22:02.008391 | orchestrator | ok: [testbed-node-0] 2026-02-08 05:22:02.008399 | orchestrator | ok: [testbed-node-1] 2026-02-08 05:22:02.008407 | orchestrator | ok: [testbed-node-2] 2026-02-08 05:22:02.008414 | orchestrator | ok: [testbed-node-3] 2026-02-08 05:22:02.008422 | orchestrator | ok: [testbed-node-4] 2026-02-08 05:22:02.008429 | orchestrator | ok: [testbed-node-5] 2026-02-08 05:22:02.008436 | orchestrator | 2026-02-08 05:22:02.008444 | orchestrator | TASK [common : Ensuring config directories have correct owner and permission] *** 2026-02-08 05:22:02.008457 | orchestrator | Sunday 08 February 2026 05:22:01 +0000 (0:00:02.080) 0:00:35.062 ******* 2026-02-08 05:22:02.008467 | orchestrator | ok: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-08 05:22:02.008477 | orchestrator | skipping: 
[testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-08 05:22:02.008486 | orchestrator | ok: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-08 05:22:02.008496 | orchestrator | ok: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-08 05:22:02.008514 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': 
'/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-08 05:22:02.851629 | orchestrator | ok: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-08 05:22:02.851723 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-08 05:22:02.851751 | orchestrator | ok: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-08 05:22:02.851761 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-08 05:22:02.851769 | orchestrator | ok: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-08 05:22:02.851777 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', 
'/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-08 05:22:02.851787 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-08 05:22:02.851816 | orchestrator | ok: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-08 05:22:02.851826 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-08 05:22:02.851840 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-08 05:22:02.851850 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-08 05:22:02.851858 | orchestrator | ok: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-08 05:22:02.851868 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}})  2026-02-08 05:22:02.851876 | orchestrator | ok: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-08 05:22:02.851885 | orchestrator | ok: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-08 05:22:02.851910 | orchestrator | ok: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-08 05:22:09.423374 | orchestrator | 2026-02-08 05:22:09.423491 | orchestrator | TASK [common : Copy rabbitmq-env.conf to kolla toolbox] ************************ 2026-02-08 05:22:09.423514 | orchestrator | Sunday 08 February 2026 05:22:02 +0000 (0:00:01.891) 0:00:36.954 ******* 2026-02-08 05:22:09.423524 | orchestrator | ok: [testbed-manager] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-02-08 05:22:09.423608 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-02-08 
05:22:09.423620 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-02-08 05:22:09.423629 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-02-08 05:22:09.423638 | orchestrator | ok: [testbed-node-3] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-02-08 05:22:09.423647 | orchestrator | ok: [testbed-node-4] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-02-08 05:22:09.423656 | orchestrator | ok: [testbed-node-5] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-02-08 05:22:09.423665 | orchestrator | 2026-02-08 05:22:09.423674 | orchestrator | TASK [common : Copy rabbitmq erl_inetrc to kolla toolbox] ********************** 2026-02-08 05:22:09.423683 | orchestrator | Sunday 08 February 2026 05:22:04 +0000 (0:00:01.998) 0:00:38.953 ******* 2026-02-08 05:22:09.423691 | orchestrator | ok: [testbed-manager] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-02-08 05:22:09.423700 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-02-08 05:22:09.423709 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-02-08 05:22:09.423718 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-02-08 05:22:09.423727 | orchestrator | ok: [testbed-node-3] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-02-08 05:22:09.423735 | orchestrator | ok: [testbed-node-4] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-02-08 05:22:09.423744 | orchestrator | ok: [testbed-node-5] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-02-08 05:22:09.423753 | orchestrator | 2026-02-08 05:22:09.423762 | orchestrator | TASK [service-check-containers : common | Check containers] ******************** 2026-02-08 05:22:09.423771 | orchestrator | Sunday 08 February 2026 
05:22:07 +0000 (0:00:02.141) 0:00:41.094 ******* 2026-02-08 05:22:09.423782 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-08 05:22:09.423795 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-08 05:22:09.423804 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-08 05:22:09.423814 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-08 05:22:09.423861 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-08 05:22:09.423872 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-08 05:22:09.423882 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-08 05:22:09.423891 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-08 05:22:09.423900 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-08 05:22:09.423910 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-08 05:22:09.423923 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-08 05:22:09.423946 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-08 05:22:11.700498 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': 
['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-08 05:22:11.700655 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-08 05:22:11.700672 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-08 05:22:11.700683 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-08 05:22:11.700693 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-08 05:22:11.700702 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-08 05:22:11.700748 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-08 05:22:11.700758 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-08 05:22:11.700783 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': 
{'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-08 05:22:11.700795 | orchestrator | 2026-02-08 05:22:11.700813 | orchestrator | TASK [service-check-containers : common | Notify handlers to restart containers] *** 2026-02-08 05:22:11.700829 | orchestrator | Sunday 08 February 2026 05:22:10 +0000 (0:00:03.142) 0:00:44.236 ******* 2026-02-08 05:22:11.700845 | orchestrator | changed: [testbed-manager] => { 2026-02-08 05:22:11.700862 | orchestrator |  "msg": "Notifying handlers" 2026-02-08 05:22:11.700876 | orchestrator | } 2026-02-08 05:22:11.700890 | orchestrator | changed: [testbed-node-0] => { 2026-02-08 05:22:11.700903 | orchestrator |  "msg": "Notifying handlers" 2026-02-08 05:22:11.700917 | orchestrator | } 2026-02-08 05:22:11.700931 | orchestrator | changed: [testbed-node-1] => { 2026-02-08 05:22:11.700945 | orchestrator |  "msg": "Notifying handlers" 2026-02-08 05:22:11.700959 | orchestrator | } 2026-02-08 05:22:11.700972 | orchestrator | changed: [testbed-node-2] => { 2026-02-08 05:22:11.700986 | orchestrator |  "msg": "Notifying handlers" 2026-02-08 05:22:11.701000 | orchestrator | } 2026-02-08 05:22:11.701013 | orchestrator | changed: [testbed-node-3] => { 2026-02-08 05:22:11.701027 | orchestrator |  "msg": "Notifying handlers" 2026-02-08 05:22:11.701042 | orchestrator | } 2026-02-08 05:22:11.701057 | orchestrator | changed: [testbed-node-4] => { 2026-02-08 05:22:11.701072 | orchestrator |  "msg": "Notifying handlers" 2026-02-08 05:22:11.701087 | orchestrator | } 2026-02-08 05:22:11.701105 | orchestrator | changed: [testbed-node-5] => { 2026-02-08 05:22:11.701125 | orchestrator |  "msg": "Notifying handlers" 2026-02-08 05:22:11.701145 | orchestrator | } 2026-02-08 05:22:11.701166 | orchestrator | 2026-02-08 05:22:11.701185 | orchestrator | TASK [service-check-containers : 
Include tasks] ******************************** 2026-02-08 05:22:11.701204 | orchestrator | Sunday 08 February 2026 05:22:11 +0000 (0:00:01.065) 0:00:45.302 ******* 2026-02-08 05:22:11.701228 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-08 05:22:11.701271 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-08 05:22:11.701295 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-08 05:22:11.701317 | orchestrator | skipping: [testbed-manager] 2026-02-08 05:22:11.701338 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-08 05:22:11.701374 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-08 05:22:14.287863 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-08 05:22:14.287973 | orchestrator | skipping: [testbed-node-0] 2026-02-08 05:22:14.287992 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-08 05:22:14.288006 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-08 05:22:14.288042 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-08 05:22:14.288054 | orchestrator | skipping: [testbed-node-1] 2026-02-08 05:22:14.288082 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-08 05:22:14.288094 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-08 05:22:14.288109 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-08 05:22:14.288119 | orchestrator | skipping: [testbed-node-2] 2026-02-08 05:22:14.288149 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  
2026-02-08 05:22:14.288161 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-08 05:22:14.288169 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-08 05:22:14.288191 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_handler_task_start) in callback 2026-02-08 05:22:14.288201 | orchestrator | plugin (): 'NoneType' object is not subscriptable 2026-02-08 05:22:14.288219 | orchestrator | skipping: [testbed-node-3] 2026-02-08 05:22:14.288228 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 
'dimensions': {}}})  2026-02-08 05:22:14.288236 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-08 05:22:14.288246 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-08 05:22:14.288254 | orchestrator | skipping: [testbed-node-4] 2026-02-08 05:22:14.288267 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-08 05:22:14.288285 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-08 05:23:37.757163 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-08 05:23:37.757269 | orchestrator | skipping: [testbed-node-5] 2026-02-08 05:23:37.757307 | orchestrator | 2026-02-08 05:23:37.757319 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-02-08 05:23:37.757329 | orchestrator | Sunday 08 February 2026 05:22:13 +0000 (0:00:02.102) 0:00:47.404 ******* 2026-02-08 05:23:37.757338 | orchestrator | 2026-02-08 05:23:37.757347 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-02-08 05:23:37.757356 | orchestrator | Sunday 08 February 2026 05:22:13 +0000 (0:00:00.096) 0:00:47.500 ******* 2026-02-08 05:23:37.757364 | orchestrator | 2026-02-08 05:23:37.757373 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-02-08 05:23:37.757382 | orchestrator | Sunday 08 February 2026 05:22:13 +0000 (0:00:00.086) 0:00:47.587 ******* 2026-02-08 05:23:37.757390 | orchestrator | 2026-02-08 05:23:37.757399 | orchestrator | TASK 
[common : Flush handlers] ************************************************* 2026-02-08 05:23:37.757407 | orchestrator | Sunday 08 February 2026 05:22:13 +0000 (0:00:00.073) 0:00:47.661 ******* 2026-02-08 05:23:37.757416 | orchestrator | 2026-02-08 05:23:37.757425 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-02-08 05:23:37.757434 | orchestrator | Sunday 08 February 2026 05:22:13 +0000 (0:00:00.070) 0:00:47.731 ******* 2026-02-08 05:23:37.757442 | orchestrator | 2026-02-08 05:23:37.757451 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-02-08 05:23:37.757460 | orchestrator | Sunday 08 February 2026 05:22:14 +0000 (0:00:00.346) 0:00:48.078 ******* 2026-02-08 05:23:37.757468 | orchestrator | 2026-02-08 05:23:37.757477 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-02-08 05:23:37.757486 | orchestrator | Sunday 08 February 2026 05:22:14 +0000 (0:00:00.073) 0:00:48.152 ******* 2026-02-08 05:23:37.757494 | orchestrator | 2026-02-08 05:23:37.757503 | orchestrator | RUNNING HANDLER [common : Restart fluentd container] *************************** 2026-02-08 05:23:37.757511 | orchestrator | Sunday 08 February 2026 05:22:14 +0000 (0:00:00.103) 0:00:48.255 ******* 2026-02-08 05:23:37.757520 | orchestrator | changed: [testbed-manager] 2026-02-08 05:23:37.757529 | orchestrator | changed: [testbed-node-4] 2026-02-08 05:23:37.757538 | orchestrator | changed: [testbed-node-3] 2026-02-08 05:23:37.757546 | orchestrator | changed: [testbed-node-1] 2026-02-08 05:23:37.757555 | orchestrator | changed: [testbed-node-5] 2026-02-08 05:23:37.757564 | orchestrator | changed: [testbed-node-2] 2026-02-08 05:23:37.757573 | orchestrator | changed: [testbed-node-0] 2026-02-08 05:23:37.757582 | orchestrator | 2026-02-08 05:23:37.757590 | orchestrator | RUNNING HANDLER [common : Restart kolla-toolbox container] ********************* 
2026-02-08 05:23:37.757599 | orchestrator | Sunday 08 February 2026 05:22:48 +0000 (0:00:34.713) 0:01:22.969 ******* 2026-02-08 05:23:37.757608 | orchestrator | changed: [testbed-manager] 2026-02-08 05:23:37.757616 | orchestrator | changed: [testbed-node-3] 2026-02-08 05:23:37.757625 | orchestrator | changed: [testbed-node-5] 2026-02-08 05:23:37.757633 | orchestrator | changed: [testbed-node-4] 2026-02-08 05:23:37.757642 | orchestrator | changed: [testbed-node-1] 2026-02-08 05:23:37.757650 | orchestrator | changed: [testbed-node-2] 2026-02-08 05:23:37.757685 | orchestrator | changed: [testbed-node-0] 2026-02-08 05:23:37.757697 | orchestrator | 2026-02-08 05:23:37.757707 | orchestrator | RUNNING HANDLER [common : Initializing toolbox container using normal user] **** 2026-02-08 05:23:37.757718 | orchestrator | Sunday 08 February 2026 05:23:23 +0000 (0:00:34.839) 0:01:57.808 ******* 2026-02-08 05:23:37.757728 | orchestrator | ok: [testbed-manager] 2026-02-08 05:23:37.757739 | orchestrator | ok: [testbed-node-1] 2026-02-08 05:23:37.757749 | orchestrator | ok: [testbed-node-0] 2026-02-08 05:23:37.757759 | orchestrator | ok: [testbed-node-2] 2026-02-08 05:23:37.757769 | orchestrator | ok: [testbed-node-3] 2026-02-08 05:23:37.757780 | orchestrator | ok: [testbed-node-4] 2026-02-08 05:23:37.757790 | orchestrator | ok: [testbed-node-5] 2026-02-08 05:23:37.757800 | orchestrator | 2026-02-08 05:23:37.757810 | orchestrator | RUNNING HANDLER [common : Restart cron container] ****************************** 2026-02-08 05:23:37.757821 | orchestrator | Sunday 08 February 2026 05:23:25 +0000 (0:00:02.038) 0:01:59.847 ******* 2026-02-08 05:23:37.757892 | orchestrator | changed: [testbed-manager] 2026-02-08 05:23:37.757903 | orchestrator | changed: [testbed-node-3] 2026-02-08 05:23:37.757914 | orchestrator | changed: [testbed-node-5] 2026-02-08 05:23:37.757923 | orchestrator | changed: [testbed-node-4] 2026-02-08 05:23:37.757934 | orchestrator | changed: [testbed-node-1] 2026-02-08 
05:23:37.757944 | orchestrator | changed: [testbed-node-2] 2026-02-08 05:23:37.757953 | orchestrator | changed: [testbed-node-0] 2026-02-08 05:23:37.757964 | orchestrator | 2026-02-08 05:23:37.757975 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-08 05:23:37.757985 | orchestrator | testbed-manager : ok=22  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-02-08 05:23:37.757996 | orchestrator | testbed-node-0 : ok=18  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-02-08 05:23:37.758005 | orchestrator | testbed-node-1 : ok=18  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-02-08 05:23:37.758062 | orchestrator | testbed-node-2 : ok=18  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-02-08 05:23:37.758089 | orchestrator | testbed-node-3 : ok=18  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-02-08 05:23:37.758098 | orchestrator | testbed-node-4 : ok=18  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-02-08 05:23:37.758107 | orchestrator | testbed-node-5 : ok=18  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-02-08 05:23:37.758116 | orchestrator | 2026-02-08 05:23:37.758125 | orchestrator | 2026-02-08 05:23:37.758134 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-08 05:23:37.758143 | orchestrator | Sunday 08 February 2026 05:23:37 +0000 (0:00:11.324) 0:02:11.171 ******* 2026-02-08 05:23:37.758164 | orchestrator | =============================================================================== 2026-02-08 05:23:37.758173 | orchestrator | common : Restart kolla-toolbox container ------------------------------- 34.84s 2026-02-08 05:23:37.758182 | orchestrator | common : Restart fluentd container ------------------------------------- 34.71s 2026-02-08 05:23:37.758190 | orchestrator | common : Restart cron 
container ---------------------------------------- 11.32s 2026-02-08 05:23:37.758199 | orchestrator | common : Copying over config.json files for services -------------------- 3.65s 2026-02-08 05:23:37.758208 | orchestrator | service-cert-copy : common | Copying over extra CA certificates --------- 3.38s 2026-02-08 05:23:37.758217 | orchestrator | service-check-containers : common | Check containers -------------------- 3.14s 2026-02-08 05:23:37.758225 | orchestrator | common : Copying over fluentd.conf -------------------------------------- 2.90s 2026-02-08 05:23:37.758234 | orchestrator | common : Ensuring config directories exist ------------------------------ 2.26s 2026-02-08 05:23:37.758243 | orchestrator | common : include_tasks -------------------------------------------------- 2.23s 2026-02-08 05:23:37.758251 | orchestrator | common : Copying over cron logrotate config file ------------------------ 2.20s 2026-02-08 05:23:37.758260 | orchestrator | service-cert-copy : common | Copying over backend internal TLS key ------ 2.17s 2026-02-08 05:23:37.758269 | orchestrator | common : include_tasks -------------------------------------------------- 2.15s 2026-02-08 05:23:37.758277 | orchestrator | common : Copy rabbitmq erl_inetrc to kolla toolbox ---------------------- 2.14s 2026-02-08 05:23:37.758286 | orchestrator | service-check-containers : Include tasks -------------------------------- 2.10s 2026-02-08 05:23:37.758295 | orchestrator | common : Ensure RabbitMQ Erlang cookie exists --------------------------- 2.08s 2026-02-08 05:23:37.758303 | orchestrator | common : Initializing toolbox container using normal user --------------- 2.04s 2026-02-08 05:23:37.758312 | orchestrator | common : Copy rabbitmq-env.conf to kolla toolbox ------------------------ 2.00s 2026-02-08 05:23:37.758327 | orchestrator | common : Copying over kolla.target -------------------------------------- 1.91s 2026-02-08 05:23:37.758336 | orchestrator | common : Ensuring config directories 
have correct owner and permission --- 1.89s 2026-02-08 05:23:37.758345 | orchestrator | service-cert-copy : common | Copying over backend internal TLS certificate --- 1.63s 2026-02-08 05:23:38.108356 | orchestrator | + osism apply -a upgrade loadbalancer 2026-02-08 05:23:40.218273 | orchestrator | 2026-02-08 05:23:40 | INFO  | Task ff6e2a58-d532-4061-8730-699da02b9933 (loadbalancer) was prepared for execution. 2026-02-08 05:23:40.218374 | orchestrator | 2026-02-08 05:23:40 | INFO  | It takes a moment until task ff6e2a58-d532-4061-8730-699da02b9933 (loadbalancer) has been started and output is visible here. 2026-02-08 05:24:01.273058 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_play_start) in callback plugin 2026-02-08 05:24:01.273135 | orchestrator | (): Expecting value: line 2 column 1 (char 1) 2026-02-08 05:24:01.273148 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_task_start) in callback plugin 2026-02-08 05:24:01.273152 | orchestrator | (): 'NoneType' object is not subscriptable 2026-02-08 05:24:01.273172 | orchestrator | 2026-02-08 05:24:01.273178 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-08 05:24:01.273182 | orchestrator | 2026-02-08 05:24:01.273186 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-08 05:24:01.273190 | orchestrator | Sunday 08 February 2026 05:23:45 +0000 (0:00:01.195) 0:00:01.195 ******* 2026-02-08 05:24:01.273194 | orchestrator | ok: [testbed-node-0] 2026-02-08 05:24:01.273199 | orchestrator | ok: [testbed-node-1] 2026-02-08 05:24:01.273203 | orchestrator | ok: [testbed-node-2] 2026-02-08 05:24:01.273207 | orchestrator | 2026-02-08 05:24:01.273211 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-08 05:24:01.273215 | orchestrator | Sunday 08 February 2026 05:23:46 +0000 (0:00:00.769) 0:00:01.965 ******* 2026-02-08 05:24:01.273219 
| orchestrator | ok: [testbed-node-0] => (item=enable_loadbalancer_True) 2026-02-08 05:24:01.273223 | orchestrator | ok: [testbed-node-1] => (item=enable_loadbalancer_True) 2026-02-08 05:24:01.273227 | orchestrator | ok: [testbed-node-2] => (item=enable_loadbalancer_True) 2026-02-08 05:24:01.273231 | orchestrator | 2026-02-08 05:24:01.273235 | orchestrator | PLAY [Apply role loadbalancer] ************************************************* 2026-02-08 05:24:01.273239 | orchestrator | 2026-02-08 05:24:01.273243 | orchestrator | TASK [loadbalancer : include_tasks] ******************************************** 2026-02-08 05:24:01.273247 | orchestrator | Sunday 08 February 2026 05:23:47 +0000 (0:00:00.948) 0:00:02.914 ******* 2026-02-08 05:24:01.273251 | orchestrator | included: /ansible/roles/loadbalancer/tasks/upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-08 05:24:01.273255 | orchestrator | 2026-02-08 05:24:01.273259 | orchestrator | TASK [loadbalancer : Stop and remove containers for haproxy exporter containers] *** 2026-02-08 05:24:01.273263 | orchestrator | Sunday 08 February 2026 05:23:48 +0000 (0:00:01.211) 0:00:04.125 ******* 2026-02-08 05:24:01.273267 | orchestrator | ok: [testbed-node-1] 2026-02-08 05:24:01.273270 | orchestrator | ok: [testbed-node-0] 2026-02-08 05:24:01.273274 | orchestrator | ok: [testbed-node-2] 2026-02-08 05:24:01.273278 | orchestrator | 2026-02-08 05:24:01.273282 | orchestrator | TASK [loadbalancer : Removing config for haproxy exporter] ********************* 2026-02-08 05:24:01.273286 | orchestrator | Sunday 08 February 2026 05:23:50 +0000 (0:00:01.287) 0:00:05.413 ******* 2026-02-08 05:24:01.273290 | orchestrator | ok: [testbed-node-1] 2026-02-08 05:24:01.273294 | orchestrator | ok: [testbed-node-2] 2026-02-08 05:24:01.273298 | orchestrator | ok: [testbed-node-0] 2026-02-08 05:24:01.273301 | orchestrator | 2026-02-08 05:24:01.273305 | orchestrator | TASK [loadbalancer : Check IPv6 support] 
*************************************** 2026-02-08 05:24:01.273323 | orchestrator | Sunday 08 February 2026 05:23:51 +0000 (0:00:01.074) 0:00:06.488 ******* 2026-02-08 05:24:01.273328 | orchestrator | ok: [testbed-node-1] 2026-02-08 05:24:01.273331 | orchestrator | ok: [testbed-node-0] 2026-02-08 05:24:01.273335 | orchestrator | ok: [testbed-node-2] 2026-02-08 05:24:01.273339 | orchestrator | 2026-02-08 05:24:01.273343 | orchestrator | TASK [Setting sysctl values] *************************************************** 2026-02-08 05:24:01.273347 | orchestrator | Sunday 08 February 2026 05:23:51 +0000 (0:00:00.656) 0:00:07.144 ******* 2026-02-08 05:24:01.273351 | orchestrator | included: sysctl for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-08 05:24:01.273355 | orchestrator | 2026-02-08 05:24:01.273359 | orchestrator | TASK [sysctl : Check IPv6 support] ********************************************* 2026-02-08 05:24:01.273363 | orchestrator | Sunday 08 February 2026 05:23:53 +0000 (0:00:01.166) 0:00:08.310 ******* 2026-02-08 05:24:01.273366 | orchestrator | ok: [testbed-node-0] 2026-02-08 05:24:01.273370 | orchestrator | ok: [testbed-node-1] 2026-02-08 05:24:01.273374 | orchestrator | ok: [testbed-node-2] 2026-02-08 05:24:01.273378 | orchestrator | 2026-02-08 05:24:01.273382 | orchestrator | TASK [sysctl : Setting sysctl values] ****************************************** 2026-02-08 05:24:01.273386 | orchestrator | Sunday 08 February 2026 05:23:53 +0000 (0:00:00.695) 0:00:09.006 ******* 2026-02-08 05:24:01.273390 | orchestrator | ok: [testbed-node-1] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1}) 2026-02-08 05:24:01.273393 | orchestrator | ok: [testbed-node-0] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1}) 2026-02-08 05:24:01.273397 | orchestrator | ok: [testbed-node-2] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1}) 2026-02-08 05:24:01.273401 | orchestrator | ok: [testbed-node-1] => (item={'name': 
'net.ipv4.ip_nonlocal_bind', 'value': 1}) 2026-02-08 05:24:01.273405 | orchestrator | ok: [testbed-node-0] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1}) 2026-02-08 05:24:01.273409 | orchestrator | ok: [testbed-node-2] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1}) 2026-02-08 05:24:01.273413 | orchestrator | ok: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'}) 2026-02-08 05:24:01.273418 | orchestrator | ok: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'}) 2026-02-08 05:24:01.273421 | orchestrator | ok: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'}) 2026-02-08 05:24:01.273425 | orchestrator | ok: [testbed-node-1] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128}) 2026-02-08 05:24:01.273429 | orchestrator | ok: [testbed-node-2] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128}) 2026-02-08 05:24:01.273442 | orchestrator | ok: [testbed-node-0] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128}) 2026-02-08 05:24:01.273447 | orchestrator | 2026-02-08 05:24:01.273451 | orchestrator | TASK [module-load : Load modules] ********************************************** 2026-02-08 05:24:01.273454 | orchestrator | Sunday 08 February 2026 05:23:56 +0000 (0:00:02.510) 0:00:11.516 ******* 2026-02-08 05:24:01.273458 | orchestrator | ok: [testbed-node-1] => (item=ip_vs) 2026-02-08 05:24:01.273462 | orchestrator | ok: [testbed-node-0] => (item=ip_vs) 2026-02-08 05:24:01.273466 | orchestrator | ok: [testbed-node-2] => (item=ip_vs) 2026-02-08 05:24:01.273470 | orchestrator | 2026-02-08 05:24:01.273474 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2026-02-08 05:24:01.273480 | orchestrator | Sunday 08 February 2026 05:23:57 +0000 (0:00:00.959) 0:00:12.476 ******* 2026-02-08 05:24:01.273484 | orchestrator | ok: [testbed-node-1] => (item=ip_vs) 2026-02-08 05:24:01.273488 | 
orchestrator | ok: [testbed-node-2] => (item=ip_vs) 2026-02-08 05:24:01.273492 | orchestrator | ok: [testbed-node-0] => (item=ip_vs) 2026-02-08 05:24:01.273496 | orchestrator | 2026-02-08 05:24:01.273500 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2026-02-08 05:24:01.273504 | orchestrator | Sunday 08 February 2026 05:23:58 +0000 (0:00:01.177) 0:00:13.653 ******* 2026-02-08 05:24:01.273511 | orchestrator | skipping: [testbed-node-0] => (item=ip_vs)  2026-02-08 05:24:01.273515 | orchestrator | skipping: [testbed-node-0] 2026-02-08 05:24:01.273519 | orchestrator | skipping: [testbed-node-1] => (item=ip_vs)  2026-02-08 05:24:01.273523 | orchestrator | skipping: [testbed-node-1] 2026-02-08 05:24:01.273527 | orchestrator | skipping: [testbed-node-2] => (item=ip_vs)  2026-02-08 05:24:01.273530 | orchestrator | skipping: [testbed-node-2] 2026-02-08 05:24:01.273534 | orchestrator | 2026-02-08 05:24:01.273538 | orchestrator | TASK [loadbalancer : Ensuring config directories exist] ************************ 2026-02-08 05:24:01.273542 | orchestrator | Sunday 08 February 2026 05:23:59 +0000 (0:00:01.230) 0:00:14.883 ******* 2026-02-08 05:24:01.273547 | orchestrator | ok: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-02-08 05:24:01.273554 | orchestrator | ok: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 
'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-02-08 05:24:01.273559 | orchestrator | ok: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-02-08 05:24:01.273563 | orchestrator | ok: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-08 05:24:01.273570 | orchestrator | ok: [testbed-node-1] => 
(item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-08 05:24:07.165667 | orchestrator | ok: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-08 05:24:07.165881 | orchestrator | ok: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-08 05:24:07.165911 | orchestrator | ok: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 
'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-08 05:24:07.165932 | orchestrator | ok: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-08 05:24:07.165950 | orchestrator | 2026-02-08 05:24:07.165972 | orchestrator | TASK [loadbalancer : Ensuring haproxy service config subdir exists] ************ 2026-02-08 05:24:07.165992 | orchestrator | Sunday 08 February 2026 05:24:01 +0000 (0:00:01.668) 0:00:16.552 ******* 2026-02-08 05:24:07.166009 | orchestrator | ok: [testbed-node-0] 2026-02-08 05:24:07.166099 | orchestrator | ok: [testbed-node-1] 2026-02-08 05:24:07.166116 | orchestrator | ok: [testbed-node-2] 2026-02-08 05:24:07.166132 | orchestrator | 2026-02-08 05:24:07.166150 | orchestrator | TASK [loadbalancer : Ensuring proxysql service config subdirectories exist] **** 2026-02-08 05:24:07.166168 | orchestrator | Sunday 08 February 2026 05:24:02 +0000 (0:00:00.986) 0:00:17.538 ******* 2026-02-08 05:24:07.166185 | orchestrator | ok: [testbed-node-0] => (item=users) 2026-02-08 05:24:07.166204 | orchestrator | ok: [testbed-node-1] => (item=users) 2026-02-08 05:24:07.166219 | orchestrator | ok: [testbed-node-2] => (item=users) 2026-02-08 
05:24:07.166233 | orchestrator | ok: [testbed-node-0] => (item=rules) 2026-02-08 05:24:07.166248 | orchestrator | ok: [testbed-node-1] => (item=rules) 2026-02-08 05:24:07.166265 | orchestrator | ok: [testbed-node-2] => (item=rules) 2026-02-08 05:24:07.166281 | orchestrator | 2026-02-08 05:24:07.166296 | orchestrator | TASK [loadbalancer : Ensuring keepalived checks subdir exists] ***************** 2026-02-08 05:24:07.166312 | orchestrator | Sunday 08 February 2026 05:24:04 +0000 (0:00:01.809) 0:00:19.347 ******* 2026-02-08 05:24:07.166330 | orchestrator | ok: [testbed-node-0] 2026-02-08 05:24:07.166348 | orchestrator | ok: [testbed-node-1] 2026-02-08 05:24:07.166362 | orchestrator | ok: [testbed-node-2] 2026-02-08 05:24:07.166376 | orchestrator | 2026-02-08 05:24:07.166391 | orchestrator | TASK [loadbalancer : Remove mariadb.cfg if proxysql enabled] ******************* 2026-02-08 05:24:07.166406 | orchestrator | Sunday 08 February 2026 05:24:05 +0000 (0:00:01.290) 0:00:20.638 ******* 2026-02-08 05:24:07.166448 | orchestrator | ok: [testbed-node-0] 2026-02-08 05:24:07.166463 | orchestrator | ok: [testbed-node-1] 2026-02-08 05:24:07.166479 | orchestrator | ok: [testbed-node-2] 2026-02-08 05:24:07.166495 | orchestrator | 2026-02-08 05:24:07.166509 | orchestrator | TASK [loadbalancer : Removing checks for services which are disabled] ********** 2026-02-08 05:24:07.166525 | orchestrator | Sunday 08 February 2026 05:24:06 +0000 (0:00:01.175) 0:00:21.814 ******* 2026-02-08 05:24:07.166579 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-02-08 05:24:07.166600 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-08 05:24:07.166619 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-08 05:24:07.166640 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy-ssh:9.6.20251208', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__11410df5967a425f00a675e5ebc1c4772d4632e1', 
'__omit_place_holder__11410df5967a425f00a675e5ebc1c4772d4632e1'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-02-08 05:24:07.166660 | orchestrator | skipping: [testbed-node-0] 2026-02-08 05:24:07.166680 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-02-08 05:24:07.166725 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-08 05:24:07.166775 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': 
['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-08 05:24:07.166827 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy-ssh:9.6.20251208', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__11410df5967a425f00a675e5ebc1c4772d4632e1', '__omit_place_holder__11410df5967a425f00a675e5ebc1c4772d4632e1'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-02-08 05:24:10.316629 | orchestrator | skipping: [testbed-node-1] 2026-02-08 05:24:10.316820 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-02-08 05:24:10.316854 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-08 05:24:10.316878 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-08 05:24:10.316900 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy-ssh:9.6.20251208', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__11410df5967a425f00a675e5ebc1c4772d4632e1', '__omit_place_holder__11410df5967a425f00a675e5ebc1c4772d4632e1'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-02-08 05:24:10.316951 | orchestrator | skipping: [testbed-node-2] 2026-02-08 05:24:10.316971 | orchestrator | 2026-02-08 05:24:10.316992 | orchestrator | TASK [loadbalancer : 
Copying checks for services which are enabled] ************ 2026-02-08 05:24:10.317013 | orchestrator | Sunday 08 February 2026 05:24:07 +0000 (0:00:00.631) 0:00:22.445 ******* 2026-02-08 05:24:10.317033 | orchestrator | ok: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-02-08 05:24:10.317069 | orchestrator | ok: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-02-08 05:24:10.317082 | orchestrator | ok: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-02-08 05:24:10.317094 | orchestrator | ok: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-08 05:24:10.317105 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-08 05:24:10.317144 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy-ssh:9.6.20251208', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__11410df5967a425f00a675e5ebc1c4772d4632e1', 
'__omit_place_holder__11410df5967a425f00a675e5ebc1c4772d4632e1'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-02-08 05:24:10.317158 | orchestrator | ok: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-08 05:24:10.317181 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-08 05:24:10.317213 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy-ssh:9.6.20251208', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__11410df5967a425f00a675e5ebc1c4772d4632e1', 
'__omit_place_holder__11410df5967a425f00a675e5ebc1c4772d4632e1'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-02-08 05:24:15.774105 | orchestrator | ok: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-08 05:24:15.774222 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-08 05:24:15.774242 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy-ssh:9.6.20251208', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__11410df5967a425f00a675e5ebc1c4772d4632e1', 
'__omit_place_holder__11410df5967a425f00a675e5ebc1c4772d4632e1'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-02-08 05:24:15.774286 | orchestrator | 2026-02-08 05:24:15.774303 | orchestrator | TASK [loadbalancer : Copying over config.json files for services] ************** 2026-02-08 05:24:15.774318 | orchestrator | Sunday 08 February 2026 05:24:10 +0000 (0:00:03.150) 0:00:25.596 ******* 2026-02-08 05:24:15.774332 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-02-08 05:24:15.774363 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-02-08 05:24:15.774378 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 
'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-02-08 05:24:15.774412 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-08 05:24:15.774428 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-08 05:24:15.774452 
| orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-08 05:24:15.774468 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-08 05:24:15.774485 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-08 05:24:15.774504 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-08 05:24:15.774518 | orchestrator | 2026-02-08 05:24:15.774532 | orchestrator | TASK [loadbalancer : Copying over haproxy.cfg] ********************************* 2026-02-08 05:24:15.774545 | orchestrator | Sunday 08 February 2026 05:24:14 +0000 (0:00:03.800) 0:00:29.396 ******* 2026-02-08 05:24:15.774558 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2026-02-08 05:24:15.774573 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2026-02-08 05:24:15.774585 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2026-02-08 05:24:15.774598 | orchestrator | 2026-02-08 05:24:15.774612 | orchestrator | TASK [loadbalancer : Copying over proxysql config] ***************************** 2026-02-08 05:24:15.774635 | orchestrator | Sunday 08 February 2026 05:24:15 +0000 (0:00:01.660) 0:00:31.057 ******* 2026-02-08 05:24:32.534316 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2026-02-08 05:24:32.534426 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2026-02-08 05:24:32.534442 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2026-02-08 05:24:32.534452 | orchestrator | 2026-02-08 05:24:32.534465 | orchestrator | TASK [loadbalancer : Copying over haproxy single external frontend config] ***** 2026-02-08 05:24:32.534475 | 
orchestrator | Sunday 08 February 2026 05:24:18 +0000 (0:00:03.188) 0:00:34.246 *******
2026-02-08 05:24:32.534507 | orchestrator | skipping: [testbed-node-0]
2026-02-08 05:24:32.534519 | orchestrator | skipping: [testbed-node-1]
2026-02-08 05:24:32.534529 | orchestrator | skipping: [testbed-node-2]
2026-02-08 05:24:32.534538 | orchestrator |
2026-02-08 05:24:32.534549 | orchestrator | TASK [loadbalancer : Copying over custom haproxy services configuration] *******
2026-02-08 05:24:32.534559 | orchestrator | Sunday 08 February 2026 05:24:20 +0000 (0:00:01.135) 0:00:35.382 *******
2026-02-08 05:24:32.534569 | orchestrator | ok: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg)
2026-02-08 05:24:32.534579 | orchestrator | ok: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg)
2026-02-08 05:24:32.534589 | orchestrator | ok: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg)
2026-02-08 05:24:32.534599 | orchestrator |
2026-02-08 05:24:32.534609 | orchestrator | TASK [loadbalancer : Copying over keepalived.conf] *****************************
2026-02-08 05:24:32.534619 | orchestrator | Sunday 08 February 2026 05:24:22 +0000 (0:00:01.932) 0:00:37.315 *******
2026-02-08 05:24:32.534629 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2)
2026-02-08 05:24:32.534639 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2)
2026-02-08 05:24:32.534648 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2)
2026-02-08 05:24:32.534658 | orchestrator |
2026-02-08 05:24:32.534668 | orchestrator | TASK [loadbalancer : include_tasks] ********************************************
2026-02-08 05:24:32.534677 | orchestrator | Sunday 08 February 2026 05:24:23 +0000 (0:00:01.668) 0:00:38.983 *******
2026-02-08 05:24:32.534687 | orchestrator | included: /ansible/roles/loadbalancer/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-08 05:24:32.534697 | orchestrator |
2026-02-08 05:24:32.534736 | orchestrator | TASK [loadbalancer : Copying over haproxy.pem] *********************************
2026-02-08 05:24:32.534754 | orchestrator | Sunday 08 February 2026 05:24:24 +0000 (0:00:01.223) 0:00:40.207 *******
2026-02-08 05:24:32.534771 | orchestrator | ok: [testbed-node-0] => (item=haproxy.pem)
2026-02-08 05:24:32.534787 | orchestrator | ok: [testbed-node-1] => (item=haproxy.pem)
2026-02-08 05:24:32.534802 | orchestrator | ok: [testbed-node-2] => (item=haproxy.pem)
2026-02-08 05:24:32.534818 | orchestrator |
2026-02-08 05:24:32.534834 | orchestrator | TASK [loadbalancer : Copying over haproxy-internal.pem] ************************
2026-02-08 05:24:32.534852 | orchestrator | Sunday 08 February 2026 05:24:26 +0000 (0:00:01.602) 0:00:41.809 *******
2026-02-08 05:24:32.534869 | orchestrator | ok: [testbed-node-1] => (item=haproxy-internal.pem)
2026-02-08 05:24:32.534886 | orchestrator | ok: [testbed-node-0] => (item=haproxy-internal.pem)
2026-02-08 05:24:32.534903 | orchestrator | ok: [testbed-node-2] => (item=haproxy-internal.pem)
2026-02-08 05:24:32.534919 | orchestrator |
2026-02-08 05:24:32.534934 | orchestrator | TASK [loadbalancer : Copying over proxysql-cert.pem] ***************************
2026-02-08 05:24:32.534951 | orchestrator | Sunday 08 February 2026 05:24:28 +0000 (0:00:01.653) 0:00:43.463 *******
2026-02-08 05:24:32.534967 | orchestrator | skipping: [testbed-node-0]
2026-02-08 05:24:32.534984 | orchestrator | skipping: [testbed-node-1]
2026-02-08 05:24:32.535002 | orchestrator | skipping: [testbed-node-2]
2026-02-08 05:24:32.535019 | orchestrator |
2026-02-08 05:24:32.535036 | orchestrator | TASK [loadbalancer : Copying over proxysql-key.pem] ****************************
2026-02-08 05:24:32.535049 | orchestrator | Sunday 08 February 2026 05:24:28 +0000 (0:00:00.317) 0:00:43.781 *******
2026-02-08 05:24:32.535060 | orchestrator | skipping: [testbed-node-0]
2026-02-08 05:24:32.535087 | orchestrator | skipping: [testbed-node-1]
2026-02-08 05:24:32.535098 | orchestrator | skipping: [testbed-node-2]
2026-02-08 05:24:32.535110 | orchestrator |
2026-02-08 05:24:32.535121 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ********
2026-02-08 05:24:32.535143 | orchestrator | Sunday 08 February 2026 05:24:29 +0000 (0:00:00.916) 0:00:44.698 *******
2026-02-08 05:24:32.535159 | orchestrator | ok: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-02-08 05:24:32.535194 | orchestrator | ok: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-02-08 05:24:32.535209 | orchestrator | ok: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2026-02-08 05:24:32.535221 | orchestrator | ok: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-02-08 05:24:32.535231 | orchestrator | ok: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-02-08 05:24:32.535246 | orchestrator | ok: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-02-08 05:24:32.535263 | orchestrator | ok: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-02-08 05:24:32.535280 | orchestrator | ok: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-02-08 05:24:34.531098 | orchestrator | ok: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-02-08 05:24:34.531193 | orchestrator |
2026-02-08 05:24:34.531207 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] ***
2026-02-08 05:24:34.531218 | orchestrator | Sunday 08 February 2026 05:24:32 +0000 (0:00:03.108) 0:00:47.806 *******
2026-02-08 05:24:34.531229 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-02-08 05:24:34.531240 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-02-08 05:24:34.531257 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-02-08 05:24:34.531273 | orchestrator | skipping: [testbed-node-0]
2026-02-08 05:24:34.531310 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-02-08 05:24:34.531351 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-02-08 05:24:34.531389 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-02-08 05:24:34.531405 | orchestrator | skipping: [testbed-node-1]
2026-02-08 05:24:34.531421 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2026-02-08 05:24:34.531436 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-02-08 05:24:34.531450 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-02-08 05:24:34.531465 | orchestrator | skipping: [testbed-node-2]
2026-02-08 05:24:34.531479 | orchestrator |
2026-02-08 05:24:34.531496 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] *****
2026-02-08 05:24:34.531515 | orchestrator | Sunday 08 February 2026 05:24:33 +0000 (0:00:00.637) 0:00:48.444 *******
2026-02-08 05:24:34.531530 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-02-08 05:24:34.531540 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-02-08 05:24:34.531557 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-02-08 05:24:41.932917 | orchestrator | skipping: [testbed-node-0]
2026-02-08 05:24:41.933030 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-02-08 05:24:41.933052 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-02-08 05:24:41.933066 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-02-08 05:24:41.933078 | orchestrator | skipping: [testbed-node-1]
2026-02-08 05:24:41.933116 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2026-02-08 05:24:41.933142 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-02-08 05:24:41.933155 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-02-08 05:24:41.933166 | orchestrator | skipping: [testbed-node-2]
2026-02-08 05:24:41.933178 | orchestrator |
2026-02-08 05:24:41.933190 | orchestrator | TASK [loadbalancer : Copying over haproxy start script] ************************
2026-02-08 05:24:41.933203 | orchestrator | Sunday 08 February 2026 05:24:34 +0000 (0:00:01.369) 0:00:49.813 *******
2026-02-08 05:24:41.933214 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2)
2026-02-08 05:24:41.933244 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2)
2026-02-08 05:24:41.933258 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2)
2026-02-08 05:24:41.933271 | orchestrator |
2026-02-08 05:24:41.933285 | orchestrator | TASK [loadbalancer : Copying over proxysql start script] ***********************
2026-02-08 05:24:41.933299 | orchestrator | Sunday 08 February 2026 05:24:35 +0000 (0:00:01.411) 0:00:51.286 *******
2026-02-08 05:24:41.933312 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2)
2026-02-08 05:24:41.933325 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2)
2026-02-08 05:24:41.933337 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2)
2026-02-08 05:24:41.933350 | orchestrator |
2026-02-08 05:24:41.933363 | orchestrator | TASK [loadbalancer : Copying files for haproxy-ssh] ****************************
2026-02-08 05:24:41.933377 | orchestrator | Sunday 08 February 2026 05:24:37 +0000 (0:00:01.517) 0:00:52.697 *******
2026-02-08 05:24:41.933389 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})
2026-02-08 05:24:41.933403 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})
2026-02-08 05:24:41.933416 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-02-08 05:24:41.933429 | orchestrator | skipping: [testbed-node-0]
2026-02-08 05:24:41.933442 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})
2026-02-08 05:24:41.933464 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-02-08 05:24:41.933476 | orchestrator | skipping: [testbed-node-1]
2026-02-08 05:24:41.933491 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-02-08 05:24:41.933503 | orchestrator | skipping: [testbed-node-2]
2026-02-08 05:24:41.933515 | orchestrator |
2026-02-08 05:24:41.933528 | orchestrator | TASK [service-check-containers : loadbalancer | Check containers] **************
2026-02-08 05:24:41.933540 | orchestrator | Sunday 08 February 2026 05:24:38 +0000 (0:00:01.517) 0:00:54.215 *******
2026-02-08 05:24:41.933555 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-02-08 05:24:41.933575 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-02-08 05:24:41.933589 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2026-02-08 05:24:41.933613 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-02-08 05:24:43.571675 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-02-08 05:24:43.571880 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-02-08 05:24:43.571905 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-02-08 05:24:43.571921 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-02-08 05:24:43.571936 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-02-08 05:24:43.571951 | orchestrator |
2026-02-08 05:24:43.571968 | orchestrator | TASK [service-check-containers : loadbalancer | Notify handlers to restart containers] ***
2026-02-08 05:24:43.571998 | orchestrator | Sunday 08 February 2026 05:24:41 +0000 (0:00:03.002) 0:00:57.217 *******
2026-02-08 05:24:43.572024 | orchestrator | changed: [testbed-node-0] => {
2026-02-08 05:24:43.572038 | orchestrator |  "msg": "Notifying handlers"
2026-02-08 05:24:43.572051 | orchestrator | }
2026-02-08 05:24:43.572064 | orchestrator | changed: [testbed-node-1] => {
2026-02-08 05:24:43.572077 | orchestrator |  "msg": "Notifying handlers"
2026-02-08 05:24:43.572090 | orchestrator | }
2026-02-08 05:24:43.572103 | orchestrator | changed: [testbed-node-2] => {
2026-02-08 05:24:43.572115 | orchestrator |  "msg": "Notifying handlers"
2026-02-08 05:24:43.572129 | orchestrator | }
2026-02-08 05:24:43.572142 | orchestrator |
2026-02-08 05:24:43.572156 | orchestrator | TASK [service-check-containers : Include tasks] ********************************
2026-02-08 05:24:43.572170 | orchestrator | Sunday 08 February 2026 05:24:42 +0000 (0:00:00.365) 0:00:57.583 *******
2026-02-08 05:24:43.572204 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-02-08 05:24:43.572228 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-02-08 05:24:43.572260 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-02-08 05:24:43.572275 | orchestrator | skipping: [testbed-node-0]
2026-02-08 05:24:43.572290 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-02-08 05:24:43.572309 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-02-08 05:24:43.572323 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-02-08 05:24:43.572338 | orchestrator | skipping: [testbed-node-1]
2026-02-08 05:24:43.572353 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2026-02-08 05:24:43.572387 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-02-08 05:24:48.410491 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-02-08 05:24:48.410641 | orchestrator | skipping: [testbed-node-2]
2026-02-08 05:24:48.410664 | orchestrator |
2026-02-08 05:24:48.410678 | orchestrator | TASK [include_role : aodh] *****************************************************
2026-02-08 05:24:48.410692 | orchestrator | Sunday 08 February 2026 05:24:43 +0000 (0:00:01.265) 0:00:58.849 *******
2026-02-08 05:24:48.410778 | orchestrator | included: aodh for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-08 05:24:48.410794 | orchestrator |
2026-02-08 05:24:48.410806 | orchestrator | TASK [haproxy-config : Copying over aodh haproxy config] ***********************
2026-02-08 05:24:48.410818 | orchestrator | Sunday 08
February 2026 05:24:44 +0000 (0:00:01.208) 0:01:00.057 ******* 2026-02-08 05:24:48.410833 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-api:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-08 05:24:48.410865 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-evaluator:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-02-08 05:24:48.410878 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/aodh-listener:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-02-08 05:24:48.410917 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-notifier:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-02-08 05:24:48.410953 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-api:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-08 05:24:48.410968 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-evaluator:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-02-08 05:24:48.410981 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-listener:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-02-08 05:24:48.411000 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-notifier:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-02-08 05:24:48.411015 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-api:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-08 05:24:48.411046 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-evaluator:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-02-08 05:24:49.150370 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-listener:20.0.0.20251208', 
'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-02-08 05:24:49.150526 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-notifier:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-02-08 05:24:49.150553 | orchestrator | 2026-02-08 05:24:49.150573 | orchestrator | TASK [haproxy-config : Add configuration for aodh when using single external frontend] *** 2026-02-08 05:24:49.150592 | orchestrator | Sunday 08 February 2026 05:24:48 +0000 (0:00:03.738) 0:01:03.795 ******* 2026-02-08 05:24:49.150635 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-api:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 
'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}})  2026-02-08 05:24:49.150662 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-evaluator:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-02-08 05:24:49.150767 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-listener:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-02-08 05:24:49.150819 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-notifier:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-02-08 05:24:49.150839 | orchestrator | skipping: [testbed-node-0] 2026-02-08 05:24:49.150860 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-api:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}})  2026-02-08 05:24:49.150882 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-evaluator:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
aodh-evaluator 3306'], 'timeout': '30'}}})  2026-02-08 05:24:49.150910 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-listener:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-02-08 05:24:49.150932 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-notifier:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-02-08 05:24:49.150964 | orchestrator | skipping: [testbed-node-1] 2026-02-08 05:24:49.150984 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-api:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}})  2026-02-08 05:24:49.151020 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-evaluator:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-02-08 05:24:58.855597 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-listener:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-02-08 05:24:58.855755 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/aodh-notifier:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-02-08 05:24:58.855773 | orchestrator | skipping: [testbed-node-2] 2026-02-08 05:24:58.855784 | orchestrator | 2026-02-08 05:24:58.855827 | orchestrator | TASK [haproxy-config : Configuring firewall for aodh] ************************** 2026-02-08 05:24:58.855838 | orchestrator | Sunday 08 February 2026 05:24:49 +0000 (0:00:00.730) 0:01:04.526 ******* 2026-02-08 05:24:58.855861 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}})  2026-02-08 05:24:58.855892 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}})  2026-02-08 05:24:58.855903 | orchestrator | skipping: [testbed-node-0] 2026-02-08 05:24:58.855911 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}})  2026-02-08 05:24:58.855919 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}})  2026-02-08 05:24:58.855927 | 
orchestrator | skipping: [testbed-node-1] 2026-02-08 05:24:58.855936 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}})  2026-02-08 05:24:58.855945 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}})  2026-02-08 05:24:58.855953 | orchestrator | skipping: [testbed-node-2] 2026-02-08 05:24:58.855961 | orchestrator | 2026-02-08 05:24:58.855969 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL users config] *************** 2026-02-08 05:24:58.855977 | orchestrator | Sunday 08 February 2026 05:24:50 +0000 (0:00:01.527) 0:01:06.054 ******* 2026-02-08 05:24:58.855985 | orchestrator | ok: [testbed-node-1] 2026-02-08 05:24:58.855994 | orchestrator | ok: [testbed-node-0] 2026-02-08 05:24:58.856002 | orchestrator | ok: [testbed-node-2] 2026-02-08 05:24:58.856010 | orchestrator | 2026-02-08 05:24:58.856018 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL rules config] *************** 2026-02-08 05:24:58.856026 | orchestrator | Sunday 08 February 2026 05:24:52 +0000 (0:00:01.285) 0:01:07.340 ******* 2026-02-08 05:24:58.856034 | orchestrator | ok: [testbed-node-0] 2026-02-08 05:24:58.856041 | orchestrator | ok: [testbed-node-1] 2026-02-08 05:24:58.856049 | orchestrator | ok: [testbed-node-2] 2026-02-08 05:24:58.856057 | orchestrator | 2026-02-08 05:24:58.856065 | orchestrator | TASK [include_role : barbican] ************************************************* 2026-02-08 05:24:58.856073 | orchestrator | Sunday 08 February 2026 05:24:54 +0000 (0:00:02.075) 0:01:09.415 ******* 2026-02-08 05:24:58.856081 | orchestrator | included: barbican for testbed-node-0, 
testbed-node-1, testbed-node-2 2026-02-08 05:24:58.856089 | orchestrator | 2026-02-08 05:24:58.856096 | orchestrator | TASK [haproxy-config : Copying over barbican haproxy config] ******************* 2026-02-08 05:24:58.856104 | orchestrator | Sunday 08 February 2026 05:24:55 +0000 (0:00:00.901) 0:01:10.316 ******* 2026-02-08 05:24:58.856131 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-api:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-08 05:24:58.856163 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-08 05:24:58.856174 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-worker:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-08 05:24:58.856185 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-api:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-08 05:24:58.856196 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 
'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-08 05:24:58.856214 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-worker:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-08 05:24:59.717789 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-api:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 
'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-08 05:24:59.717932 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-08 05:24:59.717951 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-worker:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-08 05:24:59.717964 | orchestrator | 2026-02-08 05:24:59.717977 | orchestrator | TASK [haproxy-config : Add configuration for barbican when using single external frontend] *** 2026-02-08 05:24:59.717990 | orchestrator | Sunday 08 February 2026 05:24:58 
+0000 (0:00:03.822) 0:01:14.139 ******* 2026-02-08 05:24:59.718003 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-api:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-02-08 05:24:59.718115 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-08 05:24:59.718142 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 
'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-worker:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-08 05:24:59.718154 | orchestrator | skipping: [testbed-node-0] 2026-02-08 05:24:59.718175 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-api:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-02-08 05:24:59.718188 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-08 05:24:59.718199 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-worker:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-08 05:24:59.718211 | orchestrator | skipping: [testbed-node-1] 2026-02-08 05:24:59.718231 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-api:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 
'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-02-08 05:25:09.647373 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-08 05:25:09.647506 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-worker:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-08 05:25:09.647524 | orchestrator | skipping: [testbed-node-2] 2026-02-08 05:25:09.647537 | orchestrator | 2026-02-08 05:25:09.647549 | orchestrator | TASK [haproxy-config : Configuring firewall for barbican] ********************** 2026-02-08 05:25:09.647598 | orchestrator | Sunday 08 
February 2026 05:24:59 +0000 (0:00:00.859) 0:01:14.999 ******* 2026-02-08 05:25:09.647611 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-02-08 05:25:09.647623 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-02-08 05:25:09.647635 | orchestrator | skipping: [testbed-node-0] 2026-02-08 05:25:09.647645 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-02-08 05:25:09.647656 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-02-08 05:25:09.647666 | orchestrator | skipping: [testbed-node-1] 2026-02-08 05:25:09.647676 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-02-08 05:25:09.647687 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-02-08 
05:25:09.647757 | orchestrator | skipping: [testbed-node-2] 2026-02-08 05:25:09.647770 | orchestrator | 2026-02-08 05:25:09.647780 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL users config] *********** 2026-02-08 05:25:09.647790 | orchestrator | Sunday 08 February 2026 05:25:00 +0000 (0:00:01.261) 0:01:16.261 ******* 2026-02-08 05:25:09.647800 | orchestrator | ok: [testbed-node-0] 2026-02-08 05:25:09.647810 | orchestrator | ok: [testbed-node-1] 2026-02-08 05:25:09.647819 | orchestrator | ok: [testbed-node-2] 2026-02-08 05:25:09.647828 | orchestrator | 2026-02-08 05:25:09.647838 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL rules config] *********** 2026-02-08 05:25:09.647848 | orchestrator | Sunday 08 February 2026 05:25:02 +0000 (0:00:01.260) 0:01:17.521 ******* 2026-02-08 05:25:09.647858 | orchestrator | ok: [testbed-node-0] 2026-02-08 05:25:09.647867 | orchestrator | ok: [testbed-node-1] 2026-02-08 05:25:09.647877 | orchestrator | ok: [testbed-node-2] 2026-02-08 05:25:09.647886 | orchestrator | 2026-02-08 05:25:09.647895 | orchestrator | TASK [include_role : blazar] *************************************************** 2026-02-08 05:25:09.647905 | orchestrator | Sunday 08 February 2026 05:25:04 +0000 (0:00:02.113) 0:01:19.635 ******* 2026-02-08 05:25:09.647914 | orchestrator | skipping: [testbed-node-0] 2026-02-08 05:25:09.647924 | orchestrator | skipping: [testbed-node-1] 2026-02-08 05:25:09.647934 | orchestrator | skipping: [testbed-node-2] 2026-02-08 05:25:09.647944 | orchestrator | 2026-02-08 05:25:09.647953 | orchestrator | TASK [include_role : ceph-rgw] ************************************************* 2026-02-08 05:25:09.647977 | orchestrator | Sunday 08 February 2026 05:25:04 +0000 (0:00:00.329) 0:01:19.965 ******* 2026-02-08 05:25:09.647988 | orchestrator | included: ceph-rgw for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-08 05:25:09.647997 | orchestrator | 2026-02-08 05:25:09.648007 | 
orchestrator | TASK [haproxy-config : Copying over ceph-rgw haproxy config] ******************* 2026-02-08 05:25:09.648017 | orchestrator | Sunday 08 February 2026 05:25:05 +0000 (0:00:00.977) 0:01:20.942 ******* 2026-02-08 05:25:09.648029 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2026-02-08 05:25:09.648041 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2026-02-08 05:25:09.648052 | orchestrator | ok: 
[testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2026-02-08 05:25:09.648070 | orchestrator | 2026-02-08 05:25:09.648080 | orchestrator | TASK [haproxy-config : Add configuration for ceph-rgw when using single external frontend] *** 2026-02-08 05:25:09.648090 | orchestrator | Sunday 08 February 2026 05:25:08 +0000 (0:00:02.553) 0:01:23.495 ******* 2026-02-08 05:25:09.648100 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2026-02-08 
05:25:09.648111 | orchestrator | skipping: [testbed-node-0] 2026-02-08 05:25:09.648135 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2026-02-08 05:25:18.958565 | orchestrator | skipping: [testbed-node-1] 2026-02-08 05:25:18.958811 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2026-02-08 05:25:18.958848 | orchestrator | skipping: [testbed-node-2] 2026-02-08 05:25:18.958872 | orchestrator | 2026-02-08 
05:25:18.958893 | orchestrator | TASK [haproxy-config : Configuring firewall for ceph-rgw] ********************** 2026-02-08 05:25:18.958914 | orchestrator | Sunday 08 February 2026 05:25:09 +0000 (0:00:01.436) 0:01:24.932 ******* 2026-02-08 05:25:18.958936 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2026-02-08 05:25:18.958990 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2026-02-08 05:25:18.959013 | orchestrator | skipping: [testbed-node-0] 2026-02-08 05:25:18.959035 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2026-02-08 05:25:18.959053 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 
192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2026-02-08 05:25:18.959074 | orchestrator | skipping: [testbed-node-1] 2026-02-08 05:25:18.959092 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2026-02-08 05:25:18.959112 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2026-02-08 05:25:18.959133 | orchestrator | skipping: [testbed-node-2] 2026-02-08 05:25:18.959150 | orchestrator | 2026-02-08 05:25:18.959168 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL users config] *********** 2026-02-08 05:25:18.959186 | orchestrator | Sunday 08 February 2026 05:25:11 +0000 (0:00:01.824) 0:01:26.756 ******* 2026-02-08 05:25:18.959204 | orchestrator | skipping: [testbed-node-0] 2026-02-08 05:25:18.959221 | orchestrator | skipping: [testbed-node-1] 2026-02-08 05:25:18.959238 | orchestrator | skipping: [testbed-node-2] 2026-02-08 05:25:18.959256 | orchestrator | 2026-02-08 05:25:18.959277 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL rules config] *********** 2026-02-08 05:25:18.959321 | orchestrator | Sunday 08 February 2026 05:25:12 +0000 (0:00:00.542) 0:01:27.299 ******* 2026-02-08 05:25:18.959340 | 
orchestrator | skipping: [testbed-node-0] 2026-02-08 05:25:18.959360 | orchestrator | skipping: [testbed-node-1] 2026-02-08 05:25:18.959378 | orchestrator | skipping: [testbed-node-2] 2026-02-08 05:25:18.959394 | orchestrator | 2026-02-08 05:25:18.959420 | orchestrator | TASK [include_role : cinder] *************************************************** 2026-02-08 05:25:18.959440 | orchestrator | Sunday 08 February 2026 05:25:13 +0000 (0:00:01.381) 0:01:28.680 ******* 2026-02-08 05:25:18.959456 | orchestrator | included: cinder for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-08 05:25:18.959473 | orchestrator | 2026-02-08 05:25:18.959489 | orchestrator | TASK [haproxy-config : Copying over cinder haproxy config] ********************* 2026-02-08 05:25:18.959505 | orchestrator | Sunday 08 February 2026 05:25:14 +0000 (0:00:01.056) 0:01:29.736 ******* 2026-02-08 05:25:18.959541 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20251208', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-08 05:25:18.959562 | orchestrator | 
skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20251208', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-02-08 05:25:18.959582 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20251208', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-02-08 05:25:18.959600 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20251208', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-02-08 05:25:18.959635 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20251208', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-08 05:25:19.715286 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20251208', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-02-08 05:25:19.715407 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20251208', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-02-08 05:25:19.715434 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20251208', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-02-08 05:25:19.715452 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20251208', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-08 05:25:19.715484 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20251208', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-02-08 05:25:19.715534 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20251208', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-02-08 05:25:19.715547 
| orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20251208', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-02-08 05:25:19.715558 | orchestrator | 2026-02-08 05:25:19.715570 | orchestrator | TASK [haproxy-config : Add configuration for cinder when using single external frontend] *** 2026-02-08 05:25:19.715581 | orchestrator | Sunday 08 February 2026 05:25:19 +0000 (0:00:04.651) 0:01:34.388 ******* 2026-02-08 05:25:19.715593 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20251208', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-02-08 05:25:19.715604 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20251208', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-02-08 05:25:19.715619 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20251208', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-02-08 05:25:19.715645 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20251208', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-02-08 05:25:21.105539 | orchestrator | skipping: [testbed-node-0]
2026-02-08 05:25:21.105636 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20251208', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-02-08 05:25:21.105652 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20251208', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-02-08 05:25:21.105663 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20251208', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-02-08 05:25:21.105672 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20251208', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-02-08 05:25:21.105787 | orchestrator | skipping: [testbed-node-1]
2026-02-08 05:25:21.105832 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20251208', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-02-08 05:25:21.105843 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20251208', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-02-08 05:25:21.105852 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20251208', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-02-08 05:25:21.105860 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20251208', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-02-08 05:25:21.105869 | orchestrator | skipping: [testbed-node-2]
2026-02-08 05:25:21.105877 | orchestrator |
2026-02-08 05:25:21.105887 | orchestrator | TASK [haproxy-config : Configuring firewall for cinder] ************************
2026-02-08 05:25:21.105896 | orchestrator | Sunday 08 February 2026 05:25:19 +0000 (0:00:00.723) 0:01:35.111 *******
2026-02-08 05:25:21.105905 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})
2026-02-08 05:25:21.105927 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})
2026-02-08 05:25:21.105937 | orchestrator | skipping: [testbed-node-0]
2026-02-08 05:25:21.105945 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})
2026-02-08 05:25:21.105957 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})
2026-02-08 05:25:21.105966 | orchestrator | skipping: [testbed-node-1]
2026-02-08 05:25:21.105974 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})
2026-02-08 05:25:21.105992 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})
2026-02-08 05:25:30.081595 | orchestrator | skipping: [testbed-node-2]
2026-02-08 05:25:30.081757 | orchestrator |
2026-02-08 05:25:30.081781 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL users config] *************
2026-02-08 05:25:30.081794 | orchestrator | Sunday 08 February 2026 05:25:21 +0000 (0:00:01.276) 0:01:36.388 *******
2026-02-08 05:25:30.081805 | orchestrator | ok: [testbed-node-1]
2026-02-08 05:25:30.081817 | orchestrator | ok: [testbed-node-0]
2026-02-08 05:25:30.081828 | orchestrator | ok: [testbed-node-2]
2026-02-08 05:25:30.081839 | orchestrator |
2026-02-08 05:25:30.081851 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL rules config] *************
2026-02-08 05:25:30.081862 | orchestrator | Sunday 08 February 2026 05:25:22 +0000 (0:00:01.262) 0:01:37.651 *******
2026-02-08 05:25:30.081873 | orchestrator | ok: [testbed-node-0]
2026-02-08 05:25:30.081884 | orchestrator | ok: [testbed-node-1]
2026-02-08 05:25:30.081895 | orchestrator | ok: [testbed-node-2]
2026-02-08 05:25:30.081906 | orchestrator |
2026-02-08 05:25:30.081917 | orchestrator | TASK [include_role : cloudkitty] ***********************************************
2026-02-08 05:25:30.081928 | orchestrator | Sunday 08 February 2026 05:25:24 +0000 (0:00:02.090) 0:01:39.741 *******
2026-02-08 05:25:30.081939 | orchestrator | skipping: [testbed-node-0]
2026-02-08 05:25:30.081950 | orchestrator | skipping: [testbed-node-1]
2026-02-08 05:25:30.081961 | orchestrator | skipping: [testbed-node-2]
2026-02-08 05:25:30.081972 | orchestrator |
2026-02-08 05:25:30.081983 | orchestrator | TASK [include_role : cyborg] ***************************************************
2026-02-08 05:25:30.081994 | orchestrator | Sunday 08 February 2026 05:25:25 +0000 (0:00:00.561) 0:01:40.303 *******
2026-02-08 05:25:30.082006 | orchestrator | skipping: [testbed-node-0]
2026-02-08 05:25:30.082083 | orchestrator | skipping: [testbed-node-1]
2026-02-08 05:25:30.082105 | orchestrator | skipping: [testbed-node-2]
2026-02-08 05:25:30.082123 | orchestrator |
2026-02-08 05:25:30.082141 | orchestrator | TASK [include_role : designate] ************************************************
2026-02-08 05:25:30.082158 | orchestrator | Sunday 08 February 2026 05:25:25 +0000 (0:00:00.342) 0:01:40.645 *******
2026-02-08 05:25:30.082176 | orchestrator | included: designate for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-08 05:25:30.082193 | orchestrator |
2026-02-08 05:25:30.082212 | orchestrator | TASK [haproxy-config : Copying over designate haproxy config] ******************
2026-02-08 05:25:30.082230 | orchestrator | Sunday 08 February 2026 05:25:26 +0000 (0:00:00.805) 0:01:41.451 *******
2026-02-08 05:25:30.082291 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})
2026-02-08 05:25:30.082317 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-02-08 05:25:30.082357 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-02-08 05:25:30.082408 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-02-08 05:25:30.082430 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-02-08 05:25:30.082449 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-02-08 05:25:30.082496 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-sink:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})
2026-02-08 05:25:30.082516 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})
2026-02-08 05:25:30.082545 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-02-08 05:25:30.082579 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})
2026-02-08 05:25:31.261444 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-02-08 05:25:31.261590 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-02-08 05:25:31.261609 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-02-08 05:25:31.261639 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-02-08 05:25:31.261652 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-02-08 05:25:31.261664 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-02-08 05:25:31.261694 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-02-08 05:25:31.261789 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-02-08 05:25:31.261804 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-sink:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})
2026-02-08 05:25:31.261816 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-02-08 05:25:31.261834 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-sink:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})
2026-02-08 05:25:31.261846 | orchestrator |
2026-02-08 05:25:31.261861 | orchestrator | TASK [haproxy-config : Add configuration for designate when using single external frontend] ***
2026-02-08 05:25:31.261874 | orchestrator | Sunday 08 February 2026 05:25:30 +0000 (0:00:04.303) 0:01:45.754 *******
2026-02-08 05:25:31.261887 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})
2026-02-08 05:25:31.261927 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-02-08 05:25:31.443029 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-02-08 05:25:31.443130 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-02-08 05:25:31.443164 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-02-08 05:25:31.443178 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-02-08 05:25:31.443190 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-sink:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})
2026-02-08 05:25:31.443202 | orchestrator | skipping: [testbed-node-0]
2026-02-08 05:25:31.443257 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})
2026-02-08 05:25:31.443273 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-02-08 05:25:31.443285 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-02-08 05:25:31.443297 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-02-08 05:25:31.443309 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})
2026-02-08 05:25:31.443328 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-02-08 05:25:31.444100 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro',
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-08 05:25:42.276141 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-08 05:25:42.276288 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-02-08 05:25:42.276318 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-08 05:25:42.276341 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-sink:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-02-08 05:25:42.276363 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-08 05:25:42.276421 | orchestrator | skipping: [testbed-node-1] 2026-02-08 05:25:42.276464 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20251208', 'volumes': 
['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-02-08 05:25:42.276513 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-sink:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-02-08 05:25:42.276537 | orchestrator | skipping: [testbed-node-2] 2026-02-08 05:25:42.276558 | orchestrator | 2026-02-08 05:25:42.276580 | orchestrator | TASK [haproxy-config : Configuring firewall for designate] ********************* 2026-02-08 05:25:42.276601 | orchestrator | Sunday 08 February 2026 05:25:31 +0000 (0:00:00.976) 0:01:46.731 ******* 2026-02-08 05:25:42.276623 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}})  2026-02-08 05:25:42.276648 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}})  2026-02-08 05:25:42.276673 | orchestrator | skipping: 
[testbed-node-0] 2026-02-08 05:25:42.276697 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}})  2026-02-08 05:25:42.276755 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}})  2026-02-08 05:25:42.276780 | orchestrator | skipping: [testbed-node-1] 2026-02-08 05:25:42.276803 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}})  2026-02-08 05:25:42.276824 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}})  2026-02-08 05:25:42.276847 | orchestrator | skipping: [testbed-node-2] 2026-02-08 05:25:42.276885 | orchestrator | 2026-02-08 05:25:42.276908 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL users config] ********** 2026-02-08 05:25:42.276931 | orchestrator | Sunday 08 February 2026 05:25:32 +0000 (0:00:01.281) 0:01:48.013 ******* 2026-02-08 05:25:42.276953 | orchestrator | ok: [testbed-node-0] 2026-02-08 05:25:42.276977 | orchestrator | ok: [testbed-node-1] 2026-02-08 05:25:42.276996 | orchestrator | ok: [testbed-node-2] 2026-02-08 05:25:42.277015 | orchestrator | 2026-02-08 05:25:42.277033 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL rules config] ********** 2026-02-08 05:25:42.277052 | orchestrator | Sunday 08 February 2026 05:25:33 +0000 
(0:00:01.211) 0:01:49.224 ******* 2026-02-08 05:25:42.277070 | orchestrator | ok: [testbed-node-0] 2026-02-08 05:25:42.277087 | orchestrator | ok: [testbed-node-1] 2026-02-08 05:25:42.277106 | orchestrator | ok: [testbed-node-2] 2026-02-08 05:25:42.277123 | orchestrator | 2026-02-08 05:25:42.277141 | orchestrator | TASK [include_role : etcd] ***************************************************** 2026-02-08 05:25:42.277159 | orchestrator | Sunday 08 February 2026 05:25:36 +0000 (0:00:02.177) 0:01:51.402 ******* 2026-02-08 05:25:42.277177 | orchestrator | skipping: [testbed-node-0] 2026-02-08 05:25:42.277196 | orchestrator | skipping: [testbed-node-1] 2026-02-08 05:25:42.277214 | orchestrator | skipping: [testbed-node-2] 2026-02-08 05:25:42.277232 | orchestrator | 2026-02-08 05:25:42.277251 | orchestrator | TASK [include_role : glance] *************************************************** 2026-02-08 05:25:42.277271 | orchestrator | Sunday 08 February 2026 05:25:36 +0000 (0:00:00.362) 0:01:51.764 ******* 2026-02-08 05:25:42.277290 | orchestrator | included: glance for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-08 05:25:42.277308 | orchestrator | 2026-02-08 05:25:42.277340 | orchestrator | TASK [haproxy-config : Copying over glance haproxy config] ********************* 2026-02-08 05:25:42.277360 | orchestrator | Sunday 08 February 2026 05:25:37 +0000 (0:00:01.089) 0:01:52.854 ******* 2026-02-08 05:25:42.277404 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/glance-api:30.0.1.20251208', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-02-08 05:25:42.395849 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/glance-tls-proxy:30.0.1.20251208', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 
'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-02-08 05:25:42.395992 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/glance-api:30.0.1.20251208', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-02-08 05:25:42.396032 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/glance-tls-proxy:30.0.1.20251208', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 
192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-02-08 05:25:42.396059 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/glance-api:30.0.1.20251208', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': 
['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-02-08 05:25:42.396083 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/glance-tls-proxy:30.0.1.20251208', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file 
ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-02-08 05:25:45.879085 | orchestrator | 2026-02-08 05:25:45.879187 | orchestrator | TASK [haproxy-config : Add configuration for glance when using single external frontend] *** 2026-02-08 05:25:45.879205 | orchestrator | Sunday 08 February 2026 05:25:42 +0000 (0:00:04.829) 0:01:57.684 ******* 2026-02-08 05:25:45.879240 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/glance-api:30.0.1.20251208', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': 
{'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-02-08 05:25:45.879259 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/glance-tls-proxy:30.0.1.20251208', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 
'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-02-08 05:25:45.879298 | orchestrator | skipping: [testbed-node-0] 2026-02-08 05:25:45.879337 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/glance-api:30.0.1.20251208', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 
'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-02-08 05:25:45.879352 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/glance-tls-proxy:30.0.1.20251208', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server 
testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-02-08 05:25:45.879373 | orchestrator | skipping: [testbed-node-1] 2026-02-08 05:25:45.879400 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/glance-api:30.0.1.20251208', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check 
inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-02-08 05:25:58.328232 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/glance-tls-proxy:30.0.1.20251208', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 
'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-02-08 05:25:58.328425 | orchestrator | skipping: [testbed-node-2] 2026-02-08 05:25:58.329429 | orchestrator | 2026-02-08 05:25:58.329504 | orchestrator | TASK [haproxy-config : Configuring firewall for glance] ************************ 2026-02-08 05:25:58.329528 | orchestrator | Sunday 08 February 2026 05:25:45 +0000 (0:00:03.593) 0:02:01.277 ******* 2026-02-08 05:25:58.329540 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-02-08 05:25:58.329563 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 
192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-02-08 05:25:58.329573 | orchestrator | skipping: [testbed-node-0] 2026-02-08 05:25:58.329582 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-02-08 05:25:58.329624 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-02-08 05:25:58.329633 | orchestrator | skipping: [testbed-node-1] 2026-02-08 05:25:58.329641 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 
check inter 2000 rise 2 fall 5', '']}})  2026-02-08 05:25:58.329649 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-02-08 05:25:58.329673 | orchestrator | skipping: [testbed-node-2] 2026-02-08 05:25:58.329680 | orchestrator | 2026-02-08 05:25:58.329688 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL users config] ************* 2026-02-08 05:25:58.329696 | orchestrator | Sunday 08 February 2026 05:25:49 +0000 (0:00:03.950) 0:02:05.227 ******* 2026-02-08 05:25:58.329703 | orchestrator | ok: [testbed-node-1] 2026-02-08 05:25:58.329712 | orchestrator | ok: [testbed-node-0] 2026-02-08 05:25:58.329774 | orchestrator | ok: [testbed-node-2] 2026-02-08 05:25:58.329845 | orchestrator | 2026-02-08 05:25:58.329857 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL rules config] ************* 2026-02-08 05:25:58.329864 | orchestrator | Sunday 08 February 2026 05:25:51 +0000 (0:00:01.227) 0:02:06.455 ******* 2026-02-08 05:25:58.329872 | orchestrator | ok: [testbed-node-0] 2026-02-08 05:25:58.329879 | orchestrator | ok: [testbed-node-1] 2026-02-08 05:25:58.329887 | orchestrator | ok: [testbed-node-2] 2026-02-08 05:25:58.329894 | orchestrator | 2026-02-08 05:25:58.329901 | orchestrator | TASK [include_role : gnocchi] ************************************************** 2026-02-08 05:25:58.329909 | orchestrator | Sunday 08 February 2026 05:25:53 +0000 (0:00:02.113) 0:02:08.569 ******* 2026-02-08 05:25:58.329916 | orchestrator | 
skipping: [testbed-node-0] 2026-02-08 05:25:58.329982 | orchestrator | skipping: [testbed-node-1] 2026-02-08 05:25:58.329994 | orchestrator | skipping: [testbed-node-2] 2026-02-08 05:25:58.330001 | orchestrator | 2026-02-08 05:25:58.330010 | orchestrator | TASK [include_role : grafana] ************************************************** 2026-02-08 05:25:58.330064 | orchestrator | Sunday 08 February 2026 05:25:53 +0000 (0:00:00.603) 0:02:09.173 ******* 2026-02-08 05:25:58.330072 | orchestrator | included: grafana for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-08 05:25:58.330081 | orchestrator | 2026-02-08 05:25:58.330096 | orchestrator | TASK [haproxy-config : Copying over grafana haproxy config] ******************** 2026-02-08 05:25:58.330104 | orchestrator | Sunday 08 February 2026 05:25:54 +0000 (0:00:00.900) 0:02:10.073 ******* 2026-02-08 05:25:58.330113 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/grafana:12.3.0.20251208', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-08 05:25:58.330156 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/grafana:12.3.0.20251208', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-08 05:26:09.039669 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/grafana:12.3.0.20251208', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-08 05:26:09.039914 | orchestrator | 2026-02-08 05:26:09.040042 | orchestrator | TASK [haproxy-config : Add configuration for grafana when using single external frontend] *** 2026-02-08 05:26:09.040065 | orchestrator | Sunday 08 February 2026 05:25:58 +0000 (0:00:03.535) 0:02:13.609 ******* 2026-02-08 05:26:09.040084 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/grafana:12.3.0.20251208', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})  2026-02-08 05:26:09.040105 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/grafana:12.3.0.20251208', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})  2026-02-08 05:26:09.040124 | orchestrator | skipping: [testbed-node-0] 2026-02-08 05:26:09.040143 | orchestrator | skipping: [testbed-node-1] 2026-02-08 05:26:09.040165 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/grafana:12.3.0.20251208', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 
'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})  2026-02-08 05:26:09.040185 | orchestrator | skipping: [testbed-node-2] 2026-02-08 05:26:09.040201 | orchestrator | 2026-02-08 05:26:09.040219 | orchestrator | TASK [haproxy-config : Configuring firewall for grafana] *********************** 2026-02-08 05:26:09.040236 | orchestrator | Sunday 08 February 2026 05:25:59 +0000 (0:00:00.707) 0:02:14.317 ******* 2026-02-08 05:26:09.040274 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}})  2026-02-08 05:26:09.040297 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}})  2026-02-08 05:26:09.040333 | orchestrator | skipping: [testbed-node-0] 2026-02-08 05:26:09.040385 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}})  2026-02-08 05:26:09.040407 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}})  2026-02-08 05:26:09.040425 | orchestrator | skipping: [testbed-node-1] 2026-02-08 05:26:09.040444 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': 
'3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}})  2026-02-08 05:26:09.040465 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}})  2026-02-08 05:26:09.040484 | orchestrator | skipping: [testbed-node-2] 2026-02-08 05:26:09.040502 | orchestrator | 2026-02-08 05:26:09.040518 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL users config] ************ 2026-02-08 05:26:09.040535 | orchestrator | Sunday 08 February 2026 05:25:59 +0000 (0:00:00.792) 0:02:15.109 ******* 2026-02-08 05:26:09.040552 | orchestrator | ok: [testbed-node-0] 2026-02-08 05:26:09.040571 | orchestrator | ok: [testbed-node-1] 2026-02-08 05:26:09.040588 | orchestrator | ok: [testbed-node-2] 2026-02-08 05:26:09.040605 | orchestrator | 2026-02-08 05:26:09.040621 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL rules config] ************ 2026-02-08 05:26:09.040637 | orchestrator | Sunday 08 February 2026 05:26:01 +0000 (0:00:01.213) 0:02:16.322 ******* 2026-02-08 05:26:09.040654 | orchestrator | ok: [testbed-node-0] 2026-02-08 05:26:09.040669 | orchestrator | ok: [testbed-node-1] 2026-02-08 05:26:09.040685 | orchestrator | ok: [testbed-node-2] 2026-02-08 05:26:09.040701 | orchestrator | 2026-02-08 05:26:09.040755 | orchestrator | TASK [include_role : heat] ***************************************************** 2026-02-08 05:26:09.040776 | orchestrator | Sunday 08 February 2026 05:26:03 +0000 (0:00:02.578) 0:02:18.900 ******* 2026-02-08 05:26:09.040792 | orchestrator | skipping: [testbed-node-0] 2026-02-08 05:26:09.040809 | orchestrator | skipping: [testbed-node-1] 2026-02-08 05:26:09.040826 | orchestrator | skipping: [testbed-node-2] 2026-02-08 05:26:09.040843 | orchestrator | 2026-02-08 05:26:09.040860 | orchestrator | 
TASK [include_role : horizon] ************************************************** 2026-02-08 05:26:09.040876 | orchestrator | Sunday 08 February 2026 05:26:03 +0000 (0:00:00.357) 0:02:19.258 ******* 2026-02-08 05:26:09.040893 | orchestrator | included: horizon for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-08 05:26:09.040910 | orchestrator | 2026-02-08 05:26:09.040927 | orchestrator | TASK [haproxy-config : Copying over horizon haproxy config] ******************** 2026-02-08 05:26:09.040942 | orchestrator | Sunday 08 February 2026 05:26:04 +0000 (0:00:00.994) 0:02:20.253 ******* 2026-02-08 05:26:09.040991 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/horizon:25.3.2.20251208', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 
'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-02-08 05:26:09.732220 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/horizon:25.3.2.20251208', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 
'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-02-08 05:26:09.732360 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/horizon:25.3.2.20251208', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-02-08 05:26:09.732401 | orchestrator | 2026-02-08 05:26:09.732417 | orchestrator | TASK [haproxy-config : Add configuration for horizon when using single external frontend] *** 2026-02-08 05:26:09.732429 | orchestrator | Sunday 08 February 2026 05:26:09 +0000 (0:00:04.051) 0:02:24.304 ******* 2026-02-08 05:26:09.732443 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/horizon:25.3.2.20251208', 'environment': 
{'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': 
[]}}}})  2026-02-08 05:26:09.732467 | orchestrator | skipping: [testbed-node-0] 2026-02-08 05:26:09.732498 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/horizon:25.3.2.20251208', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 
'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-02-08 05:26:15.028651 | orchestrator | skipping: [testbed-node-1] 2026-02-08 05:26:15.028783 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/horizon:25.3.2.20251208', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ 
}']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-02-08 05:26:15.028826 | orchestrator | skipping: [testbed-node-2] 2026-02-08 05:26:15.028838 | orchestrator | 2026-02-08 05:26:15.028849 | orchestrator | TASK [haproxy-config : Configuring firewall for horizon] *********************** 2026-02-08 05:26:15.028859 | orchestrator | Sunday 08 February 2026 05:26:09 +0000 (0:00:00.716) 0:02:25.021 ******* 2026-02-08 05:26:15.028869 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}})  2026-02-08 05:26:15.028882 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-02-08 05:26:15.028893 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}})  2026-02-08 05:26:15.028904 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-02-08 05:26:15.028965 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2026-02-08 05:26:15.028977 | orchestrator | skipping: [testbed-node-0] 2026-02-08 05:26:15.028996 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}})  2026-02-08 05:26:15.029002 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-02-08 05:26:15.029007 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}})  
2026-02-08 05:26:15.029019 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-02-08 05:26:15.029025 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2026-02-08 05:26:15.029030 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}})  2026-02-08 05:26:15.029035 | orchestrator | skipping: [testbed-node-1] 2026-02-08 05:26:15.029041 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-02-08 05:26:15.029049 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}})  2026-02-08 05:26:15.029055 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-02-08 05:26:15.029060 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2026-02-08 05:26:15.029065 | orchestrator | skipping: [testbed-node-2] 2026-02-08 05:26:15.029071 | orchestrator | 2026-02-08 05:26:15.029076 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL users config] ************ 2026-02-08 05:26:15.029082 | orchestrator | Sunday 08 February 2026 05:26:11 +0000 (0:00:01.323) 0:02:26.344 ******* 2026-02-08 05:26:15.029087 | orchestrator | ok: [testbed-node-0] 2026-02-08 05:26:15.029093 | orchestrator | ok: [testbed-node-1] 2026-02-08 05:26:15.029098 | orchestrator | ok: [testbed-node-2] 2026-02-08 05:26:15.029103 | orchestrator | 2026-02-08 05:26:15.029108 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL rules config] ************ 2026-02-08 05:26:15.029113 | orchestrator | Sunday 08 February 2026 05:26:12 +0000 (0:00:01.209) 0:02:27.554 ******* 2026-02-08 05:26:15.029118 | orchestrator | ok: [testbed-node-0] 2026-02-08 05:26:15.029123 | orchestrator | ok: [testbed-node-1] 2026-02-08 05:26:15.029128 | orchestrator | ok: [testbed-node-2] 2026-02-08 05:26:15.029133 | orchestrator | 2026-02-08 05:26:15.029138 | orchestrator | TASK [include_role : influxdb] ************************************************* 2026-02-08 05:26:15.029143 | orchestrator | Sunday 08 February 2026 05:26:14 +0000 (0:00:02.186) 0:02:29.741 ******* 2026-02-08 05:26:15.029148 | orchestrator | skipping: [testbed-node-0] 2026-02-08 05:26:15.029153 | orchestrator | skipping: [testbed-node-1] 2026-02-08 05:26:15.029158 | orchestrator | skipping: [testbed-node-2] 2026-02-08 05:26:15.029164 | orchestrator | 2026-02-08 05:26:15.029169 | orchestrator | TASK [include_role : ironic] 
*************************************************** 2026-02-08 05:26:15.029174 | orchestrator | Sunday 08 February 2026 05:26:14 +0000 (0:00:00.361) 0:02:30.103 ******* 2026-02-08 05:26:15.029183 | orchestrator | skipping: [testbed-node-0] 2026-02-08 05:26:21.408654 | orchestrator | skipping: [testbed-node-1] 2026-02-08 05:26:21.408835 | orchestrator | skipping: [testbed-node-2] 2026-02-08 05:26:21.408854 | orchestrator | 2026-02-08 05:26:21.408865 | orchestrator | TASK [include_role : keystone] ************************************************* 2026-02-08 05:26:21.408876 | orchestrator | Sunday 08 February 2026 05:26:15 +0000 (0:00:00.347) 0:02:30.451 ******* 2026-02-08 05:26:21.408890 | orchestrator | included: keystone for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-08 05:26:21.408904 | orchestrator | 2026-02-08 05:26:21.408919 | orchestrator | TASK [haproxy-config : Copying over keystone haproxy config] ******************* 2026-02-08 05:26:21.408935 | orchestrator | Sunday 08 February 2026 05:26:16 +0000 (0:00:01.339) 0:02:31.790 ******* 2026-02-08 05:26:21.408962 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-02-08 05:26:21.408982 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-ssh:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-08 05:26:21.409017 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-fernet:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-02-08 05:26:21.409033 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-02-08 05:26:21.409082 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-02-08 05:26:21.409103 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/2025.1/keystone-ssh:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-08 05:26:21.409121 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-ssh:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-08 05:26:21.409147 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-fernet:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-02-08 05:26:21.409167 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/2025.1/keystone-fernet:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-02-08 05:26:21.409186 | orchestrator | 2026-02-08 05:26:21.409204 | orchestrator | TASK [haproxy-config : Add configuration for keystone when using single external frontend] *** 2026-02-08 05:26:21.409237 | orchestrator | Sunday 08 February 2026 05:26:20 +0000 (0:00:03.684) 0:02:35.475 ******* 2026-02-08 05:26:21.409269 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-02-08 05:26:22.317447 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-ssh:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-08 05:26:22.317548 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-fernet:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-02-08 05:26:22.317565 | orchestrator | skipping: [testbed-node-0] 2026-02-08 05:26:22.317598 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': 
'30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-02-08 05:26:22.317614 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-ssh:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-08 05:26:22.317647 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-fernet:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-02-08 05:26:22.317658 | orchestrator | skipping: [testbed-node-1] 2026-02-08 05:26:22.317690 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 
'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-02-08 05:26:22.317704 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-ssh:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-08 05:26:22.317768 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-fernet:27.0.1.20251208', 'volumes': 
['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-02-08 05:26:22.317782 | orchestrator | skipping: [testbed-node-2] 2026-02-08 05:26:22.317794 | orchestrator | 2026-02-08 05:26:22.317807 | orchestrator | TASK [haproxy-config : Configuring firewall for keystone] ********************** 2026-02-08 05:26:22.317819 | orchestrator | Sunday 08 February 2026 05:26:21 +0000 (0:00:01.217) 0:02:36.693 ******* 2026-02-08 05:26:22.317832 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}})  2026-02-08 05:26:22.317854 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}})  2026-02-08 05:26:22.317867 | orchestrator | skipping: [testbed-node-0] 2026-02-08 05:26:22.317879 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}})  2026-02-08 05:26:22.317890 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 
'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}})  2026-02-08 05:26:22.317902 | orchestrator | skipping: [testbed-node-1] 2026-02-08 05:26:22.317913 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}})  2026-02-08 05:26:22.317924 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}})  2026-02-08 05:26:22.317935 | orchestrator | skipping: [testbed-node-2] 2026-02-08 05:26:22.317946 | orchestrator | 2026-02-08 05:26:22.317958 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL users config] *********** 2026-02-08 05:26:22.317976 | orchestrator | Sunday 08 February 2026 05:26:22 +0000 (0:00:00.906) 0:02:37.599 ******* 2026-02-08 05:26:32.280593 | orchestrator | ok: [testbed-node-0] 2026-02-08 05:26:32.280794 | orchestrator | ok: [testbed-node-1] 2026-02-08 05:26:32.280811 | orchestrator | ok: [testbed-node-2] 2026-02-08 05:26:32.280819 | orchestrator | 2026-02-08 05:26:32.280827 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL rules config] *********** 2026-02-08 05:26:32.280870 | orchestrator | Sunday 08 February 2026 05:26:23 +0000 (0:00:01.214) 0:02:38.814 ******* 2026-02-08 05:26:32.280879 | orchestrator | ok: [testbed-node-0] 2026-02-08 05:26:32.280886 | orchestrator | ok: [testbed-node-1] 2026-02-08 05:26:32.280893 | orchestrator | ok: [testbed-node-2] 2026-02-08 05:26:32.280900 | orchestrator | 2026-02-08 05:26:32.280906 | orchestrator | TASK [include_role : letsencrypt] ********************************************** 
2026-02-08 05:26:32.280913 | orchestrator | Sunday 08 February 2026 05:26:25 +0000 (0:00:02.265) 0:02:41.079 ******* 2026-02-08 05:26:32.280920 | orchestrator | skipping: [testbed-node-0] 2026-02-08 05:26:32.280928 | orchestrator | skipping: [testbed-node-1] 2026-02-08 05:26:32.280935 | orchestrator | skipping: [testbed-node-2] 2026-02-08 05:26:32.280942 | orchestrator | 2026-02-08 05:26:32.280948 | orchestrator | TASK [include_role : magnum] *************************************************** 2026-02-08 05:26:32.280955 | orchestrator | Sunday 08 February 2026 05:26:26 +0000 (0:00:00.659) 0:02:41.739 ******* 2026-02-08 05:26:32.280961 | orchestrator | included: magnum for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-08 05:26:32.280968 | orchestrator | 2026-02-08 05:26:32.280974 | orchestrator | TASK [haproxy-config : Copying over magnum haproxy config] ********************* 2026-02-08 05:26:32.280981 | orchestrator | Sunday 08 February 2026 05:26:27 +0000 (0:00:01.038) 0:02:42.777 ******* 2026-02-08 05:26:32.281003 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.1.20251208', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-08 05:26:32.281029 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.1.20251208', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-08 05:26:32.281049 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.1.20251208', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-08 05:26:32.281073 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.1.20251208', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-02-08 05:26:32.281080 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.1.20251208', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})
2026-02-08 05:26:32.281096 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.1.20251208', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-02-08 05:26:32.281103 | orchestrator |
2026-02-08 05:26:32.281109 | orchestrator | TASK [haproxy-config : Add configuration for magnum when using single external frontend] ***
2026-02-08 05:26:32.281116 | orchestrator | Sunday 08 February 2026 05:26:31 +0000 (0:00:04.098) 0:02:46.876 *******
2026-02-08 05:26:32.281123 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.1.20251208', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})
2026-02-08 05:26:32.281135 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.1.20251208', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-02-08 05:26:41.931917 | orchestrator | skipping: [testbed-node-0]
2026-02-08 05:26:41.932048 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.1.20251208', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})
2026-02-08 05:26:41.932115 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.1.20251208', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-02-08 05:26:41.932130 | orchestrator | skipping: [testbed-node-1]
2026-02-08 05:26:41.932143 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.1.20251208', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})
2026-02-08 05:26:41.932156 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.1.20251208', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-02-08 05:26:41.932167 | orchestrator | skipping: [testbed-node-2]
2026-02-08 05:26:41.932178 | orchestrator |
2026-02-08 05:26:41.932191 | orchestrator | TASK [haproxy-config : Configuring firewall for magnum] ************************
2026-02-08 05:26:41.932203 | orchestrator | Sunday 08 February 2026 05:26:32 +0000 (0:00:00.691) 0:02:47.567 *******
2026-02-08 05:26:41.932231 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}})
2026-02-08 05:26:41.932247 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}})
2026-02-08 05:26:41.932260 | orchestrator | skipping: [testbed-node-0]
2026-02-08 05:26:41.932271 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}})
2026-02-08 05:26:41.932291 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}})
2026-02-08 05:26:41.932302 | orchestrator | skipping: [testbed-node-1]
2026-02-08 05:26:41.932314 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}})
2026-02-08 05:26:41.932325 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}})
2026-02-08 05:26:41.932336 | orchestrator | skipping: [testbed-node-2]
2026-02-08 05:26:41.932347 | orchestrator |
2026-02-08 05:26:41.932364 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL users config] *************
2026-02-08 05:26:41.932377 | orchestrator | Sunday 08 February 2026 05:26:33 +0000 (0:00:00.951) 0:02:48.519 *******
2026-02-08 05:26:41.932389 | orchestrator | ok: [testbed-node-0]
2026-02-08 05:26:41.932402 | orchestrator | ok: [testbed-node-1]
2026-02-08 05:26:41.932414 | orchestrator | ok: [testbed-node-2]
2026-02-08 05:26:41.932427 | orchestrator |
2026-02-08 05:26:41.932440 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL rules config] *************
2026-02-08 05:26:41.932453 | orchestrator | Sunday 08 February 2026 05:26:34 +0000 (0:00:01.566) 0:02:50.085 *******
2026-02-08 05:26:41.932466 | orchestrator | ok: [testbed-node-0]
2026-02-08 05:26:41.932478 | orchestrator | ok: [testbed-node-1]
2026-02-08 05:26:41.932490 | orchestrator | ok: [testbed-node-2]
2026-02-08 05:26:41.932502 | orchestrator |
2026-02-08 05:26:41.932514 | orchestrator | TASK [include_role : manila] ***************************************************
2026-02-08 05:26:41.932527 | orchestrator | Sunday 08 February 2026 05:26:36 +0000 (0:00:02.146) 0:02:52.232 *******
2026-02-08 05:26:41.932542 | orchestrator | included: manila for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-08 05:26:41.932562 | orchestrator |
2026-02-08 05:26:41.932578 | orchestrator | TASK [haproxy-config : Copying over manila haproxy config] *********************
2026-02-08 05:26:41.932595 | orchestrator | Sunday 08 February 2026 05:26:38 +0000 (0:00:01.083) 0:02:53.316 *******
2026-02-08 05:26:41.932614 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-api:20.0.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}})
2026-02-08 05:26:41.932635 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-scheduler:20.0.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2026-02-08 05:26:41.932681 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-share:20.0.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2026-02-08 05:26:42.673298 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-data:20.0.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2026-02-08 05:26:42.673441 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-api:20.0.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}})
2026-02-08 05:26:42.673459 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-scheduler:20.0.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2026-02-08 05:26:42.673494 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-share:20.0.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2026-02-08 05:26:42.673508 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-data:20.0.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2026-02-08 05:26:42.673561 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-api:20.0.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}})
2026-02-08 05:26:42.673579 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-scheduler:20.0.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2026-02-08 05:26:42.673591 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-share:20.0.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2026-02-08 05:26:42.673603 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-data:20.0.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2026-02-08 05:26:42.673616 | orchestrator |
2026-02-08 05:26:42.673629 | orchestrator | TASK [haproxy-config : Add configuration for manila when using single external frontend] ***
2026-02-08 05:26:42.673641 | orchestrator | Sunday 08 February 2026 05:26:42 +0000 (0:00:04.004) 0:02:57.321 *******
2026-02-08 05:26:42.673655 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-api:20.0.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}})
2026-02-08 05:26:42.673680 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-scheduler:20.0.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2026-02-08 05:26:43.978345 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-share:20.0.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2026-02-08 05:26:43.978462 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-data:20.0.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2026-02-08 05:26:43.978481 | orchestrator | skipping: [testbed-node-0]
2026-02-08 05:26:43.978497 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-api:20.0.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}})
2026-02-08 05:26:43.978511 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-scheduler:20.0.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2026-02-08 05:26:43.978547 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-share:20.0.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2026-02-08 05:26:43.978599 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-data:20.0.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2026-02-08 05:26:43.978612 | orchestrator | skipping: [testbed-node-1]
2026-02-08 05:26:43.978630 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-api:20.0.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}})
2026-02-08 05:26:43.978643 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-scheduler:20.0.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2026-02-08 05:26:43.978656 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-share:20.0.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2026-02-08 05:26:43.978676 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-data:20.0.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2026-02-08 05:26:43.978687 | orchestrator | skipping: [testbed-node-2]
2026-02-08 05:26:43.978699 | orchestrator |
2026-02-08 05:26:43.978712 | orchestrator | TASK [haproxy-config : Configuring firewall for manila] ************************
2026-02-08 05:26:43.978724 | orchestrator | Sunday 08 February 2026 05:26:42 +0000 (0:00:00.736) 0:02:58.057 *******
2026-02-08 05:26:43.978800 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}})
2026-02-08 05:26:43.978816 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}})
2026-02-08 05:26:43.978829 | orchestrator | skipping: [testbed-node-0]
2026-02-08 05:26:43.978840 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}})
2026-02-08 05:26:43.978861 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}})
2026-02-08 05:26:55.223329 | orchestrator | skipping: [testbed-node-1]
2026-02-08 05:26:55.223441 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}})
2026-02-08 05:26:55.223461 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}})
2026-02-08 05:26:55.223475 | orchestrator | skipping: [testbed-node-2]
2026-02-08 05:26:55.223487 | orchestrator |
2026-02-08 05:26:55.223500 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL users config] *************
2026-02-08 05:26:55.223529 | orchestrator | Sunday 08 February 2026 05:26:43 +0000 (0:00:01.203) 0:02:59.261 *******
2026-02-08 05:26:55.223541 | orchestrator | ok: [testbed-node-0]
2026-02-08 05:26:55.223553 | orchestrator | ok: [testbed-node-1]
2026-02-08 05:26:55.223564 | orchestrator | ok: [testbed-node-2]
2026-02-08 05:26:55.223575 | orchestrator |
2026-02-08 05:26:55.223586 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL rules config] *************
2026-02-08 05:26:55.223597 | orchestrator | Sunday 08 February 2026 05:26:45 +0000 (0:00:01.239) 0:03:00.501 *******
2026-02-08 05:26:55.223608 | orchestrator | ok: [testbed-node-0]
2026-02-08 05:26:55.223619 | orchestrator | ok: [testbed-node-1]
2026-02-08 05:26:55.223630 | orchestrator | ok: [testbed-node-2]
2026-02-08 05:26:55.223641 | orchestrator |
2026-02-08 05:26:55.223652 | orchestrator | TASK [include_role : mariadb] **************************************************
2026-02-08 05:26:55.223663 | orchestrator | Sunday 08 February 2026 05:26:47 +0000 (0:00:02.197) 0:03:02.699 *******
2026-02-08 05:26:55.223674 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-08 05:26:55.223685 | orchestrator |
2026-02-08 05:26:55.223696 | orchestrator | TASK [mariadb : Ensure mysql monitor user exist] *******************************
2026-02-08 05:26:55.223727 | orchestrator | Sunday 08 February 2026 05:26:48 +0000 (0:00:01.551) 0:03:04.250 *******
2026-02-08 05:26:55.223790 | orchestrator | changed: [testbed-node-0] => (item=testbed-node-0)
2026-02-08 05:26:55.223803 | orchestrator |
2026-02-08 05:26:55.223814 | orchestrator | TASK [haproxy-config : Copying over mariadb haproxy config] ********************
2026-02-08 05:26:55.223825 | orchestrator | Sunday 08 February 2026 05:26:52 +0000 (0:00:03.702) 0:03:07.953 *******
2026-02-08 05:26:55.223840 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-02-08 05:26:55.223875 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-clustercheck:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})
2026-02-08 05:26:55.223891 | orchestrator | skipping: [testbed-node-0]
2026-02-08 05:26:55.223912 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-02-08 05:26:55.223936 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-clustercheck:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})
2026-02-08 05:26:55.223948 | orchestrator | skipping: [testbed-node-1]
2026-02-08 05:26:55.223969 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0
192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-08 05:26:57.804360 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-clustercheck:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-02-08 05:26:57.804468 | orchestrator | skipping: [testbed-node-2] 2026-02-08 05:26:57.804479 | orchestrator | 2026-02-08 05:26:57.804487 | orchestrator | TASK [haproxy-config : Add configuration for mariadb when using single external frontend] *** 2026-02-08 05:26:57.804495 | orchestrator | Sunday 08 February 2026 05:26:55 +0000 (0:00:02.545) 0:03:10.498 ******* 2026-02-08 05:26:57.804536 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-08 05:26:57.804545 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 
'registry.osism.tech/kolla/release/2025.1/mariadb-clustercheck:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-02-08 05:26:57.804552 | orchestrator | skipping: [testbed-node-0] 2026-02-08 05:26:57.804576 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 
'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-08 05:26:57.804590 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-clustercheck:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-02-08 05:26:57.804596 | orchestrator | skipping: [testbed-node-1] 2026-02-08 05:26:57.804603 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 
'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-08 05:26:57.804614 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-clustercheck:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-02-08 05:27:08.050534 | orchestrator | skipping: [testbed-node-2] 2026-02-08 05:27:08.050693 | orchestrator | 2026-02-08 05:27:08.050717 | orchestrator | TASK [haproxy-config : Configuring firewall for mariadb] *********************** 2026-02-08 05:27:08.050819 | orchestrator | Sunday 08 February 2026 05:26:57 
+0000 (0:00:02.587) 0:03:13.085 ******* 2026-02-08 05:27:08.050845 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-02-08 05:27:08.050870 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-02-08 05:27:08.050889 | orchestrator | skipping: [testbed-node-0] 2026-02-08 05:27:08.050907 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-02-08 
05:27:08.050925 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-02-08 05:27:08.050944 | orchestrator | skipping: [testbed-node-1] 2026-02-08 05:27:08.050962 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-02-08 05:27:08.050980 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-02-08 05:27:08.051009 | orchestrator | skipping: 
[testbed-node-2] 2026-02-08 05:27:08.051026 | orchestrator | 2026-02-08 05:27:08.051045 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL users config] ************ 2026-02-08 05:27:08.051065 | orchestrator | Sunday 08 February 2026 05:27:01 +0000 (0:00:03.314) 0:03:16.400 ******* 2026-02-08 05:27:08.051084 | orchestrator | ok: [testbed-node-0] 2026-02-08 05:27:08.051127 | orchestrator | ok: [testbed-node-1] 2026-02-08 05:27:08.051146 | orchestrator | ok: [testbed-node-2] 2026-02-08 05:27:08.051165 | orchestrator | 2026-02-08 05:27:08.051184 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL rules config] ************ 2026-02-08 05:27:08.051211 | orchestrator | Sunday 08 February 2026 05:27:02 +0000 (0:00:01.809) 0:03:18.209 ******* 2026-02-08 05:27:08.051231 | orchestrator | skipping: [testbed-node-0] 2026-02-08 05:27:08.051251 | orchestrator | skipping: [testbed-node-1] 2026-02-08 05:27:08.051270 | orchestrator | skipping: [testbed-node-2] 2026-02-08 05:27:08.051290 | orchestrator | 2026-02-08 05:27:08.051311 | orchestrator | TASK [include_role : masakari] ************************************************* 2026-02-08 05:27:08.051330 | orchestrator | Sunday 08 February 2026 05:27:04 +0000 (0:00:01.651) 0:03:19.861 ******* 2026-02-08 05:27:08.051348 | orchestrator | skipping: [testbed-node-0] 2026-02-08 05:27:08.051363 | orchestrator | skipping: [testbed-node-1] 2026-02-08 05:27:08.051377 | orchestrator | skipping: [testbed-node-2] 2026-02-08 05:27:08.051392 | orchestrator | 2026-02-08 05:27:08.051406 | orchestrator | TASK [include_role : memcached] ************************************************ 2026-02-08 05:27:08.051420 | orchestrator | Sunday 08 February 2026 05:27:04 +0000 (0:00:00.344) 0:03:20.205 ******* 2026-02-08 05:27:08.051434 | orchestrator | included: memcached for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-08 05:27:08.051449 | orchestrator | 2026-02-08 05:27:08.051463 | orchestrator | TASK 
[haproxy-config : Copying over memcached haproxy config] ****************** 2026-02-08 05:27:08.051477 | orchestrator | Sunday 08 February 2026 05:27:06 +0000 (0:00:01.495) 0:03:21.701 ******* 2026-02-08 05:27:08.051493 | orchestrator | ok: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/2025.1/memcached:1.6.24.20251208', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-02-08 05:27:08.051509 | orchestrator | ok: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/2025.1/memcached:1.6.24.20251208', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-02-08 05:27:08.051525 | orchestrator | ok: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 
'registry.osism.tech/kolla/release/2025.1/memcached:1.6.24.20251208', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-02-08 05:27:08.051549 | orchestrator | 2026-02-08 05:27:08.051563 | orchestrator | TASK [haproxy-config : Add configuration for memcached when using single external frontend] *** 2026-02-08 05:27:08.051577 | orchestrator | Sunday 08 February 2026 05:27:07 +0000 (0:00:01.499) 0:03:23.201 ******* 2026-02-08 05:27:08.051598 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/2025.1/memcached:1.6.24.20251208', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-02-08 05:27:17.907384 | orchestrator | skipping: [testbed-node-0] 2026-02-08 05:27:17.907507 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 
'registry.osism.tech/kolla/release/2025.1/memcached:1.6.24.20251208', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-02-08 05:27:17.907533 | orchestrator | skipping: [testbed-node-1] 2026-02-08 05:27:17.907552 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/2025.1/memcached:1.6.24.20251208', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-02-08 05:27:17.907571 | orchestrator | skipping: [testbed-node-2] 2026-02-08 05:27:17.907589 | orchestrator | 2026-02-08 05:27:17.907608 | orchestrator | TASK [haproxy-config : Configuring firewall for memcached] ********************* 2026-02-08 05:27:17.907627 | orchestrator | Sunday 08 February 2026 05:27:08 +0000 (0:00:00.448) 0:03:23.649 ******* 2026-02-08 05:27:17.907645 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': 
{'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2026-02-08 05:27:17.907699 | orchestrator | skipping: [testbed-node-0] 2026-02-08 05:27:17.907716 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2026-02-08 05:27:17.907783 | orchestrator | skipping: [testbed-node-1] 2026-02-08 05:27:17.907805 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2026-02-08 05:27:17.907824 | orchestrator | skipping: [testbed-node-2] 2026-02-08 05:27:17.907842 | orchestrator | 2026-02-08 05:27:17.907862 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL users config] ********** 2026-02-08 05:27:17.907879 | orchestrator | Sunday 08 February 2026 05:27:09 +0000 (0:00:01.026) 0:03:24.675 ******* 2026-02-08 05:27:17.907896 | orchestrator | skipping: [testbed-node-0] 2026-02-08 05:27:17.907913 | orchestrator | skipping: [testbed-node-1] 2026-02-08 05:27:17.907930 | orchestrator | skipping: [testbed-node-2] 2026-02-08 05:27:17.907947 | orchestrator | 2026-02-08 05:27:17.907964 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL rules config] ********** 2026-02-08 05:27:17.907981 | orchestrator | Sunday 08 February 2026 05:27:09 +0000 (0:00:00.498) 0:03:25.173 ******* 2026-02-08 05:27:17.907998 | orchestrator | skipping: [testbed-node-0] 2026-02-08 05:27:17.908015 | orchestrator | skipping: [testbed-node-1] 
2026-02-08 05:27:17.908032 | orchestrator | skipping: [testbed-node-2] 2026-02-08 05:27:17.908049 | orchestrator | 2026-02-08 05:27:17.908067 | orchestrator | TASK [include_role : mistral] ************************************************** 2026-02-08 05:27:17.908084 | orchestrator | Sunday 08 February 2026 05:27:11 +0000 (0:00:01.843) 0:03:27.017 ******* 2026-02-08 05:27:17.908100 | orchestrator | skipping: [testbed-node-0] 2026-02-08 05:27:17.908117 | orchestrator | skipping: [testbed-node-1] 2026-02-08 05:27:17.908134 | orchestrator | skipping: [testbed-node-2] 2026-02-08 05:27:17.908151 | orchestrator | 2026-02-08 05:27:17.908169 | orchestrator | TASK [include_role : neutron] ************************************************** 2026-02-08 05:27:17.908186 | orchestrator | Sunday 08 February 2026 05:27:12 +0000 (0:00:00.640) 0:03:27.658 ******* 2026-02-08 05:27:17.908202 | orchestrator | included: neutron for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-08 05:27:17.908219 | orchestrator | 2026-02-08 05:27:17.908236 | orchestrator | TASK [haproxy-config : Copying over neutron haproxy config] ******************** 2026-02-08 05:27:17.908253 | orchestrator | Sunday 08 February 2026 05:27:13 +0000 (0:00:01.272) 0:03:28.930 ******* 2026-02-08 05:27:17.908304 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20251208', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 
'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-08 05:27:17.908326 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-openvswitch-agent:26.0.3.20251208', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-02-08 05:27:17.908357 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-dhcp-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': 
'30'}, 'pid_mode': '', 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2025.1/neutron-dhcp-agent:26.0.3.20251208', 'KOLLA_NAME': 'neutron_dhcp_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}}})  2026-02-08 05:27:17.908378 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-l3-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2025.1/neutron-l3-agent:26.0.3.20251208', 'KOLLA_LEGACY_IPTABLES': 'false', 'KOLLA_NAME': 'neutron_l3_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}, 'pid_mode': ''}})  2026-02-08 05:27:17.908413 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-sriov-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-02-08 
05:27:18.252410 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-mlnx-agent:26.0.3.20251208', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-02-08 05:27:18.252537 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-eswitchd:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-02-08 05:27:18.252606 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-02-08 05:27:18.252628 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 
'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20251208', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-08 05:27:18.252649 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-bgp-dragent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-02-08 05:27:18.252683 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-infoblox-ipam-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}}})  2026-02-08 05:27:18.252724 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metering-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-02-08 05:27:18.252742 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/ironic-neutron-agent:26.0.3.20251208', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-02-08 05:27:18.252859 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-tls-proxy:26.0.3.20251208', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 
'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-02-08 05:27:18.252880 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/2025.1/neutron-ovn-agent:26.0.3.20251208', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-02-08 05:27:18.252899 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20251208', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-08 05:27:18.252937 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-openvswitch-agent:26.0.3.20251208', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-02-08 05:27:18.401484 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-dhcp-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}, 'pid_mode': '', 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2025.1/neutron-dhcp-agent:26.0.3.20251208', 'KOLLA_NAME': 'neutron_dhcp_agent', 'KOLLA_NEUTRON_WRAPPERS': 
'false'}}})  2026-02-08 05:27:18.401637 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-l3-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2025.1/neutron-l3-agent:26.0.3.20251208', 'KOLLA_LEGACY_IPTABLES': 'false', 'KOLLA_NAME': 'neutron_l3_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}, 'pid_mode': ''}})  2026-02-08 05:27:18.401662 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-sriov-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-02-08 05:27:18.401677 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 
'registry.osism.tech/kolla/release/2025.1/neutron-mlnx-agent:26.0.3.20251208', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-02-08 05:27:18.401704 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-eswitchd:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-02-08 05:27:18.401864 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-02-08 05:27:18.401906 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20251208', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-08 05:27:18.401920 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-bgp-dragent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-02-08 05:27:18.401932 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-infoblox-ipam-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}}})  2026-02-08 05:27:18.401944 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 
'registry.osism.tech/kolla/release/2025.1/neutron-metering-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-02-08 05:27:18.401963 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/ironic-neutron-agent:26.0.3.20251208', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-02-08 05:27:18.401987 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20251208', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 
'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-08 05:27:18.623687 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-tls-proxy:26.0.3.20251208', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-02-08 05:27:18.623836 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-openvswitch-agent:26.0.3.20251208', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-02-08 05:27:18.623854 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/2025.1/neutron-ovn-agent:26.0.3.20251208', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-02-08 05:27:18.623892 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-dhcp-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}, 'pid_mode': '', 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2025.1/neutron-dhcp-agent:26.0.3.20251208', 'KOLLA_NAME': 'neutron_dhcp_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}}})  2026-02-08 05:27:18.623946 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'cgroupns_mode': 
'private', 'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-l3-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2025.1/neutron-l3-agent:26.0.3.20251208', 'KOLLA_LEGACY_IPTABLES': 'false', 'KOLLA_NAME': 'neutron_l3_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}, 'pid_mode': ''}})  2026-02-08 05:27:18.623960 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-sriov-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-02-08 05:27:18.623972 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-mlnx-agent:26.0.3.20251208', 'enabled': False, 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-02-08 05:27:18.623985 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-eswitchd:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-02-08 05:27:18.623997 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-02-08 05:27:18.624015 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20251208', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 
'/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-08 05:27:18.624041 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-bgp-dragent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-02-08 05:27:19.908510 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-infoblox-ipam-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}}})  2026-02-08 05:27:19.908613 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metering-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': 
True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-02-08 05:27:19.908630 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/ironic-neutron-agent:26.0.3.20251208', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-02-08 05:27:19.908663 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-tls-proxy:26.0.3.20251208', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 
'yes'}}}})  2026-02-08 05:27:19.908678 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/2025.1/neutron-ovn-agent:26.0.3.20251208', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-02-08 05:27:19.908711 | orchestrator | 2026-02-08 05:27:19.908725 | orchestrator | TASK [haproxy-config : Add configuration for neutron when using single external frontend] *** 2026-02-08 05:27:19.908737 | orchestrator | Sunday 08 February 2026 05:27:18 +0000 (0:00:05.094) 0:03:34.025 ******* 2026-02-08 05:27:19.908835 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20251208', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 
'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-02-08 05:27:19.908851 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-openvswitch-agent:26.0.3.20251208', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-02-08 05:27:19.908864 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-dhcp-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}, 'pid_mode': '', 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2025.1/neutron-dhcp-agent:26.0.3.20251208', 'KOLLA_NAME': 'neutron_dhcp_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}}})  2026-02-08 05:27:19.908883 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-l3-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2025.1/neutron-l3-agent:26.0.3.20251208', 'KOLLA_LEGACY_IPTABLES': 'false', 'KOLLA_NAME': 'neutron_l3_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}, 'pid_mode': ''}})  2026-02-08 05:27:19.908910 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-sriov-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-02-08 05:27:20.008279 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-mlnx-agent:26.0.3.20251208', 'enabled': 
False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-02-08 05:27:20.008385 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-eswitchd:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-02-08 05:27:20.008404 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-02-08 05:27:20.008418 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20251208', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-08 05:27:20.008449 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-bgp-dragent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-02-08 05:27:20.008506 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20251208', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-02-08 05:27:20.008522 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-openvswitch-agent:26.0.3.20251208', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-02-08 05:27:20.008535 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-infoblox-ipam-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}}})  2026-02-08 05:27:20.008548 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-dhcp-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}, 'pid_mode': '', 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2025.1/neutron-dhcp-agent:26.0.3.20251208', 'KOLLA_NAME': 'neutron_dhcp_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}}})  2026-02-08 05:27:20.008573 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metering-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-02-08 05:27:20.008585 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-l3-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2025.1/neutron-l3-agent:26.0.3.20251208', 'KOLLA_LEGACY_IPTABLES': 'false', 'KOLLA_NAME': 'neutron_l3_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}, 'pid_mode': ''}})  2026-02-08 05:27:20.008606 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/ironic-neutron-agent:26.0.3.20251208', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-02-08 05:27:20.094660 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-sriov-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-02-08 05:27:20.094866 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 
'no', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-tls-proxy:26.0.3.20251208', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-02-08 05:27:20.094929 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-mlnx-agent:26.0.3.20251208', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-02-08 05:27:20.094944 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/2025.1/neutron-ovn-agent:26.0.3.20251208', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-02-08 05:27:20.094956 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-eswitchd:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-02-08 05:27:20.094969 | orchestrator | skipping: [testbed-node-0] 2026-02-08 05:27:20.095004 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20251208', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-02-08 05:27:20.095017 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20251208', 
'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-02-08 05:27:20.095030 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20251208', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-08 05:27:20.095050 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-openvswitch-agent:26.0.3.20251208', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-02-08 05:27:20.095063 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-bgp-dragent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-02-08 05:27:20.095134 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-dhcp-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}, 'pid_mode': '', 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2025.1/neutron-dhcp-agent:26.0.3.20251208', 'KOLLA_NAME': 'neutron_dhcp_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}}})  2026-02-08 05:27:20.318960 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-infoblox-ipam-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}}})  2026-02-08 05:27:20.319065 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-l3-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2025.1/neutron-l3-agent:26.0.3.20251208', 'KOLLA_LEGACY_IPTABLES': 'false', 'KOLLA_NAME': 'neutron_l3_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}, 'pid_mode': ''}})  2026-02-08 05:27:20.319118 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metering-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-02-08 05:27:20.319128 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-sriov-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-02-08 05:27:20.319136 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/ironic-neutron-agent:26.0.3.20251208', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-02-08 05:27:20.319145 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-mlnx-agent:26.0.3.20251208', 'enabled': False, 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-02-08 05:27:20.319168 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-tls-proxy:26.0.3.20251208', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-02-08 05:27:20.319187 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-eswitchd:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-02-08 05:27:20.319198 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 
'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/2025.1/neutron-ovn-agent:26.0.3.20251208', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-02-08 05:27:20.319207 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-02-08 05:27:20.319215 | orchestrator | skipping: [testbed-node-1] 2026-02-08 05:27:20.319224 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20251208', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-08 05:27:20.319238 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-bgp-dragent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-02-08 05:27:30.679193 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-infoblox-ipam-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}}})  2026-02-08 05:27:30.679336 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metering-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-02-08 05:27:30.679370 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/ironic-neutron-agent:26.0.3.20251208', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-02-08 05:27:30.679387 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-tls-proxy:26.0.3.20251208', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-02-08 05:27:30.679402 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 
'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/2025.1/neutron-ovn-agent:26.0.3.20251208', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-02-08 05:27:30.679414 | orchestrator | skipping: [testbed-node-2] 2026-02-08 05:27:30.679428 | orchestrator | 2026-02-08 05:27:30.679441 | orchestrator | TASK [haproxy-config : Configuring firewall for neutron] *********************** 2026-02-08 05:27:30.679454 | orchestrator | Sunday 08 February 2026 05:27:20 +0000 (0:00:01.576) 0:03:35.601 ******* 2026-02-08 05:27:30.679466 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}})  2026-02-08 05:27:30.679498 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}})  2026-02-08 05:27:30.679511 | orchestrator | skipping: [testbed-node-0] 2026-02-08 05:27:30.679523 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}})  2026-02-08 05:27:30.679546 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}})  2026-02-08 05:27:30.679557 | orchestrator | skipping: [testbed-node-1] 2026-02-08 05:27:30.679568 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}})  2026-02-08 05:27:30.679580 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}})  2026-02-08 05:27:30.679591 | orchestrator | skipping: [testbed-node-2] 2026-02-08 05:27:30.679602 | orchestrator | 2026-02-08 05:27:30.679613 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL users config] ************ 2026-02-08 05:27:30.679625 | orchestrator | Sunday 08 February 2026 05:27:21 +0000 (0:00:01.539) 0:03:37.140 ******* 2026-02-08 05:27:30.679637 | orchestrator | ok: [testbed-node-0] 2026-02-08 05:27:30.679648 | orchestrator | ok: [testbed-node-1] 2026-02-08 05:27:30.679659 | orchestrator | ok: [testbed-node-2] 2026-02-08 05:27:30.679670 | orchestrator | 2026-02-08 05:27:30.679681 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL rules config] ************ 2026-02-08 05:27:30.679697 | orchestrator | Sunday 08 February 2026 05:27:23 +0000 (0:00:01.535) 0:03:38.676 ******* 2026-02-08 05:27:30.679708 | orchestrator | ok: [testbed-node-0] 2026-02-08 05:27:30.679719 | orchestrator | ok: [testbed-node-1] 2026-02-08 05:27:30.679731 | orchestrator | ok: [testbed-node-2] 2026-02-08 05:27:30.679744 | orchestrator | 2026-02-08 05:27:30.679788 | orchestrator | TASK [include_role : placement] ************************************************ 2026-02-08 05:27:30.679802 | orchestrator | Sunday 08 February 
2026 05:27:25 +0000 (0:00:02.142) 0:03:40.818 ******* 2026-02-08 05:27:30.679815 | orchestrator | included: placement for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-08 05:27:30.679829 | orchestrator | 2026-02-08 05:27:30.679841 | orchestrator | TASK [haproxy-config : Copying over placement haproxy config] ****************** 2026-02-08 05:27:30.679853 | orchestrator | Sunday 08 February 2026 05:27:26 +0000 (0:00:01.249) 0:03:42.068 ******* 2026-02-08 05:27:30.679865 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/2025.1/placement-api:13.0.0.20251208', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-02-08 05:27:30.679887 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/2025.1/placement-api:13.0.0.20251208', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-02-08 05:27:43.337652 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/2025.1/placement-api:13.0.0.20251208', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-02-08 05:27:43.337746 | orchestrator | 2026-02-08 05:27:43.337799 | orchestrator | TASK [haproxy-config : Add configuration for placement when using single external frontend] *** 2026-02-08 05:27:43.337809 | orchestrator | Sunday 08 February 2026 
05:27:30 +0000 (0:00:03.893) 0:03:45.961 ******* 2026-02-08 05:27:43.337830 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/2025.1/placement-api:13.0.0.20251208', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-02-08 05:27:43.337839 | orchestrator | skipping: [testbed-node-0] 2026-02-08 05:27:43.337848 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/2025.1/placement-api:13.0.0.20251208', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 
'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-02-08 05:27:43.337872 | orchestrator | skipping: [testbed-node-1] 2026-02-08 05:27:43.337895 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/2025.1/placement-api:13.0.0.20251208', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-02-08 05:27:43.337903 | orchestrator | skipping: [testbed-node-2] 2026-02-08 05:27:43.337910 | orchestrator | 2026-02-08 05:27:43.337917 | orchestrator | TASK [haproxy-config : Configuring firewall for placement] ********************* 2026-02-08 05:27:43.337924 | orchestrator | Sunday 08 February 2026 05:27:31 +0000 (0:00:00.579) 0:03:46.541 ******* 2026-02-08 05:27:43.337933 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 
'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-02-08 05:27:43.337942 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-02-08 05:27:43.337951 | orchestrator | skipping: [testbed-node-0] 2026-02-08 05:27:43.337962 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-02-08 05:27:43.337970 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-02-08 05:27:43.337976 | orchestrator | skipping: [testbed-node-1] 2026-02-08 05:27:43.337983 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-02-08 05:27:43.337990 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-02-08 05:27:43.337997 | orchestrator | skipping: [testbed-node-2] 2026-02-08 05:27:43.338004 | orchestrator | 2026-02-08 05:27:43.338011 | orchestrator | TASK [proxysql-config : 
Copying over placement ProxySQL users config] ********** 2026-02-08 05:27:43.338060 | orchestrator | Sunday 08 February 2026 05:27:32 +0000 (0:00:01.183) 0:03:47.724 ******* 2026-02-08 05:27:43.338067 | orchestrator | ok: [testbed-node-0] 2026-02-08 05:27:43.338075 | orchestrator | ok: [testbed-node-1] 2026-02-08 05:27:43.338088 | orchestrator | ok: [testbed-node-2] 2026-02-08 05:27:43.338094 | orchestrator | 2026-02-08 05:27:43.338101 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL rules config] ********** 2026-02-08 05:27:43.338108 | orchestrator | Sunday 08 February 2026 05:27:33 +0000 (0:00:01.246) 0:03:48.971 ******* 2026-02-08 05:27:43.338115 | orchestrator | ok: [testbed-node-0] 2026-02-08 05:27:43.338122 | orchestrator | ok: [testbed-node-1] 2026-02-08 05:27:43.338129 | orchestrator | ok: [testbed-node-2] 2026-02-08 05:27:43.338135 | orchestrator | 2026-02-08 05:27:43.338142 | orchestrator | TASK [include_role : nova] ***************************************************** 2026-02-08 05:27:43.338149 | orchestrator | Sunday 08 February 2026 05:27:35 +0000 (0:00:02.315) 0:03:51.287 ******* 2026-02-08 05:27:43.338156 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-08 05:27:43.338163 | orchestrator | 2026-02-08 05:27:43.338170 | orchestrator | TASK [haproxy-config : Copying over nova haproxy config] *********************** 2026-02-08 05:27:43.338177 | orchestrator | Sunday 08 February 2026 05:27:37 +0000 (0:00:01.579) 0:03:52.866 ******* 2026-02-08 05:27:43.338190 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-08 05:27:43.477169 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-08 05:27:43.477326 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 
'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-08 05:27:43.477384 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 
'backend_http_extra': ['option httpchk']}}}}) 2026-02-08 05:27:43.477405 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-08 05:27:43.477454 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-super-conductor:31.2.1.20251208', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-02-08 05:27:43.477486 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8775 '], 'timeout': 
'30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-08 05:27:43.477507 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-08 05:27:43.477537 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-super-conductor:31.2.1.20251208', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-02-08 05:27:43.477556 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 
'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-08 05:27:43.477589 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-08 05:27:44.233626 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-super-conductor:31.2.1.20251208', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-02-08 05:27:44.233729 | orchestrator | 2026-02-08 05:27:44.233747 | orchestrator | TASK [haproxy-config : Add configuration for nova when using single external frontend] *** 2026-02-08 05:27:44.233795 | orchestrator | Sunday 08 February 2026 05:27:43 +0000 (0:00:05.898) 0:03:58.764 ******* 2026-02-08 05:27:44.233829 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-02-08 05:27:44.233865 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20251208', 'enabled': True, 'volumes': 
['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-02-08 05:27:44.233880 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-08 05:27:44.233910 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-super-conductor:31.2.1.20251208', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-02-08 05:27:44.233923 | orchestrator | skipping: [testbed-node-0] 2026-02-08 05:27:44.233941 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-02-08 05:27:44.233962 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8775 '], 'timeout': '30'}, 'wsgi': 
'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-02-08 05:27:44.233974 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-08 05:27:44.233987 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-super-conductor:31.2.1.20251208', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-02-08 05:27:44.233999 | orchestrator | skipping: [testbed-node-1] 2026-02-08 05:27:44.234071 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 
'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-02-08 05:27:57.149888 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 
'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-02-08 05:27:57.150089 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-02-08 05:27:57.150120 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-super-conductor:31.2.1.20251208', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-02-08 05:27:57.150141 | orchestrator | skipping: [testbed-node-2]
2026-02-08 05:27:57.150174 | orchestrator |
2026-02-08 05:27:57.150195 | orchestrator | TASK [haproxy-config : Configuring firewall for nova] **************************
2026-02-08 05:27:57.150215 | orchestrator | Sunday 08 February 2026 05:27:44 +0000 (0:00:00.876) 0:03:59.641 *******
2026-02-08 05:27:57.150233 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no',
'backend_http_extra': ['option httpchk']}})
2026-02-08 05:27:57.150254 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})
2026-02-08 05:27:57.150274 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})
2026-02-08 05:27:57.150294 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})
2026-02-08 05:27:57.150314 | orchestrator | skipping: [testbed-node-0]
2026-02-08 05:27:57.150333 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})
2026-02-08 05:27:57.150378 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})
2026-02-08 05:27:57.150425 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})
2026-02-08 05:27:57.150446 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata_external',
'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})
2026-02-08 05:27:57.150465 | orchestrator | skipping: [testbed-node-1]
2026-02-08 05:27:57.150484 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})
2026-02-08 05:27:57.150503 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})
2026-02-08 05:27:57.150523 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})
2026-02-08 05:27:57.150543 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})
2026-02-08 05:27:57.150563 | orchestrator | skipping: [testbed-node-2]
2026-02-08 05:27:57.150582 | orchestrator |
2026-02-08 05:27:57.150601 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL users config] ***************
2026-02-08 05:27:57.150622 | orchestrator | Sunday 08 February 2026 05:27:46 +0000 (0:00:01.775) 0:04:01.416 *******
2026-02-08 05:27:57.150640 | orchestrator | ok: [testbed-node-0]
2026-02-08 05:27:57.150661 | orchestrator | ok: [testbed-node-1]
2026-02-08 05:27:57.150680 | orchestrator | ok:
[testbed-node-2]
2026-02-08 05:27:57.150698 | orchestrator |
2026-02-08 05:27:57.150712 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL rules config] ***************
2026-02-08 05:27:57.150723 | orchestrator | Sunday 08 February 2026 05:27:47 +0000 (0:00:01.271) 0:04:02.688 *******
2026-02-08 05:27:57.150734 | orchestrator | ok: [testbed-node-0]
2026-02-08 05:27:57.150744 | orchestrator | ok: [testbed-node-1]
2026-02-08 05:27:57.150755 | orchestrator | ok: [testbed-node-2]
2026-02-08 05:27:57.150798 | orchestrator |
2026-02-08 05:27:57.150809 | orchestrator | TASK [include_role : nova-cell] ************************************************
2026-02-08 05:27:57.150820 | orchestrator | Sunday 08 February 2026 05:27:49 +0000 (0:00:02.253) 0:04:04.941 *******
2026-02-08 05:27:57.150831 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-08 05:27:57.150842 | orchestrator |
2026-02-08 05:27:57.150853 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-novncproxy] ******************
2026-02-08 05:27:57.150864 | orchestrator | Sunday 08 February 2026 05:27:51 +0000 (0:00:02.051) 0:04:06.992 *******
2026-02-08 05:27:57.150876 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-novncproxy)
2026-02-08 05:27:57.150897 | orchestrator |
2026-02-08 05:27:57.150915 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config] ***
2026-02-08 05:27:57.150934 | orchestrator | Sunday 08 February 2026 05:27:52 +0000 (0:00:00.930) 0:04:07.923 *******
2026-02-08 05:27:57.150956 | orchestrator | ok: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']},
'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2026-02-08 05:27:57.150994 | orchestrator | ok: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2026-02-08 05:27:57.151031 | orchestrator | ok: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2026-02-08 05:28:11.162137 | orchestrator |
2026-02-08 05:28:11.162255 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-novncproxy when using single external frontend] ***
2026-02-08 05:28:11.162274 | orchestrator | Sunday 08 February 2026 05:27:57 +0000 (0:00:04.506) 0:04:12.430 *******
2026-02-08 05:28:11.162289 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http',
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2026-02-08 05:28:11.162303 | orchestrator | skipping: [testbed-node-0]
2026-02-08 05:28:11.162317 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2026-02-08 05:28:11.162329 | orchestrator | skipping: [testbed-node-1]
2026-02-08 05:28:11.162341 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2026-02-08 05:28:11.162353 | orchestrator | skipping: [testbed-node-2]
2026-02-08 05:28:11.162364 | orchestrator |
2026-02-08 05:28:11.162376 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-novncproxy] *****
2026-02-08 05:28:11.162388 | orchestrator | Sunday 08 February 2026 05:27:58 +0000 (0:00:01.528) 0:04:13.958 *******
2026-02-08 05:28:11.162400 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-02-08
05:28:11.162441 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})
2026-02-08 05:28:11.162455 | orchestrator | skipping: [testbed-node-0]
2026-02-08 05:28:11.162466 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})
2026-02-08 05:28:11.162478 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})
2026-02-08 05:28:11.162489 | orchestrator | skipping: [testbed-node-1]
2026-02-08 05:28:11.162500 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})
2026-02-08 05:28:11.162511 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})
2026-02-08 05:28:11.162522 | orchestrator | skipping: [testbed-node-2]
2026-02-08 05:28:11.162533 | orchestrator |
2026-02-08 05:28:11.162545 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] **********
2026-02-08 05:28:11.162556 | orchestrator | Sunday 08 February 2026 05:28:00 +0000 (0:00:01.665) 0:04:15.624 *******
2026-02-08 05:28:11.162567 | orchestrator | ok: [testbed-node-0] 2026-02-08
05:28:11.162579 | orchestrator | ok: [testbed-node-1]
2026-02-08 05:28:11.162589 | orchestrator | ok: [testbed-node-2]
2026-02-08 05:28:11.162600 | orchestrator |
2026-02-08 05:28:11.162611 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] **********
2026-02-08 05:28:11.162641 | orchestrator | Sunday 08 February 2026 05:28:03 +0000 (0:00:03.215) 0:04:18.839 *******
2026-02-08 05:28:11.162654 | orchestrator | ok: [testbed-node-0]
2026-02-08 05:28:11.162666 | orchestrator | ok: [testbed-node-1]
2026-02-08 05:28:11.162697 | orchestrator | ok: [testbed-node-2]
2026-02-08 05:28:11.162710 | orchestrator |
2026-02-08 05:28:11.162723 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-spicehtml5proxy] *************
2026-02-08 05:28:11.162736 | orchestrator | Sunday 08 February 2026 05:28:06 +0000 (0:00:02.954) 0:04:21.794 *******
2026-02-08 05:28:11.162749 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-spicehtml5proxy)
2026-02-08 05:28:11.162763 | orchestrator |
2026-02-08 05:28:11.162803 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-spicehtml5proxy haproxy config] ***
2026-02-08 05:28:11.162816 | orchestrator | Sunday 08 February 2026 05:28:07 +0000 (0:00:01.318) 0:04:23.113 *******
2026-02-08 05:28:11.162831 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2026-02-08 05:28:11.162844 | orchestrator | skipping:
[testbed-node-0]
2026-02-08 05:28:11.162857 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2026-02-08 05:28:11.162880 | orchestrator | skipping: [testbed-node-1]
2026-02-08 05:28:11.162893 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2026-02-08 05:28:11.162906 | orchestrator | skipping: [testbed-node-2]
2026-02-08 05:28:11.162919 | orchestrator |
2026-02-08 05:28:11.162932 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-spicehtml5proxy when using single external frontend] ***
2026-02-08 05:28:11.162944 | orchestrator | Sunday 08 February 2026 05:28:09 +0000 (0:00:01.496) 0:04:24.610 *******
2026-02-08 05:28:11.162956 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external':
{'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2026-02-08 05:28:11.162967 | orchestrator | skipping: [testbed-node-0]
2026-02-08 05:28:11.162978 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2026-02-08 05:28:11.162989 | orchestrator | skipping: [testbed-node-1]
2026-02-08 05:28:11.163014 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2026-02-08 05:28:37.277456 | orchestrator | skipping: [testbed-node-2]
2026-02-08 05:28:37.277563 | orchestrator |
2026-02-08 05:28:37.277576 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-spicehtml5proxy] ***
2026-02-08 05:28:37.277585 | orchestrator | Sunday 08 February 2026 05:28:11 +0000 (0:00:01.831) 0:04:26.441 *******
2026-02-08 05:28:37.277593 | orchestrator | skipping: [testbed-node-0]
2026-02-08 05:28:37.277601 | orchestrator | skipping: [testbed-node-1]
2026-02-08 05:28:37.277608 | orchestrator
| skipping: [testbed-node-2]
2026-02-08 05:28:37.277615 | orchestrator |
2026-02-08 05:28:37.277623 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] **********
2026-02-08 05:28:37.277630 | orchestrator | Sunday 08 February 2026 05:28:12 +0000 (0:00:01.784) 0:04:28.225 *******
2026-02-08 05:28:37.277638 | orchestrator | ok: [testbed-node-0]
2026-02-08 05:28:37.277646 | orchestrator | ok: [testbed-node-1]
2026-02-08 05:28:37.277653 | orchestrator | ok: [testbed-node-2]
2026-02-08 05:28:37.277660 | orchestrator |
2026-02-08 05:28:37.277667 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] **********
2026-02-08 05:28:37.277696 | orchestrator | Sunday 08 February 2026 05:28:15 +0000 (0:00:02.626) 0:04:30.852 *******
2026-02-08 05:28:37.277704 | orchestrator | ok: [testbed-node-0]
2026-02-08 05:28:37.277711 | orchestrator | ok: [testbed-node-1]
2026-02-08 05:28:37.277718 | orchestrator | ok: [testbed-node-2]
2026-02-08 05:28:37.277725 | orchestrator |
2026-02-08 05:28:37.277732 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-serialproxy] *****************
2026-02-08 05:28:37.277739 | orchestrator | Sunday 08 February 2026 05:28:19 +0000 (0:00:03.509) 0:04:34.361 *******
2026-02-08 05:28:37.277747 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-serialproxy)
2026-02-08 05:28:37.277756 | orchestrator |
2026-02-08 05:28:37.277831 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-serialproxy haproxy config] ***
2026-02-08 05:28:37.277841 | orchestrator | Sunday 08 February 2026 05:28:20 +0000 (0:00:01.606) 0:04:35.967 *******
2026-02-08 05:28:37.277851 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http',
'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})
2026-02-08 05:28:37.277860 | orchestrator | skipping: [testbed-node-0]
2026-02-08 05:28:37.277869 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})
2026-02-08 05:28:37.277876 | orchestrator | skipping: [testbed-node-1]
2026-02-08 05:28:37.277884 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})
2026-02-08 05:28:37.277895 | orchestrator | skipping: [testbed-node-2]
2026-02-08 05:28:37.277903 | orchestrator |
2026-02-08 05:28:37.277910 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-serialproxy when using single external frontend] ***
2026-02-08 05:28:37.277919 | orchestrator | Sunday 08 February 2026 05:28:22 +0000 (0:00:01.438) 0:04:37.406 *******
2026-02-08 05:28:37.277927 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})
2026-02-08 05:28:37.277934 | orchestrator | skipping: [testbed-node-0]
2026-02-08 05:28:37.277970 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})
2026-02-08 05:28:37.277986 | orchestrator | skipping: [testbed-node-1]
2026-02-08 05:28:37.277995 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})
2026-02-08 05:28:37.278003 | orchestrator | skipping: [testbed-node-2]
2026-02-08 05:28:37.278012 | orchestrator |
2026-02-08 05:28:37.278071 | orchestrator
| TASK [haproxy-config : Configuring firewall for nova-cell:nova-serialproxy] ****
2026-02-08 05:28:37.278080 | orchestrator | Sunday 08 February 2026 05:28:23 +0000 (0:00:01.521) 0:04:38.927 *******
2026-02-08 05:28:37.278089 | orchestrator | skipping: [testbed-node-0]
2026-02-08 05:28:37.278097 | orchestrator | skipping: [testbed-node-1]
2026-02-08 05:28:37.278106 | orchestrator | skipping: [testbed-node-2]
2026-02-08 05:28:37.278114 | orchestrator |
2026-02-08 05:28:37.278122 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] **********
2026-02-08 05:28:37.278131 | orchestrator | Sunday 08 February 2026 05:28:25 +0000 (0:00:02.147) 0:04:41.075 *******
2026-02-08 05:28:37.278140 | orchestrator | ok: [testbed-node-0]
2026-02-08 05:28:37.278148 | orchestrator | ok: [testbed-node-1]
2026-02-08 05:28:37.278157 | orchestrator | ok: [testbed-node-2]
2026-02-08 05:28:37.278165 | orchestrator |
2026-02-08 05:28:37.278174 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] **********
2026-02-08 05:28:37.278183 | orchestrator | Sunday 08 February 2026 05:28:28 +0000 (0:00:02.462) 0:04:43.538 *******
2026-02-08 05:28:37.278191 | orchestrator | ok: [testbed-node-0]
2026-02-08 05:28:37.278199 | orchestrator | ok: [testbed-node-1]
2026-02-08 05:28:37.278207 | orchestrator | ok: [testbed-node-2]
2026-02-08 05:28:37.278215 | orchestrator |
2026-02-08 05:28:37.278224 | orchestrator | TASK [include_role : octavia] **************************************************
2026-02-08 05:28:37.278232 | orchestrator | Sunday 08 February 2026 05:28:31 +0000 (0:00:03.680) 0:04:47.219 *******
2026-02-08 05:28:37.278240 | orchestrator | included: octavia for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-08 05:28:37.278249 | orchestrator |
2026-02-08 05:28:37.278257 | orchestrator | TASK [haproxy-config : Copying over octavia haproxy config] ********************
2026-02-08 05:28:37.278265 | orchestrator | Sunday 08
February 2026 05:28:33 +0000 (0:00:01.688) 0:04:48.907 *******
2026-02-08 05:28:37.278276 | orchestrator | ok: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-api:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-02-08 05:28:37.278287 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-driver-agent:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-02-08 05:28:37.278313 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-health-manager:16.0.1.20251208', 'volumes':
['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-02-08 05:28:37.423423 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-housekeeping:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-02-08 05:28:37.423547 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-worker:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-02-08 05:28:37.423573 | orchestrator | ok: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-api:16.0.1.20251208', 'volumes':
['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-02-08 05:28:37.423595 | orchestrator | ok: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-api:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-02-08 05:28:37.423641 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image':
'registry.osism.tech/kolla/release/2025.1/octavia-driver-agent:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-02-08 05:28:37.423700 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-health-manager:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-02-08 05:28:37.423722 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-driver-agent:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-02-08 05:28:37.423740 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-housekeeping:16.0.1.20251208', 'volumes': 
['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-02-08 05:28:37.423759 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-health-manager:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-02-08 05:28:37.423807 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-worker:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-02-08 05:28:37.423840 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/octavia-housekeeping:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-02-08 05:28:37.423865 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-worker:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-02-08 05:28:37.423884 | orchestrator | 2026-02-08 05:28:37.423919 | orchestrator | TASK [haproxy-config : Add configuration for octavia when using single external frontend] *** 2026-02-08 05:28:38.071912 | orchestrator | Sunday 08 February 2026 05:28:37 +0000 (0:00:03.801) 0:04:52.709 ******* 2026-02-08 05:28:38.072018 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-api:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-02-08 05:28:38.072040 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-driver-agent:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-02-08 05:28:38.072054 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-health-manager:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-02-08 05:28:38.072067 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/octavia-housekeeping:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-02-08 05:28:38.072104 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-worker:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-02-08 05:28:38.072116 | orchestrator | skipping: [testbed-node-0] 2026-02-08 05:28:38.072152 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-api:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 
'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-02-08 05:28:38.072165 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-driver-agent:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-02-08 05:28:38.072177 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-health-manager:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-02-08 05:28:38.072189 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-housekeeping:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-02-08 05:28:38.072251 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-worker:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-02-08 05:28:38.072264 | orchestrator | skipping: [testbed-node-1] 2026-02-08 05:28:38.072281 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-api:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-02-08 05:28:38.072303 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-driver-agent:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-02-08 05:28:51.264774 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-health-manager:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-02-08 05:28:51.265034 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-housekeeping:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-02-08 05:28:51.265064 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': 
{'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-worker:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-02-08 05:28:51.265115 | orchestrator | skipping: [testbed-node-2] 2026-02-08 05:28:51.265138 | orchestrator | 2026-02-08 05:28:51.265157 | orchestrator | TASK [haproxy-config : Configuring firewall for octavia] *********************** 2026-02-08 05:28:51.265176 | orchestrator | Sunday 08 February 2026 05:28:38 +0000 (0:00:00.796) 0:04:53.506 ******* 2026-02-08 05:28:51.265194 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-02-08 05:28:51.265212 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-02-08 05:28:51.265232 | orchestrator | skipping: [testbed-node-0] 2026-02-08 05:28:51.265249 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-02-08 05:28:51.265267 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-02-08 
05:28:51.265284 | orchestrator | skipping: [testbed-node-1] 2026-02-08 05:28:51.265302 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-02-08 05:28:51.265340 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-02-08 05:28:51.265358 | orchestrator | skipping: [testbed-node-2] 2026-02-08 05:28:51.265375 | orchestrator | 2026-02-08 05:28:51.265393 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL users config] ************ 2026-02-08 05:28:51.265410 | orchestrator | Sunday 08 February 2026 05:28:39 +0000 (0:00:01.685) 0:04:55.192 ******* 2026-02-08 05:28:51.265427 | orchestrator | ok: [testbed-node-0] 2026-02-08 05:28:51.265444 | orchestrator | ok: [testbed-node-1] 2026-02-08 05:28:51.265460 | orchestrator | ok: [testbed-node-2] 2026-02-08 05:28:51.265477 | orchestrator | 2026-02-08 05:28:51.265493 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL rules config] ************ 2026-02-08 05:28:51.265509 | orchestrator | Sunday 08 February 2026 05:28:41 +0000 (0:00:01.279) 0:04:56.471 ******* 2026-02-08 05:28:51.265525 | orchestrator | ok: [testbed-node-0] 2026-02-08 05:28:51.265540 | orchestrator | ok: [testbed-node-1] 2026-02-08 05:28:51.265580 | orchestrator | ok: [testbed-node-2] 2026-02-08 05:28:51.265597 | orchestrator | 2026-02-08 05:28:51.265612 | orchestrator | TASK [include_role : opensearch] *********************************************** 2026-02-08 05:28:51.265629 | orchestrator | Sunday 08 February 2026 05:28:43 +0000 (0:00:02.294) 0:04:58.765 ******* 2026-02-08 05:28:51.265646 | orchestrator | included: opensearch for testbed-node-0, testbed-node-1, 
testbed-node-2 2026-02-08 05:28:51.265662 | orchestrator | 2026-02-08 05:28:51.265678 | orchestrator | TASK [haproxy-config : Copying over opensearch haproxy config] ***************** 2026-02-08 05:28:51.265694 | orchestrator | Sunday 08 February 2026 05:28:45 +0000 (0:00:01.766) 0:05:00.532 ******* 2026-02-08 05:28:51.265713 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-02-08 05:28:51.265750 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-02-08 05:28:51.265769 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-02-08 05:28:51.265880 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 
'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-02-08 05:28:52.347924 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-02-08 05:28:52.348051 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': 
['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-02-08 05:28:52.348068 | orchestrator | 2026-02-08 05:28:52.348083 | orchestrator | TASK [haproxy-config : Add configuration for opensearch when using single external frontend] *** 2026-02-08 05:28:52.348095 | orchestrator | Sunday 08 February 2026 05:28:51 +0000 (0:00:06.011) 0:05:06.543 ******* 2026-02-08 05:28:52.348123 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 
'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-02-08 05:28:52.348156 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-02-08 05:28:52.348177 | orchestrator | skipping: [testbed-node-0] 2026-02-08 05:28:52.348190 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-02-08 05:28:52.348203 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-02-08 05:28:52.348215 | orchestrator | skipping: [testbed-node-1] 2026-02-08 05:28:52.348233 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 
'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-02-08 05:28:52.348256 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-02-08 05:29:00.331334 | orchestrator | skipping: [testbed-node-2] 2026-02-08 05:29:00.331445 | orchestrator | 2026-02-08 05:29:00.331464 | orchestrator | TASK [haproxy-config : 
Configuring firewall for opensearch] ******************** 2026-02-08 05:29:00.331478 | orchestrator | Sunday 08 February 2026 05:28:52 +0000 (0:00:01.088) 0:05:07.632 ******* 2026-02-08 05:29:00.331491 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}})  2026-02-08 05:29:00.331506 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}})  2026-02-08 05:29:00.331521 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}})  2026-02-08 05:29:00.331534 | orchestrator | skipping: [testbed-node-0] 2026-02-08 05:29:00.331546 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}})  2026-02-08 05:29:00.331557 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}})  2026-02-08 05:29:00.331569 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}})  2026-02-08 05:29:00.331580 | orchestrator | skipping: [testbed-node-1] 2026-02-08 05:29:00.331592 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}})  2026-02-08 05:29:00.331620 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}})  2026-02-08 05:29:00.331632 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}})  2026-02-08 05:29:00.331644 | orchestrator | skipping: [testbed-node-2] 2026-02-08 05:29:00.331678 | orchestrator | 2026-02-08 05:29:00.331691 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL users config] ********* 2026-02-08 05:29:00.331702 | orchestrator | Sunday 08 February 2026 05:28:53 +0000 (0:00:01.424) 0:05:09.057 ******* 2026-02-08 05:29:00.331713 | orchestrator | skipping: [testbed-node-0] 2026-02-08 05:29:00.331725 | orchestrator | skipping: [testbed-node-1] 2026-02-08 05:29:00.331736 | orchestrator | skipping: [testbed-node-2] 2026-02-08 05:29:00.331746 | orchestrator | 2026-02-08 05:29:00.331757 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL rules config] ********* 2026-02-08 05:29:00.331768 | orchestrator | Sunday 08 
February 2026 05:28:54 +0000 (0:00:00.537) 0:05:09.594 ******* 2026-02-08 05:29:00.331779 | orchestrator | skipping: [testbed-node-0] 2026-02-08 05:29:00.331849 | orchestrator | skipping: [testbed-node-1] 2026-02-08 05:29:00.331862 | orchestrator | skipping: [testbed-node-2] 2026-02-08 05:29:00.331874 | orchestrator | 2026-02-08 05:29:00.331887 | orchestrator | TASK [include_role : prometheus] *********************************************** 2026-02-08 05:29:00.331900 | orchestrator | Sunday 08 February 2026 05:28:55 +0000 (0:00:01.575) 0:05:11.170 ******* 2026-02-08 05:29:00.331914 | orchestrator | included: prometheus for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-08 05:29:00.331927 | orchestrator | 2026-02-08 05:29:00.331939 | orchestrator | TASK [haproxy-config : Copying over prometheus haproxy config] ***************** 2026-02-08 05:29:00.331952 | orchestrator | Sunday 08 February 2026 05:28:57 +0000 (0:00:01.772) 0:05:12.942 ******* 2026-02-08 05:29:00.331989 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-server:3.2.1.20251208', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option 
httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}}) 2026-02-08 05:29:00.332008 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20251208', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-08 05:29:00.332024 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-mysqld-exporter:0.16.0.20251208', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-08 05:29:00.332045 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-server:3.2.1.20251208', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic 
aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}}) 2026-02-08 05:29:00.332070 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-memcached-exporter:0.15.0.20251208', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-08 05:29:00.332093 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20251208', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-08 05:29:02.163195 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20251208', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-08 05:29:02.163268 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-mysqld-exporter:0.16.0.20251208', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-08 05:29:02.163276 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-memcached-exporter:0.15.0.20251208', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-08 05:29:02.163280 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20251208', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-08 05:29:02.163312 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-server', 
'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-server:3.2.1.20251208', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}}) 2026-02-08 05:29:02.163317 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20251208', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-08 05:29:02.163338 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-mysqld-exporter:0.16.0.20251208', 'volumes': 
['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-08 05:29:02.163343 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-memcached-exporter:0.15.0.20251208', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-08 05:29:02.163347 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20251208', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-08 05:29:02.163354 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-alertmanager:0.28.1.20251208', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 
'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}}) 2026-02-08 05:29:02.163363 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-openstack-exporter:1.7.0.20251208', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}}}})  2026-02-08 05:29:02.163368 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-elasticsearch-exporter:1.8.0.20251208', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-08 05:29:02.163376 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-blackbox-exporter:0.25.0.20251208', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-08 05:29:03.607518 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-libvirt-exporter:2.2.0.20251208', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-02-08 05:29:03.607623 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-alertmanager:0.28.1.20251208', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}}) 2026-02-08 05:29:03.607679 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-openstack-exporter:1.7.0.20251208', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}}}})  2026-02-08 05:29:03.607694 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-elasticsearch-exporter:1.8.0.20251208', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-08 05:29:03.607707 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 
'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-blackbox-exporter:0.25.0.20251208', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-08 05:29:03.607719 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-libvirt-exporter:2.2.0.20251208', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-02-08 05:29:03.607751 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-alertmanager:0.28.1.20251208', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': 
['option httpchk']}}}}) 2026-02-08 05:29:03.607765 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-openstack-exporter:1.7.0.20251208', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}}}})  2026-02-08 05:29:03.607876 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-elasticsearch-exporter:1.8.0.20251208', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-08 05:29:03.607893 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-blackbox-exporter:0.25.0.20251208', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-08 05:29:03.607905 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-libvirt-exporter:2.2.0.20251208', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-02-08 05:29:03.607924 | orchestrator | 2026-02-08 05:29:03.607945 | orchestrator | TASK [haproxy-config : Add configuration for prometheus when using single external frontend] *** 2026-02-08 05:29:03.607963 | orchestrator | Sunday 08 February 2026 05:29:02 +0000 (0:00:04.987) 0:05:17.929 ******* 2026-02-08 05:29:03.607992 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-server:3.2.1.20251208', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option 
httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}})  2026-02-08 05:29:03.760921 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20251208', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-08 05:29:03.761045 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-mysqld-exporter:0.16.0.20251208', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-08 05:29:03.761079 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-memcached-exporter:0.15.0.20251208', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-08 05:29:03.761094 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20251208', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-08 05:29:03.761108 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-alertmanager:0.28.1.20251208', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}})  2026-02-08 05:29:03.761138 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-openstack-exporter:1.7.0.20251208', 'volumes': 
['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}}}})  2026-02-08 05:29:03.761168 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-elasticsearch-exporter:1.8.0.20251208', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-08 05:29:03.761179 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-blackbox-exporter:0.25.0.20251208', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-08 05:29:03.761196 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/prometheus-libvirt-exporter:2.2.0.20251208', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-02-08 05:29:03.761209 | orchestrator | skipping: [testbed-node-0] 2026-02-08 05:29:03.761223 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-server:3.2.1.20251208', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}})  2026-02-08 05:29:03.761255 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20251208', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-08 05:29:03.761279 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-mysqld-exporter:0.16.0.20251208', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-08 05:29:03.761329 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-memcached-exporter:0.15.0.20251208', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-08 05:29:04.267348 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20251208', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-08 05:29:04.267461 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 
'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-alertmanager:0.28.1.20251208', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}})  2026-02-08 05:29:04.267479 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-server:3.2.1.20251208', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 
'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}})  2026-02-08 05:29:04.267492 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-openstack-exporter:1.7.0.20251208', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}}}})  2026-02-08 05:29:04.267540 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20251208', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-08 05:29:04.267552 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/prometheus-elasticsearch-exporter:1.8.0.20251208', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-08 05:29:04.267568 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-mysqld-exporter:0.16.0.20251208', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-08 05:29:04.267579 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-blackbox-exporter:0.25.0.20251208', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-08 05:29:04.267590 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-memcached-exporter:0.15.0.20251208', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-08 05:29:04.267600 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-libvirt-exporter:2.2.0.20251208', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-02-08 05:29:04.267611 | orchestrator | skipping: [testbed-node-1] 2026-02-08 05:29:04.267623 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20251208', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-08 05:29:04.267650 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-alertmanager:0.28.1.20251208', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 
'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}})  2026-02-08 05:29:12.282902 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-openstack-exporter:1.7.0.20251208', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}}}})  2026-02-08 05:29:12.283038 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-elasticsearch-exporter:1.8.0.20251208', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-08 05:29:12.283059 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 
'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-blackbox-exporter:0.25.0.20251208', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-08 05:29:12.283072 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-libvirt-exporter:2.2.0.20251208', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-02-08 05:29:12.283085 | orchestrator | skipping: [testbed-node-2] 2026-02-08 05:29:12.283099 | orchestrator | 2026-02-08 05:29:12.283111 | orchestrator | TASK [haproxy-config : Configuring firewall for prometheus] ******************** 2026-02-08 05:29:12.283145 | orchestrator | Sunday 08 February 2026 05:29:04 +0000 (0:00:01.624) 0:05:19.554 ******* 2026-02-08 05:29:12.283158 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}})  2026-02-08 05:29:12.283172 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option 
httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}})  2026-02-08 05:29:12.283186 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}})  2026-02-08 05:29:12.283218 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}})  2026-02-08 05:29:12.283232 | orchestrator | skipping: [testbed-node-0] 2026-02-08 05:29:12.283276 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}})  2026-02-08 05:29:12.283296 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}})  2026-02-08 05:29:12.283309 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 
'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}})  2026-02-08 05:29:12.283320 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}})  2026-02-08 05:29:12.283332 | orchestrator | skipping: [testbed-node-1] 2026-02-08 05:29:12.283343 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}})  2026-02-08 05:29:12.283357 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}})  2026-02-08 05:29:12.283380 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}})  2026-02-08 05:29:12.283395 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}})  2026-02-08 05:29:12.283408 | orchestrator | skipping: [testbed-node-2] 2026-02-08 05:29:12.283421 | orchestrator | 2026-02-08 05:29:12.283435 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL users config] ********* 2026-02-08 05:29:12.283448 | orchestrator | Sunday 08 February 2026 05:29:05 +0000 (0:00:01.106) 0:05:20.661 ******* 2026-02-08 05:29:12.283461 | orchestrator | skipping: [testbed-node-0] 2026-02-08 05:29:12.283475 | orchestrator | skipping: [testbed-node-1] 2026-02-08 05:29:12.283488 | orchestrator | skipping: [testbed-node-2] 2026-02-08 05:29:12.283500 | orchestrator | 2026-02-08 05:29:12.283513 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL rules config] ********* 2026-02-08 05:29:12.283525 | orchestrator | Sunday 08 February 2026 05:29:05 +0000 (0:00:00.494) 0:05:21.156 ******* 2026-02-08 05:29:12.283538 | orchestrator | skipping: [testbed-node-0] 2026-02-08 05:29:12.283551 | orchestrator | skipping: [testbed-node-1] 2026-02-08 05:29:12.283563 | orchestrator | skipping: [testbed-node-2] 2026-02-08 05:29:12.283577 | orchestrator | 2026-02-08 05:29:12.283590 | orchestrator | TASK [include_role : rabbitmq] ************************************************* 2026-02-08 05:29:12.283603 | orchestrator | Sunday 08 February 2026 05:29:07 +0000 (0:00:01.573) 0:05:22.729 ******* 2026-02-08 05:29:12.283616 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-08 05:29:12.283628 | orchestrator | 2026-02-08 05:29:12.283641 | orchestrator | TASK [haproxy-config : Copying over rabbitmq haproxy config] ******************* 2026-02-08 05:29:12.283655 | orchestrator | Sunday 08 February 2026 05:29:09 +0000 (0:00:01.894) 0:05:24.623 ******* 2026-02-08 
05:29:12.283683 | orchestrator | ok: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-08 05:29:25.517735 | orchestrator | ok: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': 
'15672', 'host_group': 'rabbitmq'}}}}) 2026-02-08 05:29:25.517891 | orchestrator | ok: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-08 05:29:25.517902 | orchestrator | 2026-02-08 05:29:25.517909 | orchestrator | TASK [haproxy-config : Add configuration for rabbitmq when using single external frontend] *** 2026-02-08 05:29:25.517915 | orchestrator | Sunday 08 February 2026 05:29:12 +0000 (0:00:02.944) 0:05:27.568 ******* 2026-02-08 05:29:25.517921 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-02-08 05:29:25.517926 | orchestrator | skipping: [testbed-node-0] 2026-02-08 05:29:25.517954 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-02-08 05:29:25.517960 | orchestrator | skipping: [testbed-node-1] 2026-02-08 05:29:25.517965 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 
'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-02-08 05:29:25.517974 | orchestrator | skipping: [testbed-node-2] 2026-02-08 05:29:25.517979 | orchestrator | 2026-02-08 05:29:25.517984 | orchestrator | TASK [haproxy-config : Configuring firewall for rabbitmq] ********************** 2026-02-08 05:29:25.517988 | orchestrator | Sunday 08 February 2026 05:29:13 +0000 (0:00:00.906) 0:05:28.474 ******* 2026-02-08 05:29:25.517994 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2026-02-08 05:29:25.518000 | orchestrator | skipping: [testbed-node-0] 2026-02-08 05:29:25.518005 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2026-02-08 05:29:25.518009 | orchestrator | skipping: [testbed-node-1] 2026-02-08 05:29:25.518014 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2026-02-08 05:29:25.518059 | orchestrator | skipping: [testbed-node-2] 2026-02-08 05:29:25.518064 | orchestrator | 2026-02-08 05:29:25.518069 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL users config] *********** 2026-02-08 05:29:25.518074 | orchestrator | 
Sunday 08 February 2026 05:29:13 +0000 (0:00:00.711) 0:05:29.186 ******* 2026-02-08 05:29:25.518078 | orchestrator | skipping: [testbed-node-0] 2026-02-08 05:29:25.518083 | orchestrator | skipping: [testbed-node-1] 2026-02-08 05:29:25.518087 | orchestrator | skipping: [testbed-node-2] 2026-02-08 05:29:25.518092 | orchestrator | 2026-02-08 05:29:25.518097 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL rules config] *********** 2026-02-08 05:29:25.518101 | orchestrator | Sunday 08 February 2026 05:29:14 +0000 (0:00:00.516) 0:05:29.702 ******* 2026-02-08 05:29:25.518106 | orchestrator | skipping: [testbed-node-0] 2026-02-08 05:29:25.518111 | orchestrator | skipping: [testbed-node-1] 2026-02-08 05:29:25.518115 | orchestrator | skipping: [testbed-node-2] 2026-02-08 05:29:25.518120 | orchestrator | 2026-02-08 05:29:25.518124 | orchestrator | TASK [include_role : skyline] ************************************************** 2026-02-08 05:29:25.518129 | orchestrator | Sunday 08 February 2026 05:29:16 +0000 (0:00:02.048) 0:05:31.751 ******* 2026-02-08 05:29:25.518134 | orchestrator | included: skyline for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-08 05:29:25.518138 | orchestrator | 2026-02-08 05:29:25.518143 | orchestrator | TASK [haproxy-config : Copying over skyline haproxy config] ******************** 2026-02-08 05:29:25.518148 | orchestrator | Sunday 08 February 2026 05:29:18 +0000 (0:00:01.543) 0:05:33.294 ******* 2026-02-08 05:29:25.518156 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-apiserver:6.0.1.20251208', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}}) 2026-02-08 05:29:25.518172 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-apiserver:6.0.1.20251208', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}}) 2026-02-08 05:29:26.237119 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-apiserver:6.0.1.20251208', 'volumes': 
['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}}) 2026-02-08 05:29:26.237221 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-console:6.0.1.20251208', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-02-08 05:29:26.237255 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 
'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-console:6.0.1.20251208', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-02-08 05:29:26.237309 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-console:6.0.1.20251208', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET 
/']}}}}) 2026-02-08 05:29:26.237324 | orchestrator | 2026-02-08 05:29:26.237337 | orchestrator | TASK [haproxy-config : Add configuration for skyline when using single external frontend] *** 2026-02-08 05:29:26.237350 | orchestrator | Sunday 08 February 2026 05:29:25 +0000 (0:00:07.502) 0:05:40.796 ******* 2026-02-08 05:29:26.237362 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-apiserver:6.0.1.20251208', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}})  2026-02-08 05:29:26.237375 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-console:6.0.1.20251208', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-02-08 05:29:26.237394 | orchestrator | skipping: [testbed-node-0] 2026-02-08 05:29:26.237412 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-apiserver:6.0.1.20251208', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}})  2026-02-08 05:29:26.237434 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-console:6.0.1.20251208', 'volumes': 
['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-02-08 05:29:38.719557 | orchestrator | skipping: [testbed-node-1] 2026-02-08 05:29:38.719671 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-apiserver:6.0.1.20251208', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}})  2026-02-08 05:29:38.719691 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-console:6.0.1.20251208', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-02-08 05:29:38.719725 | orchestrator | skipping: [testbed-node-2] 2026-02-08 05:29:38.719735 | orchestrator | 2026-02-08 05:29:38.719746 | orchestrator | TASK [haproxy-config : Configuring firewall for skyline] *********************** 2026-02-08 05:29:38.719756 | orchestrator | Sunday 08 February 2026 05:29:26 +0000 (0:00:00.725) 0:05:41.521 ******* 2026-02-08 05:29:38.719767 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}})  2026-02-08 05:29:38.719779 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}})  
2026-02-08 05:29:38.719790 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-02-08 05:29:38.719847 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-02-08 05:29:38.719859 | orchestrator | skipping: [testbed-node-0] 2026-02-08 05:29:38.719868 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}})  2026-02-08 05:29:38.719877 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}})  2026-02-08 05:29:38.719903 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-02-08 05:29:38.719913 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-02-08 05:29:38.719962 | orchestrator | skipping: [testbed-node-1] 
2026-02-08 05:29:38.719972 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}})  2026-02-08 05:29:38.719981 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}})  2026-02-08 05:29:38.719990 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-02-08 05:29:38.720008 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-02-08 05:29:38.720017 | orchestrator | skipping: [testbed-node-2] 2026-02-08 05:29:38.720024 | orchestrator | 2026-02-08 05:29:38.720033 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL users config] ************ 2026-02-08 05:29:38.720042 | orchestrator | Sunday 08 February 2026 05:29:27 +0000 (0:00:01.034) 0:05:42.556 ******* 2026-02-08 05:29:38.720050 | orchestrator | ok: [testbed-node-0] 2026-02-08 05:29:38.720059 | orchestrator | ok: [testbed-node-1] 2026-02-08 05:29:38.720066 | orchestrator | ok: [testbed-node-2] 2026-02-08 05:29:38.720075 | orchestrator | 2026-02-08 05:29:38.720084 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL rules config] ************ 2026-02-08 05:29:38.720092 | orchestrator 
| Sunday 08 February 2026 05:29:28 +0000 (0:00:01.731) 0:05:44.287 ******* 2026-02-08 05:29:38.720100 | orchestrator | ok: [testbed-node-0] 2026-02-08 05:29:38.720108 | orchestrator | ok: [testbed-node-1] 2026-02-08 05:29:38.720116 | orchestrator | ok: [testbed-node-2] 2026-02-08 05:29:38.720124 | orchestrator | 2026-02-08 05:29:38.720133 | orchestrator | TASK [include_role : tacker] *************************************************** 2026-02-08 05:29:38.720141 | orchestrator | Sunday 08 February 2026 05:29:31 +0000 (0:00:02.511) 0:05:46.799 ******* 2026-02-08 05:29:38.720149 | orchestrator | skipping: [testbed-node-0] 2026-02-08 05:29:38.720158 | orchestrator | skipping: [testbed-node-1] 2026-02-08 05:29:38.720172 | orchestrator | skipping: [testbed-node-2] 2026-02-08 05:29:38.720180 | orchestrator | 2026-02-08 05:29:38.720188 | orchestrator | TASK [include_role : trove] **************************************************** 2026-02-08 05:29:38.720196 | orchestrator | Sunday 08 February 2026 05:29:31 +0000 (0:00:00.388) 0:05:47.188 ******* 2026-02-08 05:29:38.720204 | orchestrator | skipping: [testbed-node-0] 2026-02-08 05:29:38.720213 | orchestrator | skipping: [testbed-node-1] 2026-02-08 05:29:38.720221 | orchestrator | skipping: [testbed-node-2] 2026-02-08 05:29:38.720229 | orchestrator | 2026-02-08 05:29:38.720237 | orchestrator | TASK [include_role : venus] **************************************************** 2026-02-08 05:29:38.720245 | orchestrator | Sunday 08 February 2026 05:29:32 +0000 (0:00:00.383) 0:05:47.572 ******* 2026-02-08 05:29:38.720253 | orchestrator | skipping: [testbed-node-0] 2026-02-08 05:29:38.720262 | orchestrator | skipping: [testbed-node-1] 2026-02-08 05:29:38.720270 | orchestrator | skipping: [testbed-node-2] 2026-02-08 05:29:38.720278 | orchestrator | 2026-02-08 05:29:38.720286 | orchestrator | TASK [include_role : watcher] ************************************************** 2026-02-08 05:29:38.720294 | orchestrator | Sunday 08 February 
2026 05:29:32 +0000 (0:00:00.686) 0:05:48.258 ******* 2026-02-08 05:29:38.720302 | orchestrator | skipping: [testbed-node-0] 2026-02-08 05:29:38.720310 | orchestrator | skipping: [testbed-node-1] 2026-02-08 05:29:38.720317 | orchestrator | skipping: [testbed-node-2] 2026-02-08 05:29:38.720326 | orchestrator | 2026-02-08 05:29:38.720336 | orchestrator | TASK [include_role : zun] ****************************************************** 2026-02-08 05:29:38.720345 | orchestrator | Sunday 08 February 2026 05:29:33 +0000 (0:00:00.357) 0:05:48.616 ******* 2026-02-08 05:29:38.720353 | orchestrator | skipping: [testbed-node-0] 2026-02-08 05:29:38.720361 | orchestrator | skipping: [testbed-node-1] 2026-02-08 05:29:38.720369 | orchestrator | skipping: [testbed-node-2] 2026-02-08 05:29:38.720378 | orchestrator | 2026-02-08 05:29:38.720387 | orchestrator | TASK [include_role : loadbalancer] ********************************************* 2026-02-08 05:29:38.720395 | orchestrator | Sunday 08 February 2026 05:29:33 +0000 (0:00:00.349) 0:05:48.965 ******* 2026-02-08 05:29:38.720404 | orchestrator | included: loadbalancer for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-08 05:29:38.720414 | orchestrator | 2026-02-08 05:29:38.720423 | orchestrator | TASK [service-check-containers : loadbalancer | Check containers] ************** 2026-02-08 05:29:38.720430 | orchestrator | Sunday 08 February 2026 05:29:35 +0000 (0:00:01.994) 0:05:50.960 ******* 2026-02-08 05:29:38.720462 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-02-08 05:29:41.415227 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-02-08 05:29:41.415406 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-02-08 05:29:41.415461 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-08 05:29:41.415485 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-08 05:29:41.415502 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-08 05:29:41.415542 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-08 05:29:41.415577 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-08 05:29:41.415590 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-08 05:29:41.415602 | orchestrator | 2026-02-08 05:29:41.415616 | orchestrator | TASK [service-check-containers : loadbalancer | Notify handlers to restart containers] *** 2026-02-08 05:29:41.415629 | orchestrator | Sunday 08 February 2026 05:29:38 +0000 (0:00:03.042) 0:05:54.002 ******* 2026-02-08 05:29:41.415642 | orchestrator | changed: [testbed-node-0] => { 2026-02-08 05:29:41.415654 | orchestrator |  "msg": "Notifying handlers" 2026-02-08 05:29:41.415667 | orchestrator | } 2026-02-08 05:29:41.415681 | orchestrator | changed: [testbed-node-1] => { 2026-02-08 05:29:41.415693 | orchestrator |  "msg": "Notifying handlers" 2026-02-08 05:29:41.415706 
| orchestrator | } 2026-02-08 05:29:41.415719 | orchestrator | changed: [testbed-node-2] => { 2026-02-08 05:29:41.415732 | orchestrator |  "msg": "Notifying handlers" 2026-02-08 05:29:41.415744 | orchestrator | } 2026-02-08 05:29:41.415756 | orchestrator | 2026-02-08 05:29:41.415770 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-02-08 05:29:41.415784 | orchestrator | Sunday 08 February 2026 05:29:39 +0000 (0:00:00.395) 0:05:54.398 ******* 2026-02-08 05:29:41.415797 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_handler_task_start) in callback 2026-02-08 05:29:41.415846 | orchestrator | plugin (): 'NoneType' object is not subscriptable 2026-02-08 05:29:41.415881 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-02-08 05:29:41.415896 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-08 05:29:41.415919 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-08 05:29:41.415933 | orchestrator | skipping: [testbed-node-0] 2026-02-08 05:29:41.415960 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-02-08 05:31:24.906254 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-08 05:31:24.906402 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-08 05:31:24.906432 | orchestrator | skipping: [testbed-node-1] 2026-02-08 05:31:24.906475 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-02-08 05:31:24.906497 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-08 05:31:24.906548 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-08 05:31:24.906569 | orchestrator | skipping: [testbed-node-2] 2026-02-08 05:31:24.906588 | orchestrator | 2026-02-08 05:31:24.906609 | orchestrator | RUNNING HANDLER [loadbalancer : Check IP addresses on the API interface] ******* 2026-02-08 05:31:24.906630 | orchestrator | Sunday 08 February 2026 05:29:41 +0000 (0:00:02.300) 0:05:56.699 ******* 2026-02-08 05:31:24.906649 | orchestrator | ok: [testbed-node-0] 2026-02-08 05:31:24.906669 | orchestrator | ok: [testbed-node-1] 2026-02-08 05:31:24.906687 | orchestrator | ok: [testbed-node-2] 2026-02-08 05:31:24.906705 | orchestrator | 2026-02-08 05:31:24.906724 | orchestrator | RUNNING HANDLER [loadbalancer : Group HA nodes by status] ********************** 2026-02-08 05:31:24.906742 | orchestrator | Sunday 08 February 2026 05:29:42 +0000 (0:00:01.177) 0:05:57.877 ******* 2026-02-08 05:31:24.906762 | orchestrator | ok: [testbed-node-0] 2026-02-08 05:31:24.906781 | orchestrator | ok: [testbed-node-1] 2026-02-08 05:31:24.906799 | orchestrator | ok: [testbed-node-2] 2026-02-08 05:31:24.906818 | orchestrator | 2026-02-08 05:31:24.906874 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup keepalived container] ************** 2026-02-08 05:31:24.906892 | orchestrator | Sunday 08 February 2026 05:29:42 
+0000 (0:00:00.406) 0:05:58.283 ******* 2026-02-08 05:31:24.906910 | orchestrator | skipping: [testbed-node-0] 2026-02-08 05:31:24.906930 | orchestrator | changed: [testbed-node-1] 2026-02-08 05:31:24.906949 | orchestrator | changed: [testbed-node-2] 2026-02-08 05:31:24.906968 | orchestrator | 2026-02-08 05:31:24.906989 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup haproxy container] ***************** 2026-02-08 05:31:24.907009 | orchestrator | Sunday 08 February 2026 05:29:48 +0000 (0:00:06.004) 0:06:04.288 ******* 2026-02-08 05:31:24.907056 | orchestrator | skipping: [testbed-node-0] 2026-02-08 05:31:24.907076 | orchestrator | changed: [testbed-node-1] 2026-02-08 05:31:24.907089 | orchestrator | changed: [testbed-node-2] 2026-02-08 05:31:24.907102 | orchestrator | 2026-02-08 05:31:24.907115 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup proxysql container] **************** 2026-02-08 05:31:24.907126 | orchestrator | Sunday 08 February 2026 05:29:55 +0000 (0:00:06.059) 0:06:10.348 ******* 2026-02-08 05:31:24.907137 | orchestrator | skipping: [testbed-node-0] 2026-02-08 05:31:24.907148 | orchestrator | changed: [testbed-node-1] 2026-02-08 05:31:24.907159 | orchestrator | changed: [testbed-node-2] 2026-02-08 05:31:24.907170 | orchestrator | 2026-02-08 05:31:24.907181 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup haproxy container] **************** 2026-02-08 05:31:24.907192 | orchestrator | Sunday 08 February 2026 05:30:01 +0000 (0:00:06.430) 0:06:16.778 ******* 2026-02-08 05:31:24.907204 | orchestrator | skipping: [testbed-node-0] 2026-02-08 05:31:24.907215 | orchestrator | changed: [testbed-node-2] 2026-02-08 05:31:24.907225 | orchestrator | changed: [testbed-node-1] 2026-02-08 05:31:24.907236 | orchestrator | 2026-02-08 05:31:24.907247 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup haproxy to start] ************** 2026-02-08 05:31:24.907257 | orchestrator | Sunday 08 February 2026 05:30:07 +0000 
(0:00:06.362) 0:06:23.141 ******* 2026-02-08 05:31:24.907269 | orchestrator | ok: [testbed-node-2] 2026-02-08 05:31:24.907280 | orchestrator | ok: [testbed-node-1] 2026-02-08 05:31:24.907291 | orchestrator | 2026-02-08 05:31:24.907301 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup proxysql container] *************** 2026-02-08 05:31:24.907327 | orchestrator | Sunday 08 February 2026 05:30:11 +0000 (0:00:03.691) 0:06:26.832 ******* 2026-02-08 05:31:24.907338 | orchestrator | skipping: [testbed-node-0] 2026-02-08 05:31:24.907349 | orchestrator | changed: [testbed-node-1] 2026-02-08 05:31:24.907360 | orchestrator | changed: [testbed-node-2] 2026-02-08 05:31:24.907371 | orchestrator | 2026-02-08 05:31:24.907382 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup proxysql to start] ************* 2026-02-08 05:31:24.907393 | orchestrator | Sunday 08 February 2026 05:30:23 +0000 (0:00:12.075) 0:06:38.907 ******* 2026-02-08 05:31:24.907403 | orchestrator | ok: [testbed-node-1] 2026-02-08 05:31:24.907414 | orchestrator | ok: [testbed-node-2] 2026-02-08 05:31:24.907425 | orchestrator | 2026-02-08 05:31:24.907436 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup keepalived container] ************* 2026-02-08 05:31:24.907447 | orchestrator | Sunday 08 February 2026 05:30:27 +0000 (0:00:04.180) 0:06:43.088 ******* 2026-02-08 05:31:24.907458 | orchestrator | skipping: [testbed-node-0] 2026-02-08 05:31:24.907469 | orchestrator | changed: [testbed-node-1] 2026-02-08 05:31:24.907480 | orchestrator | changed: [testbed-node-2] 2026-02-08 05:31:24.907491 | orchestrator | 2026-02-08 05:31:24.907502 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master haproxy container] ***************** 2026-02-08 05:31:24.907512 | orchestrator | Sunday 08 February 2026 05:30:34 +0000 (0:00:06.410) 0:06:49.498 ******* 2026-02-08 05:31:24.907533 | orchestrator | skipping: [testbed-node-1] 2026-02-08 05:31:24.907544 | orchestrator | skipping: 
[testbed-node-2] 2026-02-08 05:31:24.907555 | orchestrator | changed: [testbed-node-0] 2026-02-08 05:31:24.907566 | orchestrator | 2026-02-08 05:31:24.907577 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master proxysql container] **************** 2026-02-08 05:31:24.907587 | orchestrator | Sunday 08 February 2026 05:30:40 +0000 (0:00:05.836) 0:06:55.335 ******* 2026-02-08 05:31:24.907598 | orchestrator | skipping: [testbed-node-1] 2026-02-08 05:31:24.907609 | orchestrator | skipping: [testbed-node-2] 2026-02-08 05:31:24.907620 | orchestrator | changed: [testbed-node-0] 2026-02-08 05:31:24.907631 | orchestrator | 2026-02-08 05:31:24.907642 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master keepalived container] ************** 2026-02-08 05:31:24.907653 | orchestrator | Sunday 08 February 2026 05:30:45 +0000 (0:00:05.843) 0:07:01.178 ******* 2026-02-08 05:31:24.907664 | orchestrator | skipping: [testbed-node-1] 2026-02-08 05:31:24.907675 | orchestrator | skipping: [testbed-node-2] 2026-02-08 05:31:24.907685 | orchestrator | changed: [testbed-node-0] 2026-02-08 05:31:24.907696 | orchestrator | 2026-02-08 05:31:24.907707 | orchestrator | RUNNING HANDLER [loadbalancer : Start master haproxy container] **************** 2026-02-08 05:31:24.907717 | orchestrator | Sunday 08 February 2026 05:30:51 +0000 (0:00:05.879) 0:07:07.057 ******* 2026-02-08 05:31:24.907728 | orchestrator | skipping: [testbed-node-1] 2026-02-08 05:31:24.907739 | orchestrator | skipping: [testbed-node-2] 2026-02-08 05:31:24.907750 | orchestrator | changed: [testbed-node-0] 2026-02-08 05:31:24.907761 | orchestrator | 2026-02-08 05:31:24.907772 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for master haproxy to start] ************** 2026-02-08 05:31:24.907783 | orchestrator | Sunday 08 February 2026 05:30:58 +0000 (0:00:06.444) 0:07:13.502 ******* 2026-02-08 05:31:24.907794 | orchestrator | ok: [testbed-node-0] 2026-02-08 05:31:24.907805 | orchestrator | 2026-02-08 
05:31:24.907815 | orchestrator | RUNNING HANDLER [loadbalancer : Start master proxysql container] *************** 2026-02-08 05:31:24.907848 | orchestrator | Sunday 08 February 2026 05:31:00 +0000 (0:00:02.567) 0:07:16.070 ******* 2026-02-08 05:31:24.907861 | orchestrator | skipping: [testbed-node-1] 2026-02-08 05:31:24.907872 | orchestrator | skipping: [testbed-node-2] 2026-02-08 05:31:24.907883 | orchestrator | changed: [testbed-node-0] 2026-02-08 05:31:24.907894 | orchestrator | 2026-02-08 05:31:24.907905 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for master proxysql to start] ************* 2026-02-08 05:31:24.907916 | orchestrator | Sunday 08 February 2026 05:31:13 +0000 (0:00:12.368) 0:07:28.438 ******* 2026-02-08 05:31:24.907927 | orchestrator | ok: [testbed-node-0] 2026-02-08 05:31:24.907938 | orchestrator | 2026-02-08 05:31:24.907956 | orchestrator | RUNNING HANDLER [loadbalancer : Start master keepalived container] ************* 2026-02-08 05:31:24.907966 | orchestrator | Sunday 08 February 2026 05:31:17 +0000 (0:00:04.619) 0:07:33.058 ******* 2026-02-08 05:31:24.907977 | orchestrator | skipping: [testbed-node-1] 2026-02-08 05:31:24.907988 | orchestrator | skipping: [testbed-node-2] 2026-02-08 05:31:24.907999 | orchestrator | changed: [testbed-node-0] 2026-02-08 05:31:24.908010 | orchestrator | 2026-02-08 05:31:24.908020 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for haproxy to listen on VIP] ************* 2026-02-08 05:31:24.908031 | orchestrator | Sunday 08 February 2026 05:31:23 +0000 (0:00:06.117) 0:07:39.176 ******* 2026-02-08 05:31:24.908042 | orchestrator | ok: [testbed-node-0] 2026-02-08 05:31:24.908053 | orchestrator | ok: [testbed-node-1] 2026-02-08 05:31:24.908064 | orchestrator | ok: [testbed-node-2] 2026-02-08 05:31:24.908075 | orchestrator | 2026-02-08 05:31:24.908086 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for proxysql to listen on VIP] ************ 2026-02-08 05:31:24.908104 | orchestrator | Sunday 08 
February 2026 05:31:24 +0000 (0:00:01.009) 0:07:40.185 ******* 2026-02-08 05:31:27.430993 | orchestrator | ok: [testbed-node-0] 2026-02-08 05:31:27.431060 | orchestrator | ok: [testbed-node-1] 2026-02-08 05:31:27.431067 | orchestrator | ok: [testbed-node-2] 2026-02-08 05:31:27.431074 | orchestrator | 2026-02-08 05:31:27.431080 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-08 05:31:27.431089 | orchestrator | testbed-node-0 : ok=129  changed=29  unreachable=0 failed=0 skipped=94  rescued=0 ignored=0 2026-02-08 05:31:27.431096 | orchestrator | testbed-node-1 : ok=128  changed=28  unreachable=0 failed=0 skipped=94  rescued=0 ignored=0 2026-02-08 05:31:27.431102 | orchestrator | testbed-node-2 : ok=128  changed=28  unreachable=0 failed=0 skipped=94  rescued=0 ignored=0 2026-02-08 05:31:27.431107 | orchestrator | 2026-02-08 05:31:27.431114 | orchestrator | 2026-02-08 05:31:27.431119 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-08 05:31:27.431125 | orchestrator | Sunday 08 February 2026 05:31:26 +0000 (0:00:01.618) 0:07:41.804 ******* 2026-02-08 05:31:27.431130 | orchestrator | =============================================================================== 2026-02-08 05:31:27.431136 | orchestrator | loadbalancer : Start master proxysql container ------------------------- 12.37s 2026-02-08 05:31:27.431141 | orchestrator | loadbalancer : Start backup proxysql container ------------------------- 12.08s 2026-02-08 05:31:27.431147 | orchestrator | haproxy-config : Copying over skyline haproxy config -------------------- 7.50s 2026-02-08 05:31:27.431152 | orchestrator | loadbalancer : Start master haproxy container --------------------------- 6.44s 2026-02-08 05:31:27.431158 | orchestrator | loadbalancer : Stop backup proxysql container --------------------------- 6.43s 2026-02-08 05:31:27.431163 | orchestrator | loadbalancer : Start backup keepalived 
container ------------------------ 6.41s 2026-02-08 05:31:27.431169 | orchestrator | loadbalancer : Start backup haproxy container --------------------------- 6.36s 2026-02-08 05:31:27.431174 | orchestrator | loadbalancer : Start master keepalived container ------------------------ 6.12s 2026-02-08 05:31:27.431179 | orchestrator | loadbalancer : Stop backup haproxy container ---------------------------- 6.06s 2026-02-08 05:31:27.431185 | orchestrator | haproxy-config : Copying over opensearch haproxy config ----------------- 6.01s 2026-02-08 05:31:27.431190 | orchestrator | loadbalancer : Stop backup keepalived container ------------------------- 6.00s 2026-02-08 05:31:27.431202 | orchestrator | haproxy-config : Copying over nova haproxy config ----------------------- 5.90s 2026-02-08 05:31:27.431208 | orchestrator | loadbalancer : Stop master keepalived container ------------------------- 5.88s 2026-02-08 05:31:27.431213 | orchestrator | loadbalancer : Stop master proxysql container --------------------------- 5.84s 2026-02-08 05:31:27.431219 | orchestrator | loadbalancer : Stop master haproxy container ---------------------------- 5.84s 2026-02-08 05:31:27.431224 | orchestrator | haproxy-config : Copying over neutron haproxy config -------------------- 5.09s 2026-02-08 05:31:27.431245 | orchestrator | haproxy-config : Copying over prometheus haproxy config ----------------- 4.99s 2026-02-08 05:31:27.431250 | orchestrator | haproxy-config : Copying over glance haproxy config --------------------- 4.83s 2026-02-08 05:31:27.431256 | orchestrator | haproxy-config : Copying over cinder haproxy config --------------------- 4.65s 2026-02-08 05:31:27.431261 | orchestrator | loadbalancer : Wait for master proxysql to start ------------------------ 4.62s 2026-02-08 05:31:27.770315 | orchestrator | + osism apply -a upgrade opensearch 2026-02-08 05:31:29.887707 | orchestrator | 2026-02-08 05:31:29 | INFO  | Task 864a0931-5975-4f2e-8c0c-7fcfd2a1bf16 (opensearch) was prepared 
for execution. 2026-02-08 05:31:29.887803 | orchestrator | 2026-02-08 05:31:29 | INFO  | It takes a moment until task 864a0931-5975-4f2e-8c0c-7fcfd2a1bf16 (opensearch) has been started and output is visible here. 2026-02-08 05:31:48.423737 | orchestrator | 2026-02-08 05:31:48.423919 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-08 05:31:48.423947 | orchestrator | 2026-02-08 05:31:48.423962 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-08 05:31:48.423976 | orchestrator | Sunday 08 February 2026 05:31:35 +0000 (0:00:01.408) 0:00:01.408 ******* 2026-02-08 05:31:48.423990 | orchestrator | ok: [testbed-node-0] 2026-02-08 05:31:48.424005 | orchestrator | ok: [testbed-node-1] 2026-02-08 05:31:48.424017 | orchestrator | ok: [testbed-node-2] 2026-02-08 05:31:48.424030 | orchestrator | 2026-02-08 05:31:48.424043 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-08 05:31:48.424056 | orchestrator | Sunday 08 February 2026 05:31:37 +0000 (0:00:02.022) 0:00:03.430 ******* 2026-02-08 05:31:48.424072 | orchestrator | ok: [testbed-node-0] => (item=enable_opensearch_True) 2026-02-08 05:31:48.424085 | orchestrator | ok: [testbed-node-1] => (item=enable_opensearch_True) 2026-02-08 05:31:48.424100 | orchestrator | ok: [testbed-node-2] => (item=enable_opensearch_True) 2026-02-08 05:31:48.424114 | orchestrator | 2026-02-08 05:31:48.424127 | orchestrator | PLAY [Apply role opensearch] *************************************************** 2026-02-08 05:31:48.424140 | orchestrator | 2026-02-08 05:31:48.424154 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-02-08 05:31:48.424167 | orchestrator | Sunday 08 February 2026 05:31:39 +0000 (0:00:02.046) 0:00:05.477 ******* 2026-02-08 05:31:48.424181 | orchestrator | included: /ansible/roles/opensearch/tasks/upgrade.yml for 
testbed-node-0, testbed-node-1, testbed-node-2 2026-02-08 05:31:48.424196 | orchestrator | 2026-02-08 05:31:48.424209 | orchestrator | TASK [opensearch : Setting sysctl values] ************************************** 2026-02-08 05:31:48.424223 | orchestrator | Sunday 08 February 2026 05:31:42 +0000 (0:00:02.309) 0:00:07.786 ******* 2026-02-08 05:31:48.424237 | orchestrator | ok: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-02-08 05:31:48.424251 | orchestrator | ok: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-02-08 05:31:48.424265 | orchestrator | ok: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-02-08 05:31:48.424279 | orchestrator | 2026-02-08 05:31:48.424294 | orchestrator | TASK [opensearch : Ensuring config directories exist] ************************** 2026-02-08 05:31:48.424309 | orchestrator | Sunday 08 February 2026 05:31:44 +0000 (0:00:02.069) 0:00:09.856 ******* 2026-02-08 05:31:48.424328 | orchestrator | ok: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-02-08 05:31:48.424379 | orchestrator | ok: [testbed-node-0] => (item={'key': 
'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-02-08 05:31:48.424410 | orchestrator | ok: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-02-08 05:31:48.424423 | orchestrator | ok: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': 
{'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-02-08 05:31:48.424435 | orchestrator | ok: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-02-08 05:31:48.424458 | orchestrator | ok: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-02-08 05:31:48.424468 | orchestrator | 2026-02-08 05:31:48.424478 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-02-08 05:31:48.424487 | orchestrator | Sunday 08 February 2026 05:31:46 +0000 (0:00:02.583) 0:00:12.439 ******* 2026-02-08 05:31:48.424497 | orchestrator | included: /ansible/roles/opensearch/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-08 05:31:48.424507 | orchestrator | 2026-02-08 05:31:48.424523 | orchestrator | TASK 
[service-cert-copy : opensearch | Copying over extra CA certificates] ***** 2026-02-08 05:31:53.862666 | orchestrator | Sunday 08 February 2026 05:31:48 +0000 (0:00:01.647) 0:00:14.087 ******* 2026-02-08 05:31:53.862807 | orchestrator | ok: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-02-08 05:31:53.862899 | orchestrator | ok: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 
'backend_http_extra': ['option httpchk']}}}}) 2026-02-08 05:31:53.862956 | orchestrator | ok: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-02-08 05:31:53.862999 | orchestrator | ok: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-02-08 05:31:53.863049 | orchestrator | ok: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-02-08 05:31:53.863072 | orchestrator | ok: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-02-08 05:31:53.863107 | orchestrator | 2026-02-08 05:31:53.863127 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS certificate] *** 2026-02-08 05:31:53.863148 | orchestrator | Sunday 08 February 2026 05:31:52 +0000 (0:00:03.617) 0:00:17.705 ******* 2026-02-08 05:31:53.863175 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-02-08 05:31:53.863213 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 
'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-02-08 05:31:55.732750 | orchestrator | skipping: [testbed-node-0] 2026-02-08 05:31:55.732908 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': 
False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-02-08 05:31:55.732932 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-02-08 05:31:55.732970 | orchestrator | skipping: [testbed-node-1] 2026-02-08 05:31:55.732998 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-02-08 05:31:55.733032 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-02-08 05:31:55.733045 | orchestrator | skipping: [testbed-node-2] 2026-02-08 05:31:55.733057 | orchestrator | 2026-02-08 05:31:55.733070 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS key] *** 2026-02-08 05:31:55.733083 | orchestrator | Sunday 08 February 2026 05:31:53 +0000 (0:00:01.827) 0:00:19.532 ******* 2026-02-08 05:31:55.733095 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-02-08 05:31:55.733116 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-02-08 05:31:55.733134 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 
'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-02-08 05:31:55.733146 | orchestrator | skipping: [testbed-node-0] 2026-02-08 05:31:55.733165 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 
'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-02-08 05:31:59.497069 | orchestrator | skipping: [testbed-node-1] 2026-02-08 05:31:59.497201 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-02-08 05:31:59.497287 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-02-08 05:31:59.497306 | orchestrator | skipping: [testbed-node-2] 2026-02-08 05:31:59.497319 | orchestrator | 2026-02-08 05:31:59.497331 | orchestrator | TASK [opensearch : Copying over config.json files for services] **************** 2026-02-08 05:31:59.497344 | orchestrator | Sunday 08 February 2026 05:31:55 +0000 (0:00:01.868) 0:00:21.401 ******* 2026-02-08 05:31:59.497356 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-02-08 05:31:59.497387 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 
'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-02-08 05:31:59.497408 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-02-08 05:31:59.497426 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': 
{'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-02-08 05:31:59.497448 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-02-08 05:31:59.497480 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-02-08 05:32:13.643035 | orchestrator | 2026-02-08 05:32:13.643148 | orchestrator | TASK [opensearch : Copying over opensearch service config file] **************** 2026-02-08 05:32:13.643165 | orchestrator | Sunday 08 February 2026 05:31:59 +0000 (0:00:03.758) 0:00:25.159 ******* 2026-02-08 05:32:13.643178 | orchestrator | ok: [testbed-node-1] 2026-02-08 05:32:13.643190 | orchestrator | ok: [testbed-node-0] 2026-02-08 05:32:13.643201 | orchestrator | ok: [testbed-node-2] 2026-02-08 05:32:13.643212 | orchestrator | 2026-02-08 
05:32:13.643223 | orchestrator | TASK [opensearch : Copying over opensearch-dashboards config file] ************* 2026-02-08 05:32:13.643234 | orchestrator | Sunday 08 February 2026 05:32:03 +0000 (0:00:03.554) 0:00:28.714 ******* 2026-02-08 05:32:13.643245 | orchestrator | ok: [testbed-node-0] 2026-02-08 05:32:13.643256 | orchestrator | ok: [testbed-node-1] 2026-02-08 05:32:13.643267 | orchestrator | ok: [testbed-node-2] 2026-02-08 05:32:13.643278 | orchestrator | 2026-02-08 05:32:13.643289 | orchestrator | TASK [service-check-containers : opensearch | Check containers] **************** 2026-02-08 05:32:13.643300 | orchestrator | Sunday 08 February 2026 05:32:06 +0000 (0:00:03.369) 0:00:32.083 ******* 2026-02-08 05:32:13.643313 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-02-08 05:32:13.643345 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 
'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-02-08 05:32:13.643358 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-02-08 05:32:13.643411 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-02-08 05:32:13.643432 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-02-08 05:32:13.643446 | 
orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-02-08 05:32:13.643458 | orchestrator | 2026-02-08 05:32:13.643477 | orchestrator | TASK [service-check-containers : opensearch | Notify handlers to restart containers] *** 2026-02-08 05:32:13.643489 | orchestrator | Sunday 08 February 2026 05:32:10 +0000 (0:00:03.735) 0:00:35.818 ******* 2026-02-08 05:32:13.643501 | orchestrator | changed: [testbed-node-0] => { 2026-02-08 05:32:13.643514 | orchestrator |  "msg": "Notifying handlers" 2026-02-08 05:32:13.643542 | orchestrator | } 2026-02-08 05:32:13.643553 | orchestrator | changed: [testbed-node-1] => { 2026-02-08 05:32:13.643565 | orchestrator |  "msg": "Notifying handlers" 2026-02-08 05:32:13.643576 | orchestrator | } 2026-02-08 05:32:13.643589 | orchestrator | changed: [testbed-node-2] => { 2026-02-08 05:32:13.643601 | orchestrator 
|  "msg": "Notifying handlers" 2026-02-08 05:32:13.643614 | orchestrator | } 2026-02-08 05:32:13.643628 | orchestrator | 2026-02-08 05:32:13.643641 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-02-08 05:32:13.643654 | orchestrator | Sunday 08 February 2026 05:32:11 +0000 (0:00:01.408) 0:00:37.227 ******* 2026-02-08 05:32:13.643676 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-02-08 05:35:26.544433 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-02-08 05:35:26.544524 | orchestrator | skipping: [testbed-node-0] 2026-02-08 05:35:26.544546 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-02-08 05:35:26.544568 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': 
['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-02-08 05:35:26.544573 | orchestrator | skipping: [testbed-node-1] 2026-02-08 05:35:26.544588 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-02-08 05:35:26.544596 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 
'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-02-08 05:35:26.544601 | orchestrator | skipping: [testbed-node-2] 2026-02-08 05:35:26.544606 | orchestrator | 2026-02-08 05:35:26.544611 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-02-08 05:35:26.544617 | orchestrator | Sunday 08 February 2026 05:32:13 +0000 (0:00:02.082) 0:00:39.310 ******* 2026-02-08 05:35:26.544622 | orchestrator | skipping: [testbed-node-0] 2026-02-08 05:35:26.544626 | orchestrator | skipping: [testbed-node-1] 2026-02-08 05:35:26.544631 | orchestrator | skipping: [testbed-node-2] 2026-02-08 05:35:26.544635 | orchestrator | 2026-02-08 05:35:26.544640 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2026-02-08 05:35:26.544651 | orchestrator | Sunday 08 February 2026 05:32:15 +0000 (0:00:01.635) 0:00:40.946 ******* 2026-02-08 05:35:26.544655 | orchestrator | 
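Annotation (not part of the job output): each service item dumped above carries a kolla-style `healthcheck` dict with string-valued seconds (`interval`, `retries`, `start_period`, `timeout`) and a `test` list whose first element is the `CMD-SHELL` marker. As a sketch of how such a dict maps onto Docker's healthcheck options — the helper name and the exact flag mapping are illustrative assumptions, not taken from kolla-ansible itself:

```python
# Sketch: translate a kolla-style 'healthcheck' dict (as dumped in the
# log items above) into `docker run` health flags. The dict shape mirrors
# the log; the helper and flag mapping are illustrative assumptions.

def healthcheck_to_docker_flags(hc: dict) -> list[str]:
    """Map kolla-style healthcheck values (seconds as strings) to docker flags."""
    flags = [
        f"--health-interval={hc['interval']}s",
        f"--health-retries={hc['retries']}",
        f"--health-start-period={hc['start_period']}s",
        f"--health-timeout={hc['timeout']}s",
    ]
    # 'test' is ['CMD-SHELL', '<command>']: everything after the marker
    # is the shell command to run inside the container.
    kind, *cmd = hc["test"]
    if kind == "CMD-SHELL":
        flags.append(f"--health-cmd={' '.join(cmd)}")
    return flags

# Values copied from the testbed-node-0 opensearch item above.
healthcheck = {
    "interval": "30", "retries": "3", "start_period": "5",
    "test": ["CMD-SHELL", "healthcheck_curl http://192.168.16.10:9200"],
    "timeout": "30",
}
print(healthcheck_to_docker_flags(healthcheck))
```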
2026-02-08 05:35:26.544660 | orchestrator | TASK [opensearch : Flush handlers] *********************************************
2026-02-08 05:35:26.544664 | orchestrator | Sunday 08 February 2026 05:32:15 +0000 (0:00:00.474) 0:00:41.420 *******
2026-02-08 05:35:26.544669 | orchestrator |
2026-02-08 05:35:26.544673 | orchestrator | TASK [opensearch : Flush handlers] *********************************************
2026-02-08 05:35:26.544678 | orchestrator | Sunday 08 February 2026 05:32:16 +0000 (0:00:00.420) 0:00:41.841 *******
2026-02-08 05:35:26.544682 | orchestrator |
2026-02-08 05:35:26.544686 | orchestrator | RUNNING HANDLER [opensearch : Disable shard allocation] ************************
2026-02-08 05:35:26.544691 | orchestrator | Sunday 08 February 2026 05:32:16 +0000 (0:00:00.789) 0:00:42.631 *******
2026-02-08 05:35:26.544695 | orchestrator | ok: [testbed-node-0]
2026-02-08 05:35:26.544700 | orchestrator |
2026-02-08 05:35:26.544705 | orchestrator | RUNNING HANDLER [opensearch : Perform a flush] *********************************
2026-02-08 05:35:26.544709 | orchestrator | Sunday 08 February 2026 05:32:20 +0000 (0:00:03.622) 0:00:46.253 *******
2026-02-08 05:35:26.544713 | orchestrator | ok: [testbed-node-0]
2026-02-08 05:35:26.544717 | orchestrator |
2026-02-08 05:35:26.544722 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch container] ********************
2026-02-08 05:35:26.544726 | orchestrator | Sunday 08 February 2026 05:32:30 +0000 (0:00:10.246) 0:00:56.500 *******
2026-02-08 05:35:26.544731 | orchestrator | changed: [testbed-node-1]
2026-02-08 05:35:26.544735 | orchestrator | changed: [testbed-node-2]
2026-02-08 05:35:26.544739 | orchestrator | changed: [testbed-node-0]
2026-02-08 05:35:26.544744 | orchestrator |
2026-02-08 05:35:26.544748 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch-dashboards container] *********
2026-02-08 05:35:26.544753 | orchestrator | Sunday 08 February 2026 05:33:41 +0000 (0:01:10.888) 0:02:07.388 *******
2026-02-08 05:35:26.544757 | orchestrator | changed: [testbed-node-1]
2026-02-08 05:35:26.544761 | orchestrator | changed: [testbed-node-2]
2026-02-08 05:35:26.544766 | orchestrator | changed: [testbed-node-0]
2026-02-08 05:35:26.544770 | orchestrator |
2026-02-08 05:35:26.544774 | orchestrator | TASK [opensearch : include_tasks] **********************************************
2026-02-08 05:35:26.544779 | orchestrator | Sunday 08 February 2026 05:35:16 +0000 (0:01:35.047) 0:03:42.436 *******
2026-02-08 05:35:26.544783 | orchestrator | included: /ansible/roles/opensearch/tasks/post-config.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-08 05:35:26.544788 | orchestrator |
2026-02-08 05:35:26.544792 | orchestrator | TASK [opensearch : Wait for OpenSearch to become ready] ************************
2026-02-08 05:35:26.544797 | orchestrator | Sunday 08 February 2026 05:35:18 +0000 (0:00:01.745) 0:03:44.181 *******
2026-02-08 05:35:26.544801 | orchestrator | ok: [testbed-node-0]
2026-02-08 05:35:26.544805 | orchestrator |
2026-02-08 05:35:26.544810 | orchestrator | TASK [opensearch : Check if a log retention policy exists] *********************
2026-02-08 05:35:26.544814 | orchestrator | Sunday 08 February 2026 05:35:21 +0000 (0:00:03.417) 0:03:47.598 *******
2026-02-08 05:35:26.544818 | orchestrator | ok: [testbed-node-0]
2026-02-08 05:35:26.544823 | orchestrator |
2026-02-08 05:35:26.544827 | orchestrator | TASK [opensearch : Create new log retention policy] ****************************
2026-02-08 05:35:26.544831 | orchestrator | Sunday 08 February 2026 05:35:25 +0000 (0:00:03.406) 0:03:51.004 *******
2026-02-08 05:35:26.544836 | orchestrator | skipping: [testbed-node-0]
2026-02-08 05:35:26.544840 | orchestrator |
2026-02-08 05:35:26.544845 | orchestrator | TASK [opensearch : Apply retention policy to existing indices] *****************
2026-02-08 05:35:26.544852 | orchestrator | Sunday 08 February 2026 05:35:26 +0000 (0:00:01.204)
0:03:52.209 ******* 2026-02-08 05:35:28.897480 | orchestrator | skipping: [testbed-node-0] 2026-02-08 05:35:28.897583 | orchestrator | 2026-02-08 05:35:28.897600 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-08 05:35:28.897614 | orchestrator | testbed-node-0 : ok=19  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-02-08 05:35:28.897652 | orchestrator | testbed-node-1 : ok=15  changed=5  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-02-08 05:35:28.897664 | orchestrator | testbed-node-2 : ok=15  changed=5  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-02-08 05:35:28.897675 | orchestrator | 2026-02-08 05:35:28.897686 | orchestrator | 2026-02-08 05:35:28.897697 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-08 05:35:28.897709 | orchestrator | Sunday 08 February 2026 05:35:28 +0000 (0:00:01.973) 0:03:54.183 ******* 2026-02-08 05:35:28.897720 | orchestrator | =============================================================================== 2026-02-08 05:35:28.897731 | orchestrator | opensearch : Restart opensearch-dashboards container ------------------- 95.05s 2026-02-08 05:35:28.897742 | orchestrator | opensearch : Restart opensearch container ------------------------------ 70.89s 2026-02-08 05:35:28.897753 | orchestrator | opensearch : Perform a flush ------------------------------------------- 10.25s 2026-02-08 05:35:28.897763 | orchestrator | opensearch : Copying over config.json files for services ---------------- 3.76s 2026-02-08 05:35:28.897774 | orchestrator | service-check-containers : opensearch | Check containers ---------------- 3.73s 2026-02-08 05:35:28.897833 | orchestrator | opensearch : Disable shard allocation ----------------------------------- 3.62s 2026-02-08 05:35:28.897847 | orchestrator | service-cert-copy : opensearch | Copying over extra CA certificates ----- 3.62s 2026-02-08 
05:35:28.897858 | orchestrator | opensearch : Copying over opensearch service config file ---------------- 3.55s 2026-02-08 05:35:28.897869 | orchestrator | opensearch : Wait for OpenSearch to become ready ------------------------ 3.42s 2026-02-08 05:35:28.898126 | orchestrator | opensearch : Check if a log retention policy exists --------------------- 3.41s 2026-02-08 05:35:28.898143 | orchestrator | opensearch : Copying over opensearch-dashboards config file ------------- 3.37s 2026-02-08 05:35:28.898156 | orchestrator | opensearch : Ensuring config directories exist -------------------------- 2.58s 2026-02-08 05:35:28.898168 | orchestrator | opensearch : include_tasks ---------------------------------------------- 2.31s 2026-02-08 05:35:28.898180 | orchestrator | service-check-containers : Include tasks -------------------------------- 2.08s 2026-02-08 05:35:28.898192 | orchestrator | opensearch : Setting sysctl values -------------------------------------- 2.07s 2026-02-08 05:35:28.898205 | orchestrator | Group hosts based on enabled services ----------------------------------- 2.05s 2026-02-08 05:35:28.898218 | orchestrator | Group hosts based on Kolla action --------------------------------------- 2.02s 2026-02-08 05:35:28.898231 | orchestrator | opensearch : Apply retention policy to existing indices ----------------- 1.97s 2026-02-08 05:35:28.898244 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS key --- 1.87s 2026-02-08 05:35:28.898257 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS certificate --- 1.83s 2026-02-08 05:35:29.267084 | orchestrator | + osism apply -a upgrade memcached 2026-02-08 05:35:31.389994 | orchestrator | 2026-02-08 05:35:31 | INFO  | Task 174b223a-50af-403a-bba6-6be32009ff5a (memcached) was prepared for execution. 
2026-02-08 05:35:31.390153 | orchestrator | 2026-02-08 05:35:31 | INFO  | It takes a moment until task 174b223a-50af-403a-bba6-6be32009ff5a (memcached) has been started and output is visible here. 2026-02-08 05:36:05.238851 | orchestrator | 2026-02-08 05:36:05.238991 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-08 05:36:05.239003 | orchestrator | 2026-02-08 05:36:05.239011 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-08 05:36:05.239018 | orchestrator | Sunday 08 February 2026 05:35:37 +0000 (0:00:01.960) 0:00:01.960 ******* 2026-02-08 05:36:05.239025 | orchestrator | ok: [testbed-node-0] 2026-02-08 05:36:05.239033 | orchestrator | ok: [testbed-node-1] 2026-02-08 05:36:05.239040 | orchestrator | ok: [testbed-node-2] 2026-02-08 05:36:05.239066 | orchestrator | 2026-02-08 05:36:05.239073 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-08 05:36:05.239080 | orchestrator | Sunday 08 February 2026 05:35:39 +0000 (0:00:01.950) 0:00:03.910 ******* 2026-02-08 05:36:05.239087 | orchestrator | ok: [testbed-node-0] => (item=enable_memcached_True) 2026-02-08 05:36:05.239094 | orchestrator | ok: [testbed-node-1] => (item=enable_memcached_True) 2026-02-08 05:36:05.239100 | orchestrator | ok: [testbed-node-2] => (item=enable_memcached_True) 2026-02-08 05:36:05.239106 | orchestrator | 2026-02-08 05:36:05.239113 | orchestrator | PLAY [Apply role memcached] **************************************************** 2026-02-08 05:36:05.239119 | orchestrator | 2026-02-08 05:36:05.239125 | orchestrator | TASK [memcached : include_tasks] *********************************************** 2026-02-08 05:36:05.239132 | orchestrator | Sunday 08 February 2026 05:35:41 +0000 (0:00:01.966) 0:00:05.877 ******* 2026-02-08 05:36:05.239139 | orchestrator | included: /ansible/roles/memcached/tasks/upgrade.yml for testbed-node-0, 
testbed-node-1, testbed-node-2 2026-02-08 05:36:05.239145 | orchestrator | 2026-02-08 05:36:05.239151 | orchestrator | TASK [memcached : Ensuring config directories exist] *************************** 2026-02-08 05:36:05.239158 | orchestrator | Sunday 08 February 2026 05:35:43 +0000 (0:00:01.894) 0:00:07.771 ******* 2026-02-08 05:36:05.239164 | orchestrator | ok: [testbed-node-1] => (item=memcached) 2026-02-08 05:36:05.239170 | orchestrator | ok: [testbed-node-2] => (item=memcached) 2026-02-08 05:36:05.239177 | orchestrator | ok: [testbed-node-0] => (item=memcached) 2026-02-08 05:36:05.239183 | orchestrator | 2026-02-08 05:36:05.239189 | orchestrator | TASK [memcached : Copying over config.json files for services] ***************** 2026-02-08 05:36:05.239195 | orchestrator | Sunday 08 February 2026 05:35:45 +0000 (0:00:02.084) 0:00:09.856 ******* 2026-02-08 05:36:05.239201 | orchestrator | ok: [testbed-node-1] => (item=memcached) 2026-02-08 05:36:05.239208 | orchestrator | ok: [testbed-node-0] => (item=memcached) 2026-02-08 05:36:05.239214 | orchestrator | ok: [testbed-node-2] => (item=memcached) 2026-02-08 05:36:05.239220 | orchestrator | 2026-02-08 05:36:05.239226 | orchestrator | TASK [service-check-containers : memcached | Check containers] ***************** 2026-02-08 05:36:05.239232 | orchestrator | Sunday 08 February 2026 05:35:48 +0000 (0:00:02.749) 0:00:12.606 ******* 2026-02-08 05:36:05.239263 | orchestrator | changed: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/2025.1/memcached:1.6.24.20251208', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': 
{'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-02-08 05:36:05.239272 | orchestrator | changed: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/2025.1/memcached:1.6.24.20251208', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-02-08 05:36:05.239300 | orchestrator | changed: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/2025.1/memcached:1.6.24.20251208', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-02-08 05:36:05.239313 | orchestrator | 2026-02-08 05:36:05.239319 | orchestrator | TASK [service-check-containers : memcached | Notify handlers to restart containers] *** 
2026-02-08 05:36:05.239326 | orchestrator | Sunday 08 February 2026 05:35:50 +0000 (0:00:02.281) 0:00:14.888 ******* 2026-02-08 05:36:05.239332 | orchestrator | changed: [testbed-node-0] => { 2026-02-08 05:36:05.239338 | orchestrator |  "msg": "Notifying handlers" 2026-02-08 05:36:05.239345 | orchestrator | } 2026-02-08 05:36:05.239351 | orchestrator | changed: [testbed-node-1] => { 2026-02-08 05:36:05.239358 | orchestrator |  "msg": "Notifying handlers" 2026-02-08 05:36:05.239364 | orchestrator | } 2026-02-08 05:36:05.239370 | orchestrator | changed: [testbed-node-2] => { 2026-02-08 05:36:05.239376 | orchestrator |  "msg": "Notifying handlers" 2026-02-08 05:36:05.239382 | orchestrator | } 2026-02-08 05:36:05.239393 | orchestrator | 2026-02-08 05:36:05.239403 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-02-08 05:36:05.239414 | orchestrator | Sunday 08 February 2026 05:35:52 +0000 (0:00:01.407) 0:00:16.295 ******* 2026-02-08 05:36:05.239425 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/2025.1/memcached:1.6.24.20251208', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-02-08 05:36:05.239436 | orchestrator | skipping: [testbed-node-0] 2026-02-08 05:36:05.239447 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 
'memcached', 'image': 'registry.osism.tech/kolla/release/2025.1/memcached:1.6.24.20251208', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-02-08 05:36:05.239459 | orchestrator | skipping: [testbed-node-1] 2026-02-08 05:36:05.239476 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/2025.1/memcached:1.6.24.20251208', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-02-08 05:36:05.239495 | orchestrator | skipping: [testbed-node-2] 2026-02-08 05:36:05.239506 | orchestrator | 2026-02-08 05:36:05.239515 | orchestrator | RUNNING HANDLER [memcached : Restart memcached container] ********************** 2026-02-08 05:36:05.239522 | orchestrator | Sunday 08 February 2026 05:35:54 +0000 (0:00:02.122) 0:00:18.418 ******* 2026-02-08 05:36:05.239530 | orchestrator | changed: [testbed-node-1] 2026-02-08 
05:36:05.239537 | orchestrator | changed: [testbed-node-0] 2026-02-08 05:36:05.239545 | orchestrator | changed: [testbed-node-2] 2026-02-08 05:36:05.239552 | orchestrator | 2026-02-08 05:36:05.239560 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-08 05:36:05.239569 | orchestrator | testbed-node-0 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-08 05:36:05.239578 | orchestrator | testbed-node-1 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-08 05:36:05.239585 | orchestrator | testbed-node-2 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-08 05:36:05.239592 | orchestrator | 2026-02-08 05:36:05.239600 | orchestrator | 2026-02-08 05:36:05.239607 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-08 05:36:05.239620 | orchestrator | Sunday 08 February 2026 05:36:05 +0000 (0:00:10.911) 0:00:29.329 ******* 2026-02-08 05:36:05.583626 | orchestrator | =============================================================================== 2026-02-08 05:36:05.583738 | orchestrator | memcached : Restart memcached container -------------------------------- 10.91s 2026-02-08 05:36:05.583759 | orchestrator | memcached : Copying over config.json files for services ----------------- 2.75s 2026-02-08 05:36:05.583774 | orchestrator | service-check-containers : memcached | Check containers ----------------- 2.28s 2026-02-08 05:36:05.583789 | orchestrator | service-check-containers : Include tasks -------------------------------- 2.12s 2026-02-08 05:36:05.583801 | orchestrator | memcached : Ensuring config directories exist --------------------------- 2.08s 2026-02-08 05:36:05.583809 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.97s 2026-02-08 05:36:05.583817 | orchestrator | Group hosts based on Kolla action 
--------------------------------------- 1.95s 2026-02-08 05:36:05.583829 | orchestrator | memcached : include_tasks ----------------------------------------------- 1.89s 2026-02-08 05:36:05.583843 | orchestrator | service-check-containers : memcached | Notify handlers to restart containers --- 1.41s 2026-02-08 05:36:05.909839 | orchestrator | + osism apply -a upgrade redis 2026-02-08 05:36:07.959040 | orchestrator | 2026-02-08 05:36:07 | INFO  | Task b1715855-377d-4821-8757-189283e65673 (redis) was prepared for execution. 2026-02-08 05:36:07.959164 | orchestrator | 2026-02-08 05:36:07 | INFO  | It takes a moment until task b1715855-377d-4821-8757-189283e65673 (redis) has been started and output is visible here. 2026-02-08 05:36:20.248977 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_play_start) in callback plugin 2026-02-08 05:36:20.249093 | orchestrator | (): Expecting value: line 2 column 1 (char 1) 2026-02-08 05:36:20.249122 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_task_start) in callback plugin 2026-02-08 05:36:20.249135 | orchestrator | (): 'NoneType' object is not subscriptable 2026-02-08 05:36:20.249158 | orchestrator | 2026-02-08 05:36:20.249170 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-08 05:36:20.249182 | orchestrator | 2026-02-08 05:36:20.249193 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-08 05:36:20.249205 | orchestrator | Sunday 08 February 2026 05:36:13 +0000 (0:00:01.269) 0:00:01.269 ******* 2026-02-08 05:36:20.249242 | orchestrator | ok: [testbed-node-0] 2026-02-08 05:36:20.249255 | orchestrator | ok: [testbed-node-1] 2026-02-08 05:36:20.249266 | orchestrator | ok: [testbed-node-2] 2026-02-08 05:36:20.249277 | orchestrator | 2026-02-08 05:36:20.249288 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-08 05:36:20.249299 | 
orchestrator | Sunday 08 February 2026 05:36:14 +0000 (0:00:00.925) 0:00:02.195 ******* 2026-02-08 05:36:20.249310 | orchestrator | ok: [testbed-node-0] => (item=enable_redis_True) 2026-02-08 05:36:20.249322 | orchestrator | ok: [testbed-node-1] => (item=enable_redis_True) 2026-02-08 05:36:20.249333 | orchestrator | ok: [testbed-node-2] => (item=enable_redis_True) 2026-02-08 05:36:20.249343 | orchestrator | 2026-02-08 05:36:20.249354 | orchestrator | PLAY [Apply role redis] ******************************************************** 2026-02-08 05:36:20.249365 | orchestrator | 2026-02-08 05:36:20.249376 | orchestrator | TASK [redis : include_tasks] *************************************************** 2026-02-08 05:36:20.249402 | orchestrator | Sunday 08 February 2026 05:36:15 +0000 (0:00:00.938) 0:00:03.133 ******* 2026-02-08 05:36:20.249413 | orchestrator | included: /ansible/roles/redis/tasks/upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-08 05:36:20.249425 | orchestrator | 2026-02-08 05:36:20.249436 | orchestrator | TASK [redis : Ensuring config directories exist] ******************************* 2026-02-08 05:36:20.249447 | orchestrator | Sunday 08 February 2026 05:36:16 +0000 (0:00:01.003) 0:00:04.137 ******* 2026-02-08 05:36:20.249461 | orchestrator | ok: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20251208', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-02-08 05:36:20.249480 | orchestrator | ok: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20251208', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-02-08 05:36:20.249494 | orchestrator | ok: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20251208', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-02-08 05:36:20.249509 | orchestrator | ok: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20251208', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-02-08 05:36:20.249542 | orchestrator | ok: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': 
'/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20251208', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-02-08 05:36:20.249568 | orchestrator | ok: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20251208', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-02-08 05:36:20.249583 | orchestrator | 2026-02-08 05:36:20.249597 | orchestrator | TASK [redis : Copying over default config.json files] ************************** 2026-02-08 05:36:20.249609 | orchestrator | Sunday 08 February 2026 05:36:18 +0000 (0:00:01.453) 0:00:05.590 ******* 2026-02-08 05:36:20.249622 | orchestrator | ok: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20251208', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-02-08 05:36:20.249636 | orchestrator | ok: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20251208', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-02-08 05:36:20.249650 | orchestrator | ok: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20251208', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-02-08 05:36:20.249663 | orchestrator | ok: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20251208', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-02-08 05:36:20.249691 | orchestrator | ok: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20251208', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-02-08 05:36:25.224393 | orchestrator | ok: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20251208', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-02-08 05:36:25.224495 | orchestrator | 2026-02-08 05:36:25.224528 | orchestrator | TASK [redis : Copying over redis config files] ********************************* 2026-02-08 05:36:25.224541 | orchestrator | Sunday 08 February 2026 05:36:20 +0000 (0:00:02.119) 0:00:07.709 ******* 2026-02-08 05:36:25.224554 | orchestrator | ok: [testbed-node-0] => 
(item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20251208', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-02-08 05:36:25.224566 | orchestrator | ok: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20251208', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-02-08 05:36:25.224577 | orchestrator | ok: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20251208', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-02-08 05:36:25.224588 | orchestrator | ok: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': 
'/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20251208', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-02-08 05:36:25.224619 | orchestrator | ok: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20251208', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-02-08 05:36:25.224647 | orchestrator | ok: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20251208', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 
2026-02-08 05:36:25.224658 | orchestrator | 2026-02-08 05:36:25.224668 | orchestrator | TASK [service-check-containers : redis | Check containers] ********************* 2026-02-08 05:36:25.224684 | orchestrator | Sunday 08 February 2026 05:36:23 +0000 (0:00:02.881) 0:00:10.591 ******* 2026-02-08 05:36:25.224695 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20251208', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-02-08 05:36:25.224706 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20251208', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-02-08 05:36:25.224716 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20251208', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-02-08 05:36:25.224726 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20251208', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-02-08 05:36:25.224746 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20251208', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-02-08 05:36:25.224764 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20251208', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-02-08 05:36:47.998842 | orchestrator |
2026-02-08 05:36:47.999032 | orchestrator | TASK [service-check-containers : redis | Notify handlers to restart containers] ***
2026-02-08 05:36:47.999053 | orchestrator | Sunday 08 February 2026 05:36:25 +0000 (0:00:02.091) 0:00:12.682 *******
2026-02-08 05:36:47.999066 | orchestrator | changed: [testbed-node-0] => {
2026-02-08 05:36:47.999079 | orchestrator |  "msg": "Notifying handlers"
2026-02-08 05:36:47.999090 | orchestrator | }
2026-02-08 05:36:47.999102 | orchestrator | changed: [testbed-node-1] => {
2026-02-08 05:36:47.999114 | orchestrator |  "msg": "Notifying handlers"
2026-02-08 05:36:47.999125 | orchestrator | }
2026-02-08 05:36:47.999136 | orchestrator | changed: [testbed-node-2] => {
2026-02-08 05:36:47.999147 | orchestrator |  "msg": "Notifying handlers"
2026-02-08 05:36:47.999157 | orchestrator | }
2026-02-08 05:36:47.999169 | orchestrator |
2026-02-08 05:36:47.999180 | orchestrator | TASK [service-check-containers : Include tasks] ********************************
2026-02-08 05:36:47.999236 | orchestrator | Sunday 08 February 2026 05:36:25 +0000 (0:00:00.581) 0:00:13.264 *******
2026-02-08 05:36:47.999252 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20251208', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/',
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})  2026-02-08 05:36:47.999267 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20251208', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})  2026-02-08 05:36:47.999302 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_handler_task_start) in callback 2026-02-08 05:36:47.999315 | orchestrator | plugin (): 'NoneType' object is not subscriptable 2026-02-08 05:36:47.999341 | orchestrator | skipping: [testbed-node-0] 2026-02-08 05:36:47.999354 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20251208', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})  2026-02-08 05:36:47.999369 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 
'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20251208', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})  2026-02-08 05:36:47.999382 | orchestrator | skipping: [testbed-node-1] 2026-02-08 05:36:47.999416 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20251208', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})  2026-02-08 05:36:47.999436 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20251208', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-02-08 05:36:47.999451 | orchestrator | skipping: [testbed-node-2]
2026-02-08 05:36:47.999464 | orchestrator |
2026-02-08 05:36:47.999477 | orchestrator | TASK [redis : Flush handlers] **************************************************
2026-02-08 05:36:47.999491 | orchestrator | Sunday 08 February 2026 05:36:26 +0000 (0:00:01.039) 0:00:14.303 *******
2026-02-08 05:36:47.999504 | orchestrator |
2026-02-08 05:36:47.999517 | orchestrator | TASK [redis : Flush handlers] **************************************************
2026-02-08 05:36:47.999530 | orchestrator | Sunday 08 February 2026 05:36:26 +0000 (0:00:00.073) 0:00:14.377 *******
2026-02-08 05:36:47.999542 | orchestrator |
2026-02-08 05:36:47.999555 | orchestrator | TASK [redis : Flush handlers] **************************************************
2026-02-08 05:36:47.999576 | orchestrator | Sunday 08 February 2026 05:36:26 +0000 (0:00:00.073) 0:00:14.450 *******
2026-02-08 05:36:47.999589 | orchestrator |
2026-02-08 05:36:47.999602 | orchestrator | RUNNING HANDLER [redis : Restart redis container] ******************************
2026-02-08 05:36:47.999614 | orchestrator | Sunday 08 February 2026 05:36:27 +0000 (0:00:00.075) 0:00:14.526 *******
2026-02-08 05:36:47.999627 | orchestrator | changed: [testbed-node-1]
2026-02-08 05:36:47.999640 | orchestrator | changed: [testbed-node-2]
2026-02-08 05:36:47.999653 | orchestrator | changed: [testbed-node-0]
2026-02-08 05:36:47.999666 | orchestrator |
2026-02-08 05:36:47.999679 | orchestrator | RUNNING HANDLER [redis : Restart redis-sentinel container] *********************
2026-02-08 05:36:47.999692 | orchestrator | Sunday 08 February 2026 05:36:37 +0000 (0:00:09.977) 0:00:24.503 *******
2026-02-08 05:36:47.999704 | orchestrator | changed: [testbed-node-1]
2026-02-08 05:36:47.999715 | orchestrator | changed: [testbed-node-2]
2026-02-08 05:36:47.999726 | orchestrator | changed: [testbed-node-0]
2026-02-08 05:36:47.999737 | orchestrator |
2026-02-08 05:36:47.999748 | orchestrator | PLAY RECAP *********************************************************************
2026-02-08 05:36:47.999760 | orchestrator | testbed-node-0 : ok=10  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-02-08 05:36:47.999773 | orchestrator | testbed-node-1 : ok=10  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-02-08 05:36:47.999784 | orchestrator | testbed-node-2 : ok=10  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-02-08 05:36:47.999795 | orchestrator |
2026-02-08 05:36:47.999805 | orchestrator |
2026-02-08 05:36:47.999816 | orchestrator | TASKS RECAP ********************************************************************
2026-02-08 05:36:47.999827 | orchestrator | Sunday 08 February 2026 05:36:47 +0000 (0:00:10.506) 0:00:35.010 *******
2026-02-08 05:36:47.999838 | orchestrator | ===============================================================================
2026-02-08 05:36:47.999849 | orchestrator | redis : Restart redis-sentinel container ------------------------------- 10.51s
2026-02-08 05:36:47.999859 | orchestrator | redis : Restart redis container ----------------------------------------- 9.98s
2026-02-08 05:36:47.999870 | orchestrator | redis : Copying over redis config files --------------------------------- 2.88s
2026-02-08 05:36:47.999881 | orchestrator | redis : Copying over default config.json files -------------------------- 2.12s
2026-02-08 05:36:47.999914 | orchestrator | service-check-containers : redis | Check containers --------------------- 2.09s
2026-02-08 05:36:47.999924 | orchestrator | redis : Ensuring config directories exist ------------------------------- 1.45s
2026-02-08 05:36:47.999935 | orchestrator | service-check-containers : Include tasks -------------------------------- 1.04s
2026-02-08 05:36:47.999946 | orchestrator | redis : include_tasks --------------------------------------------------- 1.00s
2026-02-08 05:36:47.999957 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.94s
2026-02-08 05:36:47.999967 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.93s
2026-02-08 05:36:47.999978 | orchestrator | service-check-containers : redis | Notify handlers to restart containers --- 0.58s
2026-02-08 05:36:47.999989 | orchestrator | redis : Flush handlers -------------------------------------------------- 0.22s
2026-02-08 05:36:48.519561 | orchestrator | + osism apply -a upgrade mariadb
2026-02-08 05:36:51.002750 | orchestrator | 2026-02-08 05:36:51 | INFO  | Task 96d986f1-b17b-49b4-9859-b4bce45c153c (mariadb) was prepared for execution.
2026-02-08 05:36:51.002854 | orchestrator | 2026-02-08 05:36:51 | INFO  | It takes a moment until task 96d986f1-b17b-49b4-9859-b4bce45c153c (mariadb) has been started and output is visible here.
2026-02-08 05:37:05.441975 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_play_start) in callback plugin
2026-02-08 05:37:05.442153 | orchestrator | (): Expecting value: line 2 column 1 (char 1)
2026-02-08 05:37:05.442219 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_task_start) in callback plugin
2026-02-08 05:37:05.442243 | orchestrator | (): 'NoneType' object is not subscriptable
2026-02-08 05:37:05.442264 | orchestrator |
2026-02-08 05:37:05.442275 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-02-08 05:37:05.442285 | orchestrator |
2026-02-08 05:37:05.442295 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-02-08 05:37:05.442305 | orchestrator | Sunday 08 February 2026 05:36:56 +0000 (0:00:01.093) 0:00:01.093 *******
2026-02-08 05:37:05.442315 | orchestrator | ok: [testbed-node-0]
2026-02-08 05:37:05.442327 | orchestrator | ok: [testbed-node-1]
2026-02-08 05:37:05.442336 | orchestrator | ok:
[testbed-node-2]
2026-02-08 05:37:05.442346 | orchestrator |
2026-02-08 05:37:05.442356 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-02-08 05:37:05.442366 | orchestrator | Sunday 08 February 2026 05:36:57 +0000 (0:00:00.857) 0:00:01.950 *******
2026-02-08 05:37:05.442376 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True)
2026-02-08 05:37:05.442386 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True)
2026-02-08 05:37:05.442396 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True)
2026-02-08 05:37:05.442406 | orchestrator |
2026-02-08 05:37:05.442416 | orchestrator | PLAY [Apply role mariadb] ******************************************************
2026-02-08 05:37:05.442428 | orchestrator |
2026-02-08 05:37:05.442444 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] ***************************
2026-02-08 05:37:05.442460 | orchestrator | Sunday 08 February 2026 05:36:58 +0000 (0:00:01.137) 0:00:03.088 *******
2026-02-08 05:37:05.442477 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-02-08 05:37:05.442494 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1)
2026-02-08 05:37:05.442510 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2)
2026-02-08 05:37:05.442526 | orchestrator |
2026-02-08 05:37:05.442543 | orchestrator | TASK [mariadb : include_tasks] *************************************************
2026-02-08 05:37:05.442561 | orchestrator | Sunday 08 February 2026 05:36:58 +0000 (0:00:00.461) 0:00:03.549 *******
2026-02-08 05:37:05.442577 | orchestrator | included: /ansible/roles/mariadb/tasks/upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-08 05:37:05.442595 | orchestrator |
2026-02-08 05:37:05.442632 | orchestrator | TASK [mariadb : Ensuring config directories exist] *****************************
2026-02-08 05:37:05.442650 | orchestrator | Sunday 08 February 2026 05:37:00 +0000
(0:00:01.320) 0:00:04.869 ******* 2026-02-08 05:37:05.442690 | orchestrator | ok: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-02-08 05:37:05.442764 | orchestrator | ok: [testbed-node-1] => (item={'key': 'mariadb', 
'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-02-08 05:37:05.442789 | orchestrator | ok: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-02-08 05:37:05.442818 | orchestrator | 2026-02-08 05:37:05.442838 | orchestrator | TASK [mariadb : Ensuring database backup config directory exists] ************** 2026-02-08 05:37:05.442855 | orchestrator | Sunday 08 February 2026 05:37:03 +0000 (0:00:03.302) 0:00:08.172 ******* 2026-02-08 05:37:05.442868 | 
orchestrator | skipping: [testbed-node-1]
2026-02-08 05:37:05.442879 | orchestrator | skipping: [testbed-node-2]
2026-02-08 05:37:05.442918 | orchestrator | ok: [testbed-node-0]
2026-02-08 05:37:05.442929 | orchestrator |
2026-02-08 05:37:05.442939 | orchestrator | TASK [mariadb : Copying over my.cnf for mariabackup] ***************************
2026-02-08 05:37:05.442949 | orchestrator | Sunday 08 February 2026 05:37:04 +0000 (0:00:00.605) 0:00:08.778 *******
2026-02-08 05:37:05.442958 | orchestrator | skipping: [testbed-node-1]
2026-02-08 05:37:05.442968 | orchestrator | skipping: [testbed-node-2]
2026-02-08 05:37:05.442978 | orchestrator | ok: [testbed-node-0]
2026-02-08 05:37:05.442988 | orchestrator |
2026-02-08 05:37:05.442998 | orchestrator | TASK [mariadb : Copying over config.json files for services] *******************
2026-02-08 05:37:05.443016 | orchestrator | Sunday 08 February 2026 05:37:05 +0000 (0:00:01.273) 0:00:10.052 *******
2026-02-08 05:37:18.331251 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''],
'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-02-08 05:37:18.331368 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server 
testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-02-08 05:37:18.331439 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 
192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-02-08 05:37:18.331456 | orchestrator |
2026-02-08 05:37:18.331471 | orchestrator | TASK [mariadb : Copying over config.json files for mariabackup] ****************
2026-02-08 05:37:18.331484 | orchestrator | Sunday 08 February 2026 05:37:09 +0000 (0:00:03.638) 0:00:13.690 *******
2026-02-08 05:37:18.331495 | orchestrator | skipping: [testbed-node-1]
2026-02-08 05:37:18.331507 | orchestrator | skipping: [testbed-node-2]
2026-02-08 05:37:18.331518 | orchestrator | ok: [testbed-node-0]
2026-02-08 05:37:18.331530 | orchestrator |
2026-02-08 05:37:18.331541 | orchestrator | TASK [mariadb : Copying over galera.cnf] ***************************************
2026-02-08 05:37:18.331553 | orchestrator | Sunday 08 February 2026 05:37:10 +0000 (0:00:01.076) 0:00:14.767 *******
2026-02-08 05:37:18.331564 | orchestrator | ok: [testbed-node-0]
2026-02-08 05:37:18.331575 | orchestrator | ok: [testbed-node-1]
2026-02-08 05:37:18.331585 | orchestrator | ok: [testbed-node-2]
2026-02-08 05:37:18.331596 | orchestrator |
2026-02-08 05:37:18.331607 | orchestrator | TASK [mariadb : include_tasks] *************************************************
2026-02-08 05:37:18.331618 | orchestrator | Sunday 08 February 2026 05:37:14 +0000 (0:00:04.217) 0:00:18.984 *******
2026-02-08 05:37:18.331630 | orchestrator | included: /ansible/roles/mariadb/tasks/copy-certs.yml for testbed-node-0,
testbed-node-1, testbed-node-2 2026-02-08 05:37:18.331649 | orchestrator | 2026-02-08 05:37:18.331661 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ******** 2026-02-08 05:37:18.331672 | orchestrator | Sunday 08 February 2026 05:37:15 +0000 (0:00:01.217) 0:00:20.202 ******* 2026-02-08 05:37:18.331693 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' 
server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-08 05:37:20.951361 | orchestrator | skipping: [testbed-node-0] 2026-02-08 05:37:20.951489 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 
3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-08 05:37:20.951525 | orchestrator | skipping: [testbed-node-1] 2026-02-08 05:37:20.951536 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server 
testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-08 05:37:20.951545 | orchestrator | skipping: [testbed-node-2] 2026-02-08 05:37:20.951553 | orchestrator | 2026-02-08 05:37:20.951563 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] *** 2026-02-08 05:37:20.951572 | orchestrator | Sunday 08 February 2026 05:37:18 +0000 (0:00:02.739) 0:00:22.942 ******* 2026-02-08 05:37:20.951602 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option 
srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-08 05:37:20.951620 | orchestrator | skipping: [testbed-node-0] 2026-02-08 05:37:20.951630 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 
'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-08 05:37:20.951638 | orchestrator | skipping: [testbed-node-1] 2026-02-08 05:37:20.951659 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 
testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-08 05:37:27.936964 | orchestrator | skipping: [testbed-node-2] 2026-02-08 05:37:27.937074 | orchestrator | 2026-02-08 05:37:27.937091 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] ***** 2026-02-08 05:37:27.937130 | orchestrator | Sunday 08 February 2026 05:37:20 +0000 (0:00:02.616) 0:00:25.558 ******* 2026-02-08 05:37:27.937147 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 
'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-08 05:37:27.937162 | orchestrator | skipping: [testbed-node-0] 2026-02-08 05:37:27.937188 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': 
'3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-08 05:37:27.937201 | orchestrator | skipping: [testbed-node-1] 2026-02-08 05:37:27.937245 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 
'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-08 05:37:27.937259 | orchestrator | skipping: [testbed-node-2] 2026-02-08 05:37:27.937270 | orchestrator | 2026-02-08 05:37:27.937282 | orchestrator | TASK [service-check-containers : mariadb | Check containers] ******************* 2026-02-08 05:37:27.937293 | orchestrator | Sunday 08 February 2026 05:37:24 +0000 (0:00:03.635) 0:00:29.193 ******* 2026-02-08 05:37:27.937330 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' 
server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-02-08 05:37:27.937354 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': 
{'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-02-08 05:37:32.056750 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout 
client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-02-08 05:37:32.056930 | orchestrator | 2026-02-08 05:37:32.056953 | orchestrator | TASK [service-check-containers : mariadb | Notify handlers to restart containers] *** 2026-02-08 05:37:32.056964 | orchestrator | Sunday 08 February 2026 05:37:27 +0000 (0:00:03.356) 0:00:32.550 ******* 2026-02-08 05:37:32.056974 | orchestrator | changed: [testbed-node-0] => { 2026-02-08 05:37:32.056985 | orchestrator |  "msg": "Notifying handlers" 2026-02-08 05:37:32.057015 | orchestrator | } 2026-02-08 05:37:32.057025 | orchestrator | changed: [testbed-node-1] => { 2026-02-08 05:37:32.057033 | orchestrator |  "msg": "Notifying handlers" 2026-02-08 05:37:32.057042 | orchestrator | } 2026-02-08 05:37:32.057051 | orchestrator | changed: [testbed-node-2] => { 2026-02-08 05:37:32.057060 | orchestrator |  "msg": "Notifying handlers" 2026-02-08 05:37:32.057068 | orchestrator | } 2026-02-08 05:37:32.057077 | orchestrator | 2026-02-08 05:37:32.057086 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-02-08 05:37:32.057095 | orchestrator | Sunday 08 February 2026 05:37:28 +0000 (0:00:00.379) 0:00:32.930 ******* 2026-02-08 05:37:32.057124 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 
'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-08 05:37:32.057137 | orchestrator | skipping: [testbed-node-0] 2026-02-08 05:37:32.057155 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-08 05:37:32.057171 | orchestrator | skipping: [testbed-node-2] 2026-02-08 05:37:32.057181 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-08 05:37:32.057191 | orchestrator | skipping: [testbed-node-1] 2026-02-08 05:37:32.057200 | orchestrator | 2026-02-08 05:37:32.057209 | orchestrator | TASK [mariadb : Checking for mariadb cluster] ********************************** 2026-02-08 05:37:32.057229 | orchestrator | Sunday 08 February 2026 05:37:32 +0000 (0:00:03.731) 0:00:36.661 ******* 2026-02-08 05:37:41.451078 | orchestrator | skipping: [testbed-node-0] 2026-02-08 05:37:41.451192 | orchestrator | skipping: [testbed-node-1] 2026-02-08 05:37:41.451206 | orchestrator | skipping: [testbed-node-2] 2026-02-08 05:37:41.451217 | orchestrator | 2026-02-08 
05:37:41.451229 | orchestrator | TASK [mariadb : Cleaning up temp file on localhost] ****************************
2026-02-08 05:37:41.451240 | orchestrator | Sunday 08 February 2026 05:37:32 +0000 (0:00:00.382) 0:00:37.043 *******
2026-02-08 05:37:41.451251 | orchestrator | skipping: [testbed-node-0]
2026-02-08 05:37:41.451260 | orchestrator |
2026-02-08 05:37:41.451270 | orchestrator | TASK [mariadb : Stop MariaDB containers] ***************************************
2026-02-08 05:37:41.451280 | orchestrator | Sunday 08 February 2026 05:37:32 +0000 (0:00:00.135) 0:00:37.178 *******
2026-02-08 05:37:41.451290 | orchestrator | skipping: [testbed-node-0]
2026-02-08 05:37:41.451300 | orchestrator | skipping: [testbed-node-1]
2026-02-08 05:37:41.451309 | orchestrator | skipping: [testbed-node-2]
2026-02-08 05:37:41.451319 | orchestrator |
2026-02-08 05:37:41.451329 | orchestrator | TASK [mariadb : Run MariaDB wsrep recovery] ************************************
2026-02-08 05:37:41.451339 | orchestrator | Sunday 08 February 2026 05:37:32 +0000 (0:00:00.357) 0:00:37.536 *******
2026-02-08 05:37:41.451348 | orchestrator | skipping: [testbed-node-0]
2026-02-08 05:37:41.451358 | orchestrator | skipping: [testbed-node-1]
2026-02-08 05:37:41.451367 | orchestrator | skipping: [testbed-node-2]
2026-02-08 05:37:41.451377 | orchestrator |
2026-02-08 05:37:41.451387 | orchestrator | TASK [mariadb : Copying MariaDB log file to /tmp] ******************************
2026-02-08 05:37:41.451417 | orchestrator | Sunday 08 February 2026 05:37:33 +0000 (0:00:00.606) 0:00:38.143 *******
2026-02-08 05:37:41.451427 | orchestrator | skipping: [testbed-node-0]
2026-02-08 05:37:41.451437 | orchestrator | skipping: [testbed-node-1]
2026-02-08 05:37:41.451454 | orchestrator | skipping: [testbed-node-2]
2026-02-08 05:37:41.451464 | orchestrator |
2026-02-08 05:37:41.451474 | orchestrator | TASK [mariadb : Get MariaDB wsrep recovery seqno] ******************************
2026-02-08 05:37:41.451484 | orchestrator | Sunday 08 February 2026 05:37:33 +0000 (0:00:00.330) 0:00:38.473 *******
2026-02-08 05:37:41.451507 | orchestrator | skipping: [testbed-node-0]
2026-02-08 05:37:41.451518 | orchestrator | skipping: [testbed-node-1]
2026-02-08 05:37:41.451527 | orchestrator | skipping: [testbed-node-2]
2026-02-08 05:37:41.451535 | orchestrator |
2026-02-08 05:37:41.451542 | orchestrator | TASK [mariadb : Removing MariaDB log file from /tmp] ***************************
2026-02-08 05:37:41.451550 | orchestrator | Sunday 08 February 2026 05:37:34 +0000 (0:00:00.330) 0:00:38.804 *******
2026-02-08 05:37:41.451558 | orchestrator | skipping: [testbed-node-0]
2026-02-08 05:37:41.451566 | orchestrator | skipping: [testbed-node-1]
2026-02-08 05:37:41.451574 | orchestrator | skipping: [testbed-node-2]
2026-02-08 05:37:41.451582 | orchestrator |
2026-02-08 05:37:41.451590 | orchestrator | TASK [mariadb : Registering MariaDB seqno variable] ****************************
2026-02-08 05:37:41.451598 | orchestrator | Sunday 08 February 2026 05:37:34 +0000 (0:00:00.351) 0:00:39.156 *******
2026-02-08 05:37:41.451606 | orchestrator | skipping: [testbed-node-0]
2026-02-08 05:37:41.451614 | orchestrator | skipping: [testbed-node-1]
2026-02-08 05:37:41.451622 | orchestrator | skipping: [testbed-node-2]
2026-02-08 05:37:41.451630 | orchestrator |
2026-02-08 05:37:41.451638 | orchestrator | TASK [mariadb : Comparing seqno value on all mariadb hosts] ********************
2026-02-08 05:37:41.451646 | orchestrator | Sunday 08 February 2026 05:37:35 +0000 (0:00:00.630) 0:00:39.786 *******
2026-02-08 05:37:41.451654 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-02-08 05:37:41.451662 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-02-08 05:37:41.451670 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-02-08 05:37:41.451678 | orchestrator | skipping: [testbed-node-0]
2026-02-08 05:37:41.451686 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)
2026-02-08 05:37:41.451694 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)
2026-02-08 05:37:41.451701 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)
2026-02-08 05:37:41.451709 | orchestrator | skipping: [testbed-node-1]
2026-02-08 05:37:41.451717 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)
2026-02-08 05:37:41.451724 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)
2026-02-08 05:37:41.451732 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)
2026-02-08 05:37:41.451740 | orchestrator | skipping: [testbed-node-2]
2026-02-08 05:37:41.451748 | orchestrator |
2026-02-08 05:37:41.451756 | orchestrator | TASK [mariadb : Writing hostname of host with the largest seqno to temp file] ***
2026-02-08 05:37:41.451764 | orchestrator | Sunday 08 February 2026 05:37:35 +0000 (0:00:00.425) 0:00:40.212 *******
2026-02-08 05:37:41.451771 | orchestrator | skipping: [testbed-node-0]
2026-02-08 05:37:41.451779 | orchestrator | skipping: [testbed-node-1]
2026-02-08 05:37:41.451787 | orchestrator | skipping: [testbed-node-2]
2026-02-08 05:37:41.451795 | orchestrator |
2026-02-08 05:37:41.451803 | orchestrator | TASK [mariadb : Registering mariadb_recover_inventory_name from temp file] *****
2026-02-08 05:37:41.451811 | orchestrator | Sunday 08 February 2026 05:37:35 +0000 (0:00:00.369) 0:00:40.581 *******
2026-02-08 05:37:41.451819 | orchestrator | skipping: [testbed-node-0]
2026-02-08 05:37:41.451826 | orchestrator | skipping: [testbed-node-1]
2026-02-08 05:37:41.451834 | orchestrator | skipping: [testbed-node-2]
2026-02-08 05:37:41.451842 | orchestrator |
2026-02-08 05:37:41.451850 | orchestrator | TASK [mariadb : Store bootstrap and master hostnames into facts] ***************
2026-02-08 05:37:41.451864 | orchestrator | Sunday 08 February 2026 05:37:36 +0000 (0:00:00.534) 0:00:41.116 *******
2026-02-08 05:37:41.451872 | orchestrator | skipping: [testbed-node-0]
2026-02-08 05:37:41.451880 | orchestrator | skipping: [testbed-node-1]
2026-02-08 05:37:41.451888 | orchestrator | skipping: [testbed-node-2]
2026-02-08 05:37:41.451919 | orchestrator |
2026-02-08 05:37:41.451930 | orchestrator | TASK [mariadb : Set grastate.dat file from MariaDB container in bootstrap host] ***
2026-02-08 05:37:41.451938 | orchestrator | Sunday 08 February 2026 05:37:36 +0000 (0:00:00.347) 0:00:41.463 *******
2026-02-08 05:37:41.451946 | orchestrator | skipping: [testbed-node-0]
2026-02-08 05:37:41.451955 | orchestrator | skipping: [testbed-node-1]
2026-02-08 05:37:41.451963 | orchestrator | skipping: [testbed-node-2]
2026-02-08 05:37:41.451970 | orchestrator |
2026-02-08 05:37:41.451978 | orchestrator | TASK [mariadb : Starting first MariaDB container] ******************************
2026-02-08 05:37:41.452001 | orchestrator | Sunday 08 February 2026 05:37:37 +0000 (0:00:00.346) 0:00:41.810 *******
2026-02-08 05:37:41.452010 | orchestrator | skipping: [testbed-node-0]
2026-02-08 05:37:41.452018 | orchestrator | skipping: [testbed-node-1]
2026-02-08 05:37:41.452026 | orchestrator | skipping: [testbed-node-2]
2026-02-08 05:37:41.452034 | orchestrator |
2026-02-08 05:37:41.452042 | orchestrator | TASK [mariadb : Wait for first MariaDB container] ******************************
2026-02-08 05:37:41.452050 | orchestrator | Sunday 08 February 2026 05:37:37 +0000 (0:00:00.380) 0:00:42.191 *******
2026-02-08 05:37:41.452058 | orchestrator | skipping: [testbed-node-0]
2026-02-08 05:37:41.452066 | orchestrator | skipping: [testbed-node-1]
2026-02-08 05:37:41.452073 | orchestrator | skipping: [testbed-node-2]
2026-02-08 05:37:41.452081 | orchestrator |
2026-02-08 05:37:41.452089 | orchestrator | TASK [mariadb : Set first MariaDB container as primary] ************************
2026-02-08 05:37:41.452097 | orchestrator | Sunday 08 February 2026 05:37:38 +0000 (0:00:00.580) 0:00:42.771 *******
2026-02-08 05:37:41.452105 |
orchestrator | skipping: [testbed-node-0] 2026-02-08 05:37:41.452113 | orchestrator | skipping: [testbed-node-1] 2026-02-08 05:37:41.452120 | orchestrator | skipping: [testbed-node-2] 2026-02-08 05:37:41.452128 | orchestrator | 2026-02-08 05:37:41.452136 | orchestrator | TASK [mariadb : Wait for MariaDB to become operational] ************************ 2026-02-08 05:37:41.452144 | orchestrator | Sunday 08 February 2026 05:37:38 +0000 (0:00:00.333) 0:00:43.105 ******* 2026-02-08 05:37:41.452152 | orchestrator | skipping: [testbed-node-0] 2026-02-08 05:37:41.452160 | orchestrator | skipping: [testbed-node-1] 2026-02-08 05:37:41.452168 | orchestrator | skipping: [testbed-node-2] 2026-02-08 05:37:41.452176 | orchestrator | 2026-02-08 05:37:41.452184 | orchestrator | TASK [mariadb : Restart slave MariaDB container(s)] **************************** 2026-02-08 05:37:41.452191 | orchestrator | Sunday 08 February 2026 05:37:38 +0000 (0:00:00.349) 0:00:43.454 ******* 2026-02-08 05:37:41.452208 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 
3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-08 05:37:41.452226 | orchestrator | skipping: [testbed-node-0] 2026-02-08 05:37:41.452242 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server 
testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-08 05:37:44.746284 | orchestrator | skipping: [testbed-node-1] 2026-02-08 05:37:44.746425 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 
inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-08 05:37:44.746467 | orchestrator | skipping: [testbed-node-2] 2026-02-08 05:37:44.746484 | orchestrator | 2026-02-08 05:37:44.746503 | orchestrator | TASK [mariadb : Wait for slave MariaDB] **************************************** 2026-02-08 05:37:44.746517 | orchestrator | Sunday 08 February 2026 05:37:41 +0000 (0:00:02.607) 0:00:46.062 ******* 2026-02-08 05:37:44.746527 | orchestrator | skipping: [testbed-node-0] 2026-02-08 05:37:44.746537 | orchestrator | skipping: [testbed-node-1] 2026-02-08 05:37:44.746547 | orchestrator | skipping: [testbed-node-2] 2026-02-08 05:37:44.746556 | orchestrator | 2026-02-08 05:37:44.746567 | orchestrator | TASK [mariadb : Restart master MariaDB container(s)] *************************** 2026-02-08 05:37:44.746576 | orchestrator | Sunday 08 February 2026 05:37:42 +0000 (0:00:00.579) 0:00:46.641 ******* 2026-02-08 05:37:44.746605 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-08 05:37:44.746618 | orchestrator | skipping: [testbed-node-0] 2026-02-08 05:37:44.746635 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', 
'/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-08 05:37:44.746652 | orchestrator | skipping: [testbed-node-1] 2026-02-08 05:37:44.746663 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 
'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-08 05:37:44.746674 | orchestrator | skipping: [testbed-node-2] 2026-02-08 05:37:44.746684 | orchestrator | 2026-02-08 05:37:44.746693 | orchestrator | TASK [mariadb : Wait for master mariadb] *************************************** 2026-02-08 05:37:44.746703 | orchestrator | Sunday 08 February 2026 05:37:44 +0000 (0:00:02.501) 0:00:49.142 ******* 2026-02-08 05:37:44.746719 | orchestrator | skipping: [testbed-node-0] 2026-02-08 05:39:45.653585 | orchestrator | skipping: [testbed-node-1] 2026-02-08 05:39:45.653714 | orchestrator | skipping: 
[testbed-node-2] 2026-02-08 05:39:45.653732 | orchestrator | 2026-02-08 05:39:45.653746 | orchestrator | TASK [service-check : mariadb | Get container facts] *************************** 2026-02-08 05:39:45.653759 | orchestrator | Sunday 08 February 2026 05:37:45 +0000 (0:00:00.768) 0:00:49.910 ******* 2026-02-08 05:39:45.653771 | orchestrator | skipping: [testbed-node-0] 2026-02-08 05:39:45.653782 | orchestrator | skipping: [testbed-node-1] 2026-02-08 05:39:45.653793 | orchestrator | skipping: [testbed-node-2] 2026-02-08 05:39:45.653804 | orchestrator | 2026-02-08 05:39:45.653816 | orchestrator | TASK [service-check : mariadb | Fail if containers are missing or not running] *** 2026-02-08 05:39:45.653828 | orchestrator | Sunday 08 February 2026 05:37:45 +0000 (0:00:00.642) 0:00:50.553 ******* 2026-02-08 05:39:45.653839 | orchestrator | skipping: [testbed-node-0] 2026-02-08 05:39:45.653868 | orchestrator | skipping: [testbed-node-1] 2026-02-08 05:39:45.653902 | orchestrator | skipping: [testbed-node-2] 2026-02-08 05:39:45.653914 | orchestrator | 2026-02-08 05:39:45.653953 | orchestrator | TASK [service-check : mariadb | Fail if containers are unhealthy] ************** 2026-02-08 05:39:45.653965 | orchestrator | Sunday 08 February 2026 05:37:46 +0000 (0:00:00.377) 0:00:50.930 ******* 2026-02-08 05:39:45.653976 | orchestrator | skipping: [testbed-node-0] 2026-02-08 05:39:45.653988 | orchestrator | skipping: [testbed-node-1] 2026-02-08 05:39:45.653999 | orchestrator | skipping: [testbed-node-2] 2026-02-08 05:39:45.654009 | orchestrator | 2026-02-08 05:39:45.654099 | orchestrator | TASK [mariadb : Wait for MariaDB service to be ready through VIP] ************** 2026-02-08 05:39:45.654111 | orchestrator | Sunday 08 February 2026 05:37:47 +0000 (0:00:01.054) 0:00:51.985 ******* 2026-02-08 05:39:45.654122 | orchestrator | skipping: [testbed-node-0] 2026-02-08 05:39:45.654133 | orchestrator | skipping: [testbed-node-1] 2026-02-08 05:39:45.654144 | orchestrator | skipping: 
[testbed-node-2] 2026-02-08 05:39:45.654155 | orchestrator | 2026-02-08 05:39:45.654166 | orchestrator | TASK [mariadb : Create MariaDB volume] ***************************************** 2026-02-08 05:39:45.654177 | orchestrator | Sunday 08 February 2026 05:37:48 +0000 (0:00:00.943) 0:00:52.929 ******* 2026-02-08 05:39:45.654188 | orchestrator | ok: [testbed-node-0] 2026-02-08 05:39:45.654200 | orchestrator | ok: [testbed-node-1] 2026-02-08 05:39:45.654211 | orchestrator | ok: [testbed-node-2] 2026-02-08 05:39:45.654222 | orchestrator | 2026-02-08 05:39:45.654232 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB volume availability] ************* 2026-02-08 05:39:45.654243 | orchestrator | Sunday 08 February 2026 05:37:49 +0000 (0:00:00.907) 0:00:53.836 ******* 2026-02-08 05:39:45.654254 | orchestrator | ok: [testbed-node-0] 2026-02-08 05:39:45.654265 | orchestrator | ok: [testbed-node-1] 2026-02-08 05:39:45.654276 | orchestrator | ok: [testbed-node-2] 2026-02-08 05:39:45.654286 | orchestrator | 2026-02-08 05:39:45.654297 | orchestrator | TASK [mariadb : Establish whether the cluster has already existed] ************* 2026-02-08 05:39:45.654308 | orchestrator | Sunday 08 February 2026 05:37:49 +0000 (0:00:00.374) 0:00:54.211 ******* 2026-02-08 05:39:45.654319 | orchestrator | ok: [testbed-node-0] 2026-02-08 05:39:45.654330 | orchestrator | ok: [testbed-node-1] 2026-02-08 05:39:45.654340 | orchestrator | ok: [testbed-node-2] 2026-02-08 05:39:45.654351 | orchestrator | 2026-02-08 05:39:45.654362 | orchestrator | TASK [mariadb : Check MariaDB service port liveness] *************************** 2026-02-08 05:39:45.654373 | orchestrator | Sunday 08 February 2026 05:37:49 +0000 (0:00:00.368) 0:00:54.580 ******* 2026-02-08 05:39:45.654383 | orchestrator | ok: [testbed-node-0] 2026-02-08 05:39:45.654394 | orchestrator | ok: [testbed-node-1] 2026-02-08 05:39:45.654405 | orchestrator | ok: [testbed-node-2] 2026-02-08 05:39:45.654416 | orchestrator | 2026-02-08 
05:39:45.654426 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service port liveness] *********** 2026-02-08 05:39:45.654437 | orchestrator | Sunday 08 February 2026 05:37:51 +0000 (0:00:01.135) 0:00:55.715 ******* 2026-02-08 05:39:45.654448 | orchestrator | ok: [testbed-node-0] 2026-02-08 05:39:45.654459 | orchestrator | ok: [testbed-node-1] 2026-02-08 05:39:45.654469 | orchestrator | ok: [testbed-node-2] 2026-02-08 05:39:45.654480 | orchestrator | 2026-02-08 05:39:45.654491 | orchestrator | TASK [mariadb : Fail on existing but stopped cluster] ************************** 2026-02-08 05:39:45.654502 | orchestrator | Sunday 08 February 2026 05:37:51 +0000 (0:00:00.403) 0:00:56.118 ******* 2026-02-08 05:39:45.654513 | orchestrator | skipping: [testbed-node-0] 2026-02-08 05:39:45.654524 | orchestrator | skipping: [testbed-node-1] 2026-02-08 05:39:45.654534 | orchestrator | skipping: [testbed-node-2] 2026-02-08 05:39:45.654545 | orchestrator | 2026-02-08 05:39:45.654556 | orchestrator | TASK [mariadb : Check MariaDB service WSREP sync status] *********************** 2026-02-08 05:39:45.654567 | orchestrator | Sunday 08 February 2026 05:37:51 +0000 (0:00:00.369) 0:00:56.488 ******* 2026-02-08 05:39:45.654578 | orchestrator | ok: [testbed-node-1] 2026-02-08 05:39:45.654589 | orchestrator | ok: [testbed-node-2] 2026-02-08 05:39:45.654599 | orchestrator | ok: [testbed-node-0] 2026-02-08 05:39:45.654610 | orchestrator | 2026-02-08 05:39:45.654631 | orchestrator | TASK [mariadb : Extract MariaDB service WSREP sync status] ********************* 2026-02-08 05:39:45.654642 | orchestrator | Sunday 08 February 2026 05:37:54 +0000 (0:00:02.475) 0:00:58.964 ******* 2026-02-08 05:39:45.654653 | orchestrator | ok: [testbed-node-0] 2026-02-08 05:39:45.654664 | orchestrator | ok: [testbed-node-1] 2026-02-08 05:39:45.654674 | orchestrator | ok: [testbed-node-2] 2026-02-08 05:39:45.654685 | orchestrator | 2026-02-08 05:39:45.654696 | orchestrator | TASK [mariadb : Divide 
hosts by their MariaDB service WSREP sync status] ******* 2026-02-08 05:39:45.654706 | orchestrator | Sunday 08 February 2026 05:37:54 +0000 (0:00:00.636) 0:00:59.601 ******* 2026-02-08 05:39:45.654717 | orchestrator | ok: [testbed-node-0] 2026-02-08 05:39:45.654728 | orchestrator | ok: [testbed-node-1] 2026-02-08 05:39:45.654739 | orchestrator | ok: [testbed-node-2] 2026-02-08 05:39:45.654749 | orchestrator | 2026-02-08 05:39:45.654760 | orchestrator | TASK [mariadb : Fail when MariaDB services are not synced across the whole cluster] *** 2026-02-08 05:39:45.654771 | orchestrator | Sunday 08 February 2026 05:37:55 +0000 (0:00:00.363) 0:00:59.964 ******* 2026-02-08 05:39:45.654782 | orchestrator | skipping: [testbed-node-0] 2026-02-08 05:39:45.654793 | orchestrator | skipping: [testbed-node-1] 2026-02-08 05:39:45.654804 | orchestrator | skipping: [testbed-node-2] 2026-02-08 05:39:45.654815 | orchestrator | 2026-02-08 05:39:45.654826 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-02-08 05:39:45.654837 | orchestrator | Sunday 08 February 2026 05:37:56 +0000 (0:00:00.735) 0:01:00.700 ******* 2026-02-08 05:39:45.654848 | orchestrator | skipping: [testbed-node-0] 2026-02-08 05:39:45.654859 | orchestrator | skipping: [testbed-node-1] 2026-02-08 05:39:45.654870 | orchestrator | skipping: [testbed-node-2] 2026-02-08 05:39:45.654898 | orchestrator | 2026-02-08 05:39:45.654910 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-02-08 05:39:45.654942 | orchestrator | Sunday 08 February 2026 05:37:56 +0000 (0:00:00.589) 0:01:01.289 ******* 2026-02-08 05:39:45.654953 | orchestrator | skipping: [testbed-node-0] 2026-02-08 05:39:45.654965 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_handler_task_start) in callback 2026-02-08 05:39:45.654976 | orchestrator | plugin (): 'NoneType' object is not subscriptable 2026-02-08 05:39:45.654998 | orchestrator | 
skipping: [testbed-node-1] 2026-02-08 05:39:45.655009 | orchestrator | skipping: [testbed-node-2] 2026-02-08 05:39:45.655020 | orchestrator | 2026-02-08 05:39:45.655038 | orchestrator | RUNNING HANDLER [mariadb : Restart MariaDB on existing cluster members] ******** 2026-02-08 05:39:45.655049 | orchestrator | Sunday 08 February 2026 05:37:57 +0000 (0:00:00.832) 0:01:02.122 ******* 2026-02-08 05:39:45.655060 | orchestrator | changed: [testbed-node-0] 2026-02-08 05:39:45.655071 | orchestrator | changed: [testbed-node-1] 2026-02-08 05:39:45.655082 | orchestrator | changed: [testbed-node-2] 2026-02-08 05:39:45.655093 | orchestrator | 2026-02-08 05:39:45.655103 | orchestrator | RUNNING HANDLER [mariadb : Start MariaDB on new nodes] ************************* 2026-02-08 05:39:45.655114 | orchestrator | Sunday 08 February 2026 05:37:58 +0000 (0:00:00.620) 0:01:02.743 ******* 2026-02-08 05:39:45.655125 | orchestrator | skipping: [testbed-node-0] 2026-02-08 05:39:45.655136 | orchestrator | skipping: [testbed-node-1] 2026-02-08 05:39:45.655147 | orchestrator | skipping: [testbed-node-2] 2026-02-08 05:39:45.655158 | orchestrator | 2026-02-08 05:39:45.655169 | orchestrator | PLAY [Restart mariadb services] ************************************************ 2026-02-08 05:39:45.655180 | orchestrator | 2026-02-08 05:39:45.655191 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2026-02-08 05:39:45.655202 | orchestrator | Sunday 08 February 2026 05:37:58 +0000 (0:00:00.803) 0:01:03.546 ******* 2026-02-08 05:39:45.655212 | orchestrator | changed: [testbed-node-0] 2026-02-08 05:39:45.655223 | orchestrator | 2026-02-08 05:39:45.655234 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2026-02-08 05:39:45.655245 | orchestrator | Sunday 08 February 2026 05:38:25 +0000 (0:00:26.837) 0:01:30.384 ******* 2026-02-08 05:39:45.655263 | orchestrator | ok: [testbed-node-0] 2026-02-08 05:39:45.655274 | 
orchestrator | 2026-02-08 05:39:45.655285 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2026-02-08 05:39:45.655296 | orchestrator | Sunday 08 February 2026 05:38:31 +0000 (0:00:05.645) 0:01:36.030 ******* 2026-02-08 05:39:45.655306 | orchestrator | ok: [testbed-node-0] 2026-02-08 05:39:45.655317 | orchestrator | 2026-02-08 05:39:45.655328 | orchestrator | PLAY [Restart mariadb services] ************************************************ 2026-02-08 05:39:45.655339 | orchestrator | 2026-02-08 05:39:45.655350 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2026-02-08 05:39:45.655361 | orchestrator | Sunday 08 February 2026 05:38:34 +0000 (0:00:02.701) 0:01:38.731 ******* 2026-02-08 05:39:45.655372 | orchestrator | changed: [testbed-node-1] 2026-02-08 05:39:45.655383 | orchestrator | 2026-02-08 05:39:45.655393 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2026-02-08 05:39:45.655404 | orchestrator | Sunday 08 February 2026 05:38:58 +0000 (0:00:24.726) 0:02:03.457 ******* 2026-02-08 05:39:45.655415 | orchestrator | ok: [testbed-node-1] 2026-02-08 05:39:45.655426 | orchestrator | 2026-02-08 05:39:45.655436 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2026-02-08 05:39:45.655447 | orchestrator | Sunday 08 February 2026 05:39:04 +0000 (0:00:05.622) 0:02:09.079 ******* 2026-02-08 05:39:45.655458 | orchestrator | ok: [testbed-node-1] 2026-02-08 05:39:45.655469 | orchestrator | 2026-02-08 05:39:45.655479 | orchestrator | PLAY [Restart mariadb services] ************************************************ 2026-02-08 05:39:45.655490 | orchestrator | 2026-02-08 05:39:45.655501 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2026-02-08 05:39:45.655512 | orchestrator | Sunday 08 February 2026 05:39:07 +0000 (0:00:03.040) 0:02:12.120 
******* 2026-02-08 05:39:45.655523 | orchestrator | changed: [testbed-node-2] 2026-02-08 05:39:45.655534 | orchestrator | 2026-02-08 05:39:45.655545 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2026-02-08 05:39:45.655556 | orchestrator | Sunday 08 February 2026 05:39:33 +0000 (0:00:25.775) 0:02:37.896 ******* 2026-02-08 05:39:45.655566 | orchestrator | ok: [testbed-node-2] 2026-02-08 05:39:45.655577 | orchestrator | 2026-02-08 05:39:45.655588 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2026-02-08 05:39:45.655599 | orchestrator | Sunday 08 February 2026 05:39:38 +0000 (0:00:05.593) 0:02:43.489 ******* 2026-02-08 05:39:45.655610 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_start 2026-02-08 05:39:45.655621 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2026-02-08 05:39:45.655632 | orchestrator | mariadb_bootstrap_restart 2026-02-08 05:39:45.655643 | orchestrator | ok: [testbed-node-2] 2026-02-08 05:39:45.655653 | orchestrator | 2026-02-08 05:39:45.655664 | orchestrator | PLAY [Start mariadb services] ************************************************** 2026-02-08 05:39:45.655675 | orchestrator | skipping: no hosts matched 2026-02-08 05:39:45.655686 | orchestrator | 2026-02-08 05:39:45.655697 | orchestrator | PLAY [Restart bootstrap mariadb service] *************************************** 2026-02-08 05:39:45.655708 | orchestrator | skipping: no hosts matched 2026-02-08 05:39:45.655719 | orchestrator | 2026-02-08 05:39:45.655730 | orchestrator | PLAY [Apply mariadb post-configuration] **************************************** 2026-02-08 05:39:45.655740 | orchestrator | 2026-02-08 05:39:45.655751 | orchestrator | TASK [Include mariadb post-deploy.yml] ***************************************** 2026-02-08 05:39:45.655762 | orchestrator | Sunday 08 February 2026 05:39:42 +0000 (0:00:03.342) 0:02:46.832 
******* 2026-02-08 05:39:45.655820 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-08 05:39:45.655832 | orchestrator | 2026-02-08 05:39:45.655843 | orchestrator | TASK [mariadb : Creating shard root mysql user] ******************************** 2026-02-08 05:39:45.655853 | orchestrator | Sunday 08 February 2026 05:39:43 +0000 (0:00:01.104) 0:02:47.936 ******* 2026-02-08 05:39:45.655864 | orchestrator | skipping: [testbed-node-1] 2026-02-08 05:39:45.655876 | orchestrator | skipping: [testbed-node-2] 2026-02-08 05:39:45.655902 | orchestrator | ok: [testbed-node-0] 2026-02-08 05:40:24.851290 | orchestrator | 2026-02-08 05:40:24.851396 | orchestrator | TASK [mariadb : Creating mysql monitor user] *********************************** 2026-02-08 05:40:24.851413 | orchestrator | Sunday 08 February 2026 05:39:45 +0000 (0:00:02.323) 0:02:50.260 ******* 2026-02-08 05:40:24.851423 | orchestrator | skipping: [testbed-node-1] 2026-02-08 05:40:24.851431 | orchestrator | skipping: [testbed-node-2] 2026-02-08 05:40:24.851437 | orchestrator | changed: [testbed-node-0] 2026-02-08 05:40:24.851443 | orchestrator | 2026-02-08 05:40:24.851450 | orchestrator | TASK [mariadb : Creating database backup user and setting permissions] ********* 2026-02-08 05:40:24.851457 | orchestrator | Sunday 08 February 2026 05:39:47 +0000 (0:00:02.217) 0:02:52.477 ******* 2026-02-08 05:40:24.851463 | orchestrator | skipping: [testbed-node-1] 2026-02-08 05:40:24.851469 | orchestrator | skipping: [testbed-node-2] 2026-02-08 05:40:24.851475 | orchestrator | ok: [testbed-node-0] 2026-02-08 05:40:24.851481 | orchestrator | 2026-02-08 05:40:24.851500 | orchestrator | TASK [mariadb : Granting permissions on Mariabackup database to backup user] *** 2026-02-08 05:40:24.851506 | orchestrator | Sunday 08 February 2026 05:39:50 +0000 (0:00:02.239) 0:02:54.717 ******* 2026-02-08 05:40:24.851512 | orchestrator | skipping: [testbed-node-1] 2026-02-08 05:40:24.851518 | 
orchestrator | skipping: [testbed-node-2] 2026-02-08 05:40:24.851524 | orchestrator | changed: [testbed-node-0] 2026-02-08 05:40:24.851531 | orchestrator | 2026-02-08 05:40:24.851537 | orchestrator | TASK [service-check : mariadb | Get container facts] *************************** 2026-02-08 05:40:24.851543 | orchestrator | Sunday 08 February 2026 05:39:52 +0000 (0:00:02.309) 0:02:57.026 ******* 2026-02-08 05:40:24.851549 | orchestrator | ok: [testbed-node-1] 2026-02-08 05:40:24.851555 | orchestrator | ok: [testbed-node-0] 2026-02-08 05:40:24.851563 | orchestrator | ok: [testbed-node-2] 2026-02-08 05:40:24.851573 | orchestrator | 2026-02-08 05:40:24.851582 | orchestrator | TASK [service-check : mariadb | Fail if containers are missing or not running] *** 2026-02-08 05:40:24.851594 | orchestrator | Sunday 08 February 2026 05:39:59 +0000 (0:00:06.760) 0:03:03.787 ******* 2026-02-08 05:40:24.851602 | orchestrator | skipping: [testbed-node-0] 2026-02-08 05:40:24.851611 | orchestrator | skipping: [testbed-node-1] 2026-02-08 05:40:24.851621 | orchestrator | skipping: [testbed-node-2] 2026-02-08 05:40:24.851632 | orchestrator | 2026-02-08 05:40:24.851641 | orchestrator | TASK [service-check : mariadb | Fail if containers are unhealthy] ************** 2026-02-08 05:40:24.851650 | orchestrator | Sunday 08 February 2026 05:40:01 +0000 (0:00:02.733) 0:03:06.521 ******* 2026-02-08 05:40:24.851660 | orchestrator | skipping: [testbed-node-0] 2026-02-08 05:40:24.851670 | orchestrator | skipping: [testbed-node-1] 2026-02-08 05:40:24.851680 | orchestrator | skipping: [testbed-node-2] 2026-02-08 05:40:24.851690 | orchestrator | 2026-02-08 05:40:24.851699 | orchestrator | TASK [mariadb : Wait for MariaDB service to be ready through VIP] ************** 2026-02-08 05:40:24.851710 | orchestrator | Sunday 08 February 2026 05:40:02 +0000 (0:00:00.831) 0:03:07.352 ******* 2026-02-08 05:40:24.851718 | orchestrator | ok: [testbed-node-1] 2026-02-08 05:40:24.851725 | orchestrator | ok: 
[testbed-node-0] 2026-02-08 05:40:24.851730 | orchestrator | ok: [testbed-node-2] 2026-02-08 05:40:24.851736 | orchestrator | 2026-02-08 05:40:24.851742 | orchestrator | TASK [Include mariadb post-upgrade.yml] **************************************** 2026-02-08 05:40:24.851748 | orchestrator | Sunday 08 February 2026 05:40:05 +0000 (0:00:02.710) 0:03:10.062 ******* 2026-02-08 05:40:24.851754 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-08 05:40:24.851760 | orchestrator | 2026-02-08 05:40:24.851766 | orchestrator | TASK [mariadb : Run upgrade in MariaDB container] ****************************** 2026-02-08 05:40:24.851771 | orchestrator | Sunday 08 February 2026 05:40:06 +0000 (0:00:01.238) 0:03:11.301 ******* 2026-02-08 05:40:24.851777 | orchestrator | changed: [testbed-node-0] 2026-02-08 05:40:24.851783 | orchestrator | changed: [testbed-node-2] 2026-02-08 05:40:24.851789 | orchestrator | changed: [testbed-node-1] 2026-02-08 05:40:24.851795 | orchestrator | 2026-02-08 05:40:24.851801 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-08 05:40:24.851827 | orchestrator | testbed-node-0 : ok=34  changed=8  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2026-02-08 05:40:24.851835 | orchestrator | testbed-node-1 : ok=26  changed=6  unreachable=0 failed=0 skipped=42  rescued=0 ignored=0 2026-02-08 05:40:24.851842 | orchestrator | testbed-node-2 : ok=26  changed=6  unreachable=0 failed=0 skipped=42  rescued=0 ignored=0 2026-02-08 05:40:24.851849 | orchestrator | 2026-02-08 05:40:24.851858 | orchestrator | 2026-02-08 05:40:24.851869 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-08 05:40:24.851878 | orchestrator | Sunday 08 February 2026 05:40:24 +0000 (0:00:17.673) 0:03:28.975 ******* 2026-02-08 05:40:24.851889 | orchestrator | 
=============================================================================== 2026-02-08 05:40:24.851898 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 77.34s 2026-02-08 05:40:24.851908 | orchestrator | mariadb : Run upgrade in MariaDB container ----------------------------- 17.67s 2026-02-08 05:40:24.851917 | orchestrator | mariadb : Wait for MariaDB service port liveness ----------------------- 16.86s 2026-02-08 05:40:24.851927 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 9.08s 2026-02-08 05:40:24.851960 | orchestrator | service-check : mariadb | Get container facts --------------------------- 6.76s 2026-02-08 05:40:24.851970 | orchestrator | mariadb : Copying over galera.cnf --------------------------------------- 4.22s 2026-02-08 05:40:24.851980 | orchestrator | service-check-containers : Include tasks -------------------------------- 3.73s 2026-02-08 05:40:24.851990 | orchestrator | mariadb : Copying over config.json files for services ------------------- 3.64s 2026-02-08 05:40:24.852000 | orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS key ----- 3.64s 2026-02-08 05:40:24.852008 | orchestrator | service-check-containers : mariadb | Check containers ------------------- 3.36s 2026-02-08 05:40:24.852035 | orchestrator | mariadb : Ensuring config directories exist ----------------------------- 3.30s 2026-02-08 05:40:24.852047 | orchestrator | service-cert-copy : mariadb | Copying over extra CA certificates -------- 2.74s 2026-02-08 05:40:24.852058 | orchestrator | service-check : mariadb | Fail if containers are missing or not running --- 2.73s 2026-02-08 05:40:24.852067 | orchestrator | mariadb : Wait for MariaDB service to be ready through VIP -------------- 2.71s 2026-02-08 05:40:24.852077 | orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS certificate --- 2.62s 2026-02-08 05:40:24.852087 | orchestrator | mariadb 
: Restart slave MariaDB container(s) ---------------------------- 2.61s 2026-02-08 05:40:24.852096 | orchestrator | mariadb : Restart master MariaDB container(s) --------------------------- 2.50s 2026-02-08 05:40:24.852113 | orchestrator | mariadb : Check MariaDB service WSREP sync status ----------------------- 2.48s 2026-02-08 05:40:24.852123 | orchestrator | mariadb : Creating shard root mysql user -------------------------------- 2.32s 2026-02-08 05:40:24.852133 | orchestrator | mariadb : Granting permissions on Mariabackup database to backup user --- 2.31s 2026-02-08 05:40:25.182481 | orchestrator | + osism apply -a upgrade rabbitmq 2026-02-08 05:40:27.243591 | orchestrator | 2026-02-08 05:40:27 | INFO  | Task 2c2824e0-1cea-4ec0-b76e-6fe82e19037a (rabbitmq) was prepared for execution. 2026-02-08 05:40:27.243688 | orchestrator | 2026-02-08 05:40:27 | INFO  | It takes a moment until task 2c2824e0-1cea-4ec0-b76e-6fe82e19037a (rabbitmq) has been started and output is visible here. 2026-02-08 05:41:12.607024 | orchestrator | 2026-02-08 05:41:12.607142 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-08 05:41:12.607159 | orchestrator | 2026-02-08 05:41:12.607172 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-08 05:41:12.607184 | orchestrator | Sunday 08 February 2026 05:40:33 +0000 (0:00:01.600) 0:00:01.600 ******* 2026-02-08 05:41:12.607195 | orchestrator | ok: [testbed-node-0] 2026-02-08 05:41:12.607231 | orchestrator | ok: [testbed-node-1] 2026-02-08 05:41:12.607243 | orchestrator | ok: [testbed-node-2] 2026-02-08 05:41:12.607253 | orchestrator | 2026-02-08 05:41:12.607264 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-08 05:41:12.607275 | orchestrator | Sunday 08 February 2026 05:40:35 +0000 (0:00:01.843) 0:00:03.444 ******* 2026-02-08 05:41:12.607286 | orchestrator | ok: [testbed-node-0] 
=> (item=enable_rabbitmq_True) 2026-02-08 05:41:12.607298 | orchestrator | ok: [testbed-node-1] => (item=enable_rabbitmq_True) 2026-02-08 05:41:12.607309 | orchestrator | ok: [testbed-node-2] => (item=enable_rabbitmq_True) 2026-02-08 05:41:12.607320 | orchestrator | 2026-02-08 05:41:12.607330 | orchestrator | PLAY [Apply role rabbitmq] ***************************************************** 2026-02-08 05:41:12.607341 | orchestrator | 2026-02-08 05:41:12.607353 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2026-02-08 05:41:12.607364 | orchestrator | Sunday 08 February 2026 05:40:36 +0000 (0:00:01.955) 0:00:05.400 ******* 2026-02-08 05:41:12.607376 | orchestrator | included: /ansible/roles/rabbitmq/tasks/upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-08 05:41:12.607388 | orchestrator | 2026-02-08 05:41:12.607399 | orchestrator | TASK [rabbitmq : Get container facts] ****************************************** 2026-02-08 05:41:12.607409 | orchestrator | Sunday 08 February 2026 05:40:39 +0000 (0:00:02.733) 0:00:08.133 ******* 2026-02-08 05:41:12.607420 | orchestrator | ok: [testbed-node-0] 2026-02-08 05:41:12.607431 | orchestrator | 2026-02-08 05:41:12.607442 | orchestrator | TASK [rabbitmq : Get current RabbitMQ version] ********************************* 2026-02-08 05:41:12.607453 | orchestrator | Sunday 08 February 2026 05:40:42 +0000 (0:00:02.335) 0:00:10.469 ******* 2026-02-08 05:41:12.607463 | orchestrator | ok: [testbed-node-0] 2026-02-08 05:41:12.607474 | orchestrator | 2026-02-08 05:41:12.607485 | orchestrator | TASK [rabbitmq : Get new RabbitMQ version] ************************************* 2026-02-08 05:41:12.607496 | orchestrator | Sunday 08 February 2026 05:40:45 +0000 (0:00:03.334) 0:00:13.803 ******* 2026-02-08 05:41:12.607507 | orchestrator | changed: [testbed-node-0] 2026-02-08 05:41:12.607519 | orchestrator | 2026-02-08 05:41:12.607529 | orchestrator | TASK [rabbitmq : Check if 
running RabbitMQ is at most one version behind] ****** 2026-02-08 05:41:12.607540 | orchestrator | Sunday 08 February 2026 05:40:55 +0000 (0:00:10.336) 0:00:24.140 ******* 2026-02-08 05:41:12.607551 | orchestrator | ok: [testbed-node-0] => { 2026-02-08 05:41:12.607562 | orchestrator |  "changed": false, 2026-02-08 05:41:12.607573 | orchestrator |  "msg": "All assertions passed" 2026-02-08 05:41:12.607584 | orchestrator | } 2026-02-08 05:41:12.607595 | orchestrator | 2026-02-08 05:41:12.607606 | orchestrator | TASK [rabbitmq : Catch when RabbitMQ is being downgraded] ********************** 2026-02-08 05:41:12.607616 | orchestrator | Sunday 08 February 2026 05:40:57 +0000 (0:00:01.350) 0:00:25.491 ******* 2026-02-08 05:41:12.607627 | orchestrator | ok: [testbed-node-0] => { 2026-02-08 05:41:12.607638 | orchestrator |  "changed": false, 2026-02-08 05:41:12.607649 | orchestrator |  "msg": "All assertions passed" 2026-02-08 05:41:12.607659 | orchestrator | } 2026-02-08 05:41:12.607670 | orchestrator | 2026-02-08 05:41:12.607681 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2026-02-08 05:41:12.607692 | orchestrator | Sunday 08 February 2026 05:40:58 +0000 (0:00:01.657) 0:00:27.149 ******* 2026-02-08 05:41:12.607703 | orchestrator | included: /ansible/roles/rabbitmq/tasks/remove-ha-all-policy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-08 05:41:12.607713 | orchestrator | 2026-02-08 05:41:12.607724 | orchestrator | TASK [rabbitmq : Get container facts] ****************************************** 2026-02-08 05:41:12.607735 | orchestrator | Sunday 08 February 2026 05:41:00 +0000 (0:00:01.793) 0:00:28.942 ******* 2026-02-08 05:41:12.607745 | orchestrator | ok: [testbed-node-0] 2026-02-08 05:41:12.607756 | orchestrator | 2026-02-08 05:41:12.607767 | orchestrator | TASK [rabbitmq : List RabbitMQ policies] *************************************** 2026-02-08 05:41:12.607777 | orchestrator | Sunday 08 February 
2026 05:41:02 +0000 (0:00:02.239) 0:00:31.182 ******* 2026-02-08 05:41:12.607796 | orchestrator | ok: [testbed-node-0] 2026-02-08 05:41:12.607807 | orchestrator | 2026-02-08 05:41:12.607818 | orchestrator | TASK [rabbitmq : Remove ha-all policy from RabbitMQ] *************************** 2026-02-08 05:41:12.607828 | orchestrator | Sunday 08 February 2026 05:41:06 +0000 (0:00:03.322) 0:00:34.504 ******* 2026-02-08 05:41:12.607839 | orchestrator | skipping: [testbed-node-0] 2026-02-08 05:41:12.607850 | orchestrator | 2026-02-08 05:41:12.607861 | orchestrator | TASK [rabbitmq : Ensuring config directories exist] **************************** 2026-02-08 05:41:12.607871 | orchestrator | Sunday 08 February 2026 05:41:08 +0000 (0:00:01.935) 0:00:36.440 ******* 2026-02-08 05:41:12.607922 | orchestrator | ok: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-08 05:41:12.607939 | orchestrator | ok: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 
'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-08 05:41:12.607979 | orchestrator | ok: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': 
{'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-08 05:41:12.607995 | orchestrator | 2026-02-08 05:41:12.608006 | orchestrator | TASK [rabbitmq : Copying over config.json files for services] ****************** 2026-02-08 05:41:12.608017 | orchestrator | Sunday 08 February 2026 05:41:09 +0000 (0:00:01.943) 0:00:38.384 ******* 2026-02-08 05:41:12.608042 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-08 05:41:12.608064 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-08 05:41:32.037937 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-08 05:41:32.038195 | orchestrator | 2026-02-08 05:41:32.038221 | orchestrator | TASK [rabbitmq : Copying over rabbitmq-env.conf] ******************************* 2026-02-08 05:41:32.038236 | orchestrator | Sunday 08 February 2026 05:41:12 +0000 
(0:00:02.623) 0:00:41.008 ******* 2026-02-08 05:41:32.038250 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2026-02-08 05:41:32.038265 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2026-02-08 05:41:32.038277 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2026-02-08 05:41:32.038291 | orchestrator | 2026-02-08 05:41:32.038304 | orchestrator | TASK [rabbitmq : Copying over rabbitmq.conf] *********************************** 2026-02-08 05:41:32.038340 | orchestrator | Sunday 08 February 2026 05:41:14 +0000 (0:00:02.407) 0:00:43.415 ******* 2026-02-08 05:41:32.038354 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2026-02-08 05:41:32.038368 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2026-02-08 05:41:32.038381 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2026-02-08 05:41:32.038394 | orchestrator | 2026-02-08 05:41:32.038407 | orchestrator | TASK [rabbitmq : Copying over erl_inetrc] ************************************** 2026-02-08 05:41:32.038420 | orchestrator | Sunday 08 February 2026 05:41:18 +0000 (0:00:03.013) 0:00:46.428 ******* 2026-02-08 05:41:32.038433 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2026-02-08 05:41:32.038447 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2026-02-08 05:41:32.038459 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2026-02-08 05:41:32.038472 | orchestrator | 2026-02-08 05:41:32.038485 | orchestrator | TASK [rabbitmq : Copying over advanced.config] ********************************* 2026-02-08 05:41:32.038498 | orchestrator | Sunday 08 
February 2026 05:41:20 +0000 (0:00:02.281) 0:00:48.710 ******* 2026-02-08 05:41:32.038511 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2026-02-08 05:41:32.038523 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2026-02-08 05:41:32.038536 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2026-02-08 05:41:32.038549 | orchestrator | 2026-02-08 05:41:32.038571 | orchestrator | TASK [rabbitmq : Copying over definitions.json] ******************************** 2026-02-08 05:41:32.038585 | orchestrator | Sunday 08 February 2026 05:41:22 +0000 (0:00:02.408) 0:00:51.119 ******* 2026-02-08 05:41:32.038599 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2026-02-08 05:41:32.038612 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2026-02-08 05:41:32.038622 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2026-02-08 05:41:32.038630 | orchestrator | 2026-02-08 05:41:32.038638 | orchestrator | TASK [rabbitmq : Copying over enabled_plugins] ********************************* 2026-02-08 05:41:32.038646 | orchestrator | Sunday 08 February 2026 05:41:25 +0000 (0:00:02.370) 0:00:53.490 ******* 2026-02-08 05:41:32.038653 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2026-02-08 05:41:32.038661 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2026-02-08 05:41:32.038669 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2026-02-08 05:41:32.038676 | orchestrator | 2026-02-08 05:41:32.038684 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2026-02-08 05:41:32.038692 | 
orchestrator | Sunday 08 February 2026 05:41:27 +0000 (0:00:02.615) 0:00:56.106 ******* 2026-02-08 05:41:32.038700 | orchestrator | included: /ansible/roles/rabbitmq/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-08 05:41:32.038708 | orchestrator | 2026-02-08 05:41:32.038734 | orchestrator | TASK [service-cert-copy : rabbitmq | Copying over extra CA certificates] ******* 2026-02-08 05:41:32.038742 | orchestrator | Sunday 08 February 2026 05:41:29 +0000 (0:00:01.766) 0:00:57.873 ******* 2026-02-08 05:41:32.038752 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-08 05:41:32.038775 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 
'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-08 05:41:32.038796 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-08 05:41:32.038810 | orchestrator | 2026-02-08 05:41:32.038824 | orchestrator | TASK [service-cert-copy : rabbitmq | 
Copying over backend internal TLS certificate] *** 2026-02-08 05:41:32.038837 | orchestrator | Sunday 08 February 2026 05:41:31 +0000 (0:00:02.328) 0:01:00.202 ******* 2026-02-08 05:41:32.038863 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-02-08 05:41:41.239296 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-02-08 05:41:41.239411 | orchestrator | skipping: [testbed-node-0] 2026-02-08 05:41:41.239429 | orchestrator | skipping: [testbed-node-1] 2026-02-08 05:41:41.239443 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-02-08 05:41:41.239456 | orchestrator | skipping: [testbed-node-2] 2026-02-08 05:41:41.239468 | orchestrator | 2026-02-08 05:41:41.239481 | orchestrator | TASK [service-cert-copy : rabbitmq | Copying over backend internal TLS key] **** 2026-02-08 05:41:41.239510 | orchestrator | Sunday 08 February 2026 05:41:33 +0000 (0:00:01.502) 
0:01:01.705 ******* 2026-02-08 05:41:41.239526 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-02-08 05:41:41.239572 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-02-08 05:41:41.239625 | orchestrator | skipping: [testbed-node-0] 2026-02-08 05:41:41.239648 | orchestrator | skipping: [testbed-node-1] 2026-02-08 05:41:41.239668 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-02-08 05:41:41.239688 | orchestrator | skipping: [testbed-node-2] 2026-02-08 05:41:41.239705 | orchestrator | 2026-02-08 05:41:41.239724 | orchestrator | TASK [rabbitmq : Enable all stable feature flags] ****************************** 2026-02-08 05:41:41.239743 | orchestrator | Sunday 08 February 2026 05:41:35 +0000 (0:00:01.814) 0:01:03.520 ******* 2026-02-08 05:41:41.239763 | orchestrator | ok: [testbed-node-1] 2026-02-08 05:41:41.239782 | orchestrator | ok: [testbed-node-2] 
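The "Enable all stable feature flags" task above reports `ok` on all three nodes; it most likely wraps `rabbitmqctl enable_feature_flag all` inside the container. A minimal sketch (the tab-separated listing format is an assumption) of verifying the result by parsing `rabbitmqctl -q list_feature_flags name state` output:

```python
def disabled_flags(listing: str) -> list[str]:
    """Return names of feature flags that are not in the 'enabled' state.

    `listing` is assumed to be tab-separated output of
    `rabbitmqctl -q list_feature_flags name state`, one flag per line.
    """
    disabled = []
    for line in listing.strip().splitlines():
        name, state = line.split("\t")
        if state != "enabled":
            disabled.append(name)
    return disabled

# Hypothetical sample output for illustration only.
sample = "classic_mirrored_queue_version\tenabled\nkhepri_db\tdisabled\n"
print(disabled_flags(sample))  # ['khepri_db']
```

An empty result from such a check is what makes the rolling restart below safe to run node by node.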
2026-02-08 05:41:41.239800 | orchestrator | ok: [testbed-node-0] 2026-02-08 05:41:41.239818 | orchestrator | 2026-02-08 05:41:41.239836 | orchestrator | TASK [service-check-containers : rabbitmq | Check containers] ****************** 2026-02-08 05:41:41.239856 | orchestrator | Sunday 08 February 2026 05:41:39 +0000 (0:00:03.929) 0:01:07.449 ******* 2026-02-08 05:41:41.239888 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-08 05:41:41.239924 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 
'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-08 05:43:28.794408 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-08 05:43:28.794554 | orchestrator | 2026-02-08 05:43:28.794583 | orchestrator | TASK [service-check-containers : rabbitmq | Notify handlers to restart containers] *** 2026-02-08 05:43:28.794603 | orchestrator | Sunday 08 February 2026 05:41:41 +0000 (0:00:02.198) 0:01:09.647 ******* 
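The healthcheck dict carried in each item above (`interval: 30`, `retries: 3`, `start_period: 5`, `timeout: 30`) maps onto Docker's container healthcheck settings. As a rough upper bound under a simplified model (one failing probe of up to `timeout` seconds per retry, probes spaced `interval` apart, after the `start_period` grace window), the time before the container is flagged unhealthy can be estimated like this:

```python
def worst_case_unhealthy_s(hc: dict) -> int:
    """Rough upper bound, in seconds, before a container is marked
    unhealthy: grace period plus `retries` failing probes, each taking
    up to `timeout` seconds and spaced `interval` seconds apart.
    Simplified model; Docker's exact accounting differs slightly.
    """
    return int(hc["start_period"]) + int(hc["retries"]) * (
        int(hc["interval"]) + int(hc["timeout"]))

# Values taken from the rabbitmq healthcheck dict in the log above.
hc = {"interval": "30", "retries": "3", "start_period": "5", "timeout": "30"}
print(worst_case_unhealthy_s(hc))  # 185
```

That bound (~3 minutes here) is worth keeping in mind when reading the "Waiting for rabbitmq to start" task durations later in this play.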
2026-02-08 05:43:28.794622 | orchestrator | changed: [testbed-node-0] => { 2026-02-08 05:43:28.794639 | orchestrator |  "msg": "Notifying handlers" 2026-02-08 05:43:28.794655 | orchestrator | } 2026-02-08 05:43:28.794671 | orchestrator | changed: [testbed-node-1] => { 2026-02-08 05:43:28.794687 | orchestrator |  "msg": "Notifying handlers" 2026-02-08 05:43:28.794703 | orchestrator | } 2026-02-08 05:43:28.794718 | orchestrator | changed: [testbed-node-2] => { 2026-02-08 05:43:28.794735 | orchestrator |  "msg": "Notifying handlers" 2026-02-08 05:43:28.794750 | orchestrator | } 2026-02-08 05:43:28.794766 | orchestrator | 2026-02-08 05:43:28.794783 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-02-08 05:43:28.794799 | orchestrator | Sunday 08 February 2026 05:41:42 +0000 (0:00:01.476) 0:01:11.124 ******* 2026-02-08 05:43:28.794839 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-02-08 
05:43:28.794888 | orchestrator | skipping: [testbed-node-0] 2026-02-08 05:43:28.794908 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-02-08 05:43:28.794924 | orchestrator | skipping: [testbed-node-1] 2026-02-08 05:43:28.794970 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-02-08 05:43:28.795025 | orchestrator | skipping: [testbed-node-2] 2026-02-08 05:43:28.795044 | orchestrator | 2026-02-08 05:43:28.795061 | orchestrator | RUNNING HANDLER [rabbitmq : Restart rabbitmq container] ************************ 2026-02-08 05:43:28.795078 | orchestrator | Sunday 08 February 2026 05:41:44 +0000 (0:00:02.060) 0:01:13.185 ******* 2026-02-08 05:43:28.795094 | orchestrator | changed: [testbed-node-0] 2026-02-08 05:43:28.795109 | orchestrator | changed: [testbed-node-1] 2026-02-08 05:43:28.795125 | orchestrator | changed: [testbed-node-2] 2026-02-08 05:43:28.795141 | orchestrator | 2026-02-08 05:43:28.795157 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2026-02-08 05:43:28.795175 | orchestrator | 2026-02-08 05:43:28.795191 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2026-02-08 05:43:28.795208 | orchestrator | Sunday 08 February 2026 05:41:46 +0000 (0:00:02.068) 0:01:15.253 ******* 2026-02-08 05:43:28.795225 | orchestrator | ok: [testbed-node-0] 2026-02-08 05:43:28.795242 | orchestrator | 2026-02-08 05:43:28.795259 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2026-02-08 05:43:28.795277 | orchestrator | Sunday 08 February 2026 05:41:49 +0000 (0:00:02.240) 0:01:17.494 ******* 2026-02-08 05:43:28.795292 | orchestrator | changed: [testbed-node-0] 2026-02-08 05:43:28.795308 | orchestrator | 2026-02-08 05:43:28.795323 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2026-02-08 
05:43:28.795339 | orchestrator | Sunday 08 February 2026 05:41:58 +0000 (0:00:09.768) 0:01:27.262 ******* 2026-02-08 05:43:28.795373 | orchestrator | changed: [testbed-node-0] 2026-02-08 05:43:28.795390 | orchestrator | 2026-02-08 05:43:28.795406 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2026-02-08 05:43:28.795422 | orchestrator | Sunday 08 February 2026 05:42:07 +0000 (0:00:09.143) 0:01:36.406 ******* 2026-02-08 05:43:28.795438 | orchestrator | changed: [testbed-node-0] 2026-02-08 05:43:28.795453 | orchestrator | 2026-02-08 05:43:28.795469 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2026-02-08 05:43:28.795485 | orchestrator | 2026-02-08 05:43:28.795501 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2026-02-08 05:43:28.795517 | orchestrator | Sunday 08 February 2026 05:42:18 +0000 (0:00:10.562) 0:01:46.968 ******* 2026-02-08 05:43:28.795533 | orchestrator | ok: [testbed-node-1] 2026-02-08 05:43:28.795548 | orchestrator | 2026-02-08 05:43:28.795564 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2026-02-08 05:43:28.795581 | orchestrator | Sunday 08 February 2026 05:42:20 +0000 (0:00:01.653) 0:01:48.622 ******* 2026-02-08 05:43:28.795597 | orchestrator | changed: [testbed-node-1] 2026-02-08 05:43:28.795613 | orchestrator | 2026-02-08 05:43:28.795626 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2026-02-08 05:43:28.795637 | orchestrator | Sunday 08 February 2026 05:42:29 +0000 (0:00:08.852) 0:01:57.475 ******* 2026-02-08 05:43:28.795646 | orchestrator | changed: [testbed-node-1] 2026-02-08 05:43:28.795656 | orchestrator | 2026-02-08 05:43:28.795666 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2026-02-08 05:43:28.795675 | orchestrator | Sunday 08 February 
2026 05:42:42 +0000 (0:00:13.789) 0:02:11.264 ******* 2026-02-08 05:43:28.795685 | orchestrator | changed: [testbed-node-1] 2026-02-08 05:43:28.795695 | orchestrator | 2026-02-08 05:43:28.795705 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2026-02-08 05:43:28.795714 | orchestrator | 2026-02-08 05:43:28.795724 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2026-02-08 05:43:28.795734 | orchestrator | Sunday 08 February 2026 05:42:52 +0000 (0:00:09.532) 0:02:20.796 ******* 2026-02-08 05:43:28.795743 | orchestrator | ok: [testbed-node-2] 2026-02-08 05:43:28.795753 | orchestrator | 2026-02-08 05:43:28.795762 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2026-02-08 05:43:28.795772 | orchestrator | Sunday 08 February 2026 05:42:54 +0000 (0:00:01.695) 0:02:22.491 ******* 2026-02-08 05:43:28.795782 | orchestrator | changed: [testbed-node-2] 2026-02-08 05:43:28.795791 | orchestrator | 2026-02-08 05:43:28.795801 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2026-02-08 05:43:28.795811 | orchestrator | Sunday 08 February 2026 05:43:03 +0000 (0:00:09.624) 0:02:32.116 ******* 2026-02-08 05:43:28.795821 | orchestrator | changed: [testbed-node-2] 2026-02-08 05:43:28.795830 | orchestrator | 2026-02-08 05:43:28.795840 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2026-02-08 05:43:28.795849 | orchestrator | Sunday 08 February 2026 05:43:17 +0000 (0:00:14.247) 0:02:46.363 ******* 2026-02-08 05:43:28.795859 | orchestrator | changed: [testbed-node-2] 2026-02-08 05:43:28.795868 | orchestrator | 2026-02-08 05:43:28.795878 | orchestrator | PLAY [Apply rabbitmq post-configuration] *************************************** 2026-02-08 05:43:28.795888 | orchestrator | 2026-02-08 05:43:28.795897 | orchestrator | TASK [Include rabbitmq 
post-deploy.yml] **************************************** 2026-02-08 05:43:28.795920 | orchestrator | Sunday 08 February 2026 05:43:28 +0000 (0:00:10.836) 0:02:57.199 ******* 2026-02-08 05:43:35.647626 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-08 05:43:35.647747 | orchestrator | 2026-02-08 05:43:35.647783 | orchestrator | TASK [rabbitmq : Enable all stable feature flags] ****************************** 2026-02-08 05:43:35.647803 | orchestrator | Sunday 08 February 2026 05:43:30 +0000 (0:00:01.563) 0:02:58.763 ******* 2026-02-08 05:43:35.647822 | orchestrator | ok: [testbed-node-1] 2026-02-08 05:43:35.647841 | orchestrator | ok: [testbed-node-2] 2026-02-08 05:43:35.647891 | orchestrator | ok: [testbed-node-0] 2026-02-08 05:43:35.647912 | orchestrator | 2026-02-08 05:43:35.647931 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-08 05:43:35.647950 | orchestrator | testbed-node-0 : ok=31  changed=11  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-02-08 05:43:35.647970 | orchestrator | testbed-node-1 : ok=24  changed=10  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-02-08 05:43:35.647989 | orchestrator | testbed-node-2 : ok=24  changed=10  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-02-08 05:43:35.648085 | orchestrator | 2026-02-08 05:43:35.648097 | orchestrator | 2026-02-08 05:43:35.648108 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-08 05:43:35.648119 | orchestrator | Sunday 08 February 2026 05:43:35 +0000 (0:00:04.859) 0:03:03.622 ******* 2026-02-08 05:43:35.648138 | orchestrator | =============================================================================== 2026-02-08 05:43:35.648157 | orchestrator | rabbitmq : Restart rabbitmq container ---------------------------------- 37.18s 2026-02-08 05:43:35.648176 | orchestrator | rabbitmq : Waiting for rabbitmq to 
start ------------------------------- 30.93s 2026-02-08 05:43:35.648194 | orchestrator | rabbitmq : Put RabbitMQ node into maintenance mode --------------------- 28.25s 2026-02-08 05:43:35.648212 | orchestrator | rabbitmq : Get new RabbitMQ version ------------------------------------ 10.34s 2026-02-08 05:43:35.648230 | orchestrator | rabbitmq : Get info on RabbitMQ container ------------------------------- 5.59s 2026-02-08 05:43:35.648249 | orchestrator | rabbitmq : Enable all stable feature flags ------------------------------ 4.86s 2026-02-08 05:43:35.648404 | orchestrator | rabbitmq : Enable all stable feature flags ------------------------------ 3.93s 2026-02-08 05:43:35.648444 | orchestrator | rabbitmq : Get current RabbitMQ version --------------------------------- 3.33s 2026-02-08 05:43:35.648463 | orchestrator | rabbitmq : List RabbitMQ policies --------------------------------------- 3.32s 2026-02-08 05:43:35.648482 | orchestrator | rabbitmq : Copying over rabbitmq.conf ----------------------------------- 3.01s 2026-02-08 05:43:35.648502 | orchestrator | rabbitmq : include_tasks ------------------------------------------------ 2.73s 2026-02-08 05:43:35.648520 | orchestrator | rabbitmq : Copying over config.json files for services ------------------ 2.62s 2026-02-08 05:43:35.648538 | orchestrator | rabbitmq : Copying over enabled_plugins --------------------------------- 2.62s 2026-02-08 05:43:35.648648 | orchestrator | rabbitmq : Copying over advanced.config --------------------------------- 2.41s 2026-02-08 05:43:35.648673 | orchestrator | rabbitmq : Copying over rabbitmq-env.conf ------------------------------- 2.41s 2026-02-08 05:43:35.648687 | orchestrator | rabbitmq : Copying over definitions.json -------------------------------- 2.37s 2026-02-08 05:43:35.648707 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 2.34s 2026-02-08 05:43:35.648726 | orchestrator | service-cert-copy : rabbitmq | Copying over extra 
CA certificates ------- 2.33s 2026-02-08 05:43:35.648744 | orchestrator | rabbitmq : Copying over erl_inetrc -------------------------------------- 2.28s 2026-02-08 05:43:35.648763 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 2.24s 2026-02-08 05:43:35.973954 | orchestrator | + osism apply -a upgrade openvswitch 2026-02-08 05:43:38.067670 | orchestrator | 2026-02-08 05:43:38 | INFO  | Task 4736e294-9e74-4e0b-a890-7987284c2db8 (openvswitch) was prepared for execution. 2026-02-08 05:43:38.067764 | orchestrator | 2026-02-08 05:43:38 | INFO  | It takes a moment until task 4736e294-9e74-4e0b-a890-7987284c2db8 (openvswitch) has been started and output is visible here. 2026-02-08 05:44:04.949576 | orchestrator | 2026-02-08 05:44:04.949687 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-08 05:44:04.949703 | orchestrator | 2026-02-08 05:44:04.949714 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-08 05:44:04.949725 | orchestrator | Sunday 08 February 2026 05:43:43 +0000 (0:00:01.405) 0:00:01.405 ******* 2026-02-08 05:44:04.949760 | orchestrator | ok: [testbed-node-0] 2026-02-08 05:44:04.949771 | orchestrator | ok: [testbed-node-1] 2026-02-08 05:44:04.949781 | orchestrator | ok: [testbed-node-2] 2026-02-08 05:44:04.949790 | orchestrator | ok: [testbed-node-3] 2026-02-08 05:44:04.949800 | orchestrator | ok: [testbed-node-4] 2026-02-08 05:44:04.949809 | orchestrator | ok: [testbed-node-5] 2026-02-08 05:44:04.949819 | orchestrator | 2026-02-08 05:44:04.949829 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-08 05:44:04.949839 | orchestrator | Sunday 08 February 2026 05:43:46 +0000 (0:00:03.076) 0:00:04.482 ******* 2026-02-08 05:44:04.949849 | orchestrator | ok: [testbed-node-0] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-02-08 05:44:04.949859 | 
orchestrator | ok: [testbed-node-1] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-02-08 05:44:04.949869 | orchestrator | ok: [testbed-node-2] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-02-08 05:44:04.949878 | orchestrator | ok: [testbed-node-3] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-02-08 05:44:04.949888 | orchestrator | ok: [testbed-node-4] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-02-08 05:44:04.949898 | orchestrator | ok: [testbed-node-5] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-02-08 05:44:04.949907 | orchestrator | 2026-02-08 05:44:04.949917 | orchestrator | PLAY [Apply role openvswitch] ************************************************** 2026-02-08 05:44:04.949927 | orchestrator | 2026-02-08 05:44:04.949937 | orchestrator | TASK [openvswitch : include_tasks] ********************************************* 2026-02-08 05:44:04.949946 | orchestrator | Sunday 08 February 2026 05:43:49 +0000 (0:00:02.225) 0:00:06.707 ******* 2026-02-08 05:44:04.949957 | orchestrator | included: /ansible/roles/openvswitch/tasks/upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-02-08 05:44:04.949968 | orchestrator | 2026-02-08 05:44:04.949978 | orchestrator | TASK [module-load : Load modules] ********************************************** 2026-02-08 05:44:04.949988 | orchestrator | Sunday 08 February 2026 05:43:51 +0000 (0:00:02.551) 0:00:09.258 ******* 2026-02-08 05:44:04.949998 | orchestrator | ok: [testbed-node-1] => (item=openvswitch) 2026-02-08 05:44:04.950113 | orchestrator | ok: [testbed-node-0] => (item=openvswitch) 2026-02-08 05:44:04.950128 | orchestrator | ok: [testbed-node-2] => (item=openvswitch) 2026-02-08 05:44:04.950140 | orchestrator | ok: [testbed-node-3] => (item=openvswitch) 2026-02-08 05:44:04.950161 | orchestrator | ok: [testbed-node-4] => (item=openvswitch) 2026-02-08 05:44:04.950172 | orchestrator | ok: 
[testbed-node-5] => (item=openvswitch) 2026-02-08 05:44:04.950184 | orchestrator | 2026-02-08 05:44:04.950196 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2026-02-08 05:44:04.950207 | orchestrator | Sunday 08 February 2026 05:43:54 +0000 (0:00:02.979) 0:00:12.238 ******* 2026-02-08 05:44:04.950219 | orchestrator | ok: [testbed-node-1] => (item=openvswitch) 2026-02-08 05:44:04.950230 | orchestrator | ok: [testbed-node-3] => (item=openvswitch) 2026-02-08 05:44:04.950241 | orchestrator | ok: [testbed-node-0] => (item=openvswitch) 2026-02-08 05:44:04.950252 | orchestrator | ok: [testbed-node-2] => (item=openvswitch) 2026-02-08 05:44:04.950264 | orchestrator | ok: [testbed-node-4] => (item=openvswitch) 2026-02-08 05:44:04.950275 | orchestrator | ok: [testbed-node-5] => (item=openvswitch) 2026-02-08 05:44:04.950287 | orchestrator | 2026-02-08 05:44:04.950298 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2026-02-08 05:44:04.950309 | orchestrator | Sunday 08 February 2026 05:43:57 +0000 (0:00:02.706) 0:00:14.945 ******* 2026-02-08 05:44:04.950321 | orchestrator | skipping: [testbed-node-0] => (item=openvswitch)  2026-02-08 05:44:04.950333 | orchestrator | skipping: [testbed-node-0] 2026-02-08 05:44:04.950345 | orchestrator | skipping: [testbed-node-1] => (item=openvswitch)  2026-02-08 05:44:04.950356 | orchestrator | skipping: [testbed-node-1] 2026-02-08 05:44:04.950368 | orchestrator | skipping: [testbed-node-2] => (item=openvswitch)  2026-02-08 05:44:04.950389 | orchestrator | skipping: [testbed-node-2] 2026-02-08 05:44:04.950401 | orchestrator | skipping: [testbed-node-3] => (item=openvswitch)  2026-02-08 05:44:04.950412 | orchestrator | skipping: [testbed-node-3] 2026-02-08 05:44:04.950423 | orchestrator | skipping: [testbed-node-4] => (item=openvswitch)  2026-02-08 05:44:04.950434 | orchestrator | skipping: [testbed-node-4] 2026-02-08 05:44:04.950459 | orchestrator 
| skipping: [testbed-node-5] => (item=openvswitch)  2026-02-08 05:44:04.950471 | orchestrator | skipping: [testbed-node-5] 2026-02-08 05:44:04.950484 | orchestrator | 2026-02-08 05:44:04.950494 | orchestrator | TASK [openvswitch : Create /run/openvswitch directory on host] ***************** 2026-02-08 05:44:04.950504 | orchestrator | Sunday 08 February 2026 05:44:00 +0000 (0:00:02.658) 0:00:17.603 ******* 2026-02-08 05:44:04.950514 | orchestrator | skipping: [testbed-node-0] 2026-02-08 05:44:04.950523 | orchestrator | skipping: [testbed-node-1] 2026-02-08 05:44:04.950533 | orchestrator | skipping: [testbed-node-2] 2026-02-08 05:44:04.950543 | orchestrator | skipping: [testbed-node-3] 2026-02-08 05:44:04.950552 | orchestrator | skipping: [testbed-node-4] 2026-02-08 05:44:04.950562 | orchestrator | skipping: [testbed-node-5] 2026-02-08 05:44:04.950571 | orchestrator | 2026-02-08 05:44:04.950581 | orchestrator | TASK [openvswitch : Ensuring config directories exist] ************************* 2026-02-08 05:44:04.950591 | orchestrator | Sunday 08 February 2026 05:44:02 +0000 (0:00:02.106) 0:00:19.710 ******* 2026-02-08 05:44:04.950622 | orchestrator | ok: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-08 05:44:04.950638 | orchestrator | ok: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': 
{'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-08 05:44:04.950649 | orchestrator | ok: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-08 05:44:04.950659 | orchestrator | ok: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-08 05:44:04.950682 | orchestrator | ok: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-08 05:44:04.950693 | orchestrator | ok: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-08 05:44:04.950711 | orchestrator | ok: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': 
['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-08 05:44:07.224651 | orchestrator | ok: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-08 05:44:07.224756 | orchestrator | ok: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-08 05:44:07.224797 | orchestrator | ok: [testbed-node-5] => 
(item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-08 05:44:07.224826 | orchestrator | ok: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-08 05:44:07.224838 | orchestrator | ok: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-08 05:44:07.224850 | orchestrator | 2026-02-08 05:44:07.224868 | orchestrator | TASK [openvswitch : Copying over config.json files for services] *************** 2026-02-08 05:44:07.224891 | orchestrator | Sunday 08 February 2026 05:44:04 +0000 (0:00:02.765) 0:00:22.476 ******* 2026-02-08 05:44:07.224923 | orchestrator | ok: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-08 05:44:07.224936 | orchestrator | ok: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 
2026-02-08 05:44:07.224948 | orchestrator | ok: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-08 05:44:07.224984 | orchestrator | ok: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-08 05:44:07.224997 | orchestrator | ok: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-08 05:44:07.225054 | orchestrator | ok: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-08 05:44:07.225077 | orchestrator | ok: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-08 05:44:13.138922 | orchestrator | ok: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 
'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-08 05:44:13.139103 | orchestrator | ok: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-08 05:44:13.139134 | orchestrator | ok: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-08 05:44:13.139143 | orchestrator | ok: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-08 05:44:13.139153 | orchestrator | ok: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-08 05:44:13.139168 | orchestrator | 2026-02-08 05:44:13.139185 | orchestrator | TASK [openvswitch : Copying over ovs-vsctl wrapper] **************************** 2026-02-08 05:44:13.139199 | orchestrator | Sunday 08 February 2026 05:44:08 +0000 (0:00:03.632) 0:00:26.109 ******* 2026-02-08 05:44:13.139213 | orchestrator | skipping: [testbed-node-0] 2026-02-08 05:44:13.139228 | orchestrator | skipping: [testbed-node-1] 2026-02-08 
05:44:13.139242 | orchestrator | skipping: [testbed-node-2] 2026-02-08 05:44:13.139254 | orchestrator | skipping: [testbed-node-3] 2026-02-08 05:44:13.139269 | orchestrator | skipping: [testbed-node-4] 2026-02-08 05:44:13.139282 | orchestrator | skipping: [testbed-node-5] 2026-02-08 05:44:13.139294 | orchestrator | 2026-02-08 05:44:13.139308 | orchestrator | TASK [service-check-containers : openvswitch | Check containers] *************** 2026-02-08 05:44:13.139353 | orchestrator | Sunday 08 February 2026 05:44:11 +0000 (0:00:02.625) 0:00:28.734 ******* 2026-02-08 05:44:13.139364 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-08 05:44:13.139374 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-08 05:44:13.139389 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-08 05:44:13.139398 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-08 05:44:13.139406 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': 
['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-08 05:44:13.139422 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-08 05:44:17.201777 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-08 05:44:17.201880 | 
orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-08 05:44:17.201914 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-08 05:44:17.201928 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-08 05:44:17.201940 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-08 05:44:17.202009 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-08 05:44:17.202200 | orchestrator | 2026-02-08 05:44:17.202224 | orchestrator | TASK [service-check-containers : openvswitch | Notify handlers to restart containers] *** 2026-02-08 05:44:17.202246 | orchestrator | Sunday 08 February 2026 05:44:14 +0000 (0:00:03.404) 
0:00:32.139 ******* 2026-02-08 05:44:17.202267 | orchestrator | changed: [testbed-node-0] => { 2026-02-08 05:44:17.202281 | orchestrator |  "msg": "Notifying handlers" 2026-02-08 05:44:17.202292 | orchestrator | } 2026-02-08 05:44:17.202303 | orchestrator | changed: [testbed-node-1] => { 2026-02-08 05:44:17.202315 | orchestrator |  "msg": "Notifying handlers" 2026-02-08 05:44:17.202326 | orchestrator | } 2026-02-08 05:44:17.202336 | orchestrator | changed: [testbed-node-2] => { 2026-02-08 05:44:17.202347 | orchestrator |  "msg": "Notifying handlers" 2026-02-08 05:44:17.202358 | orchestrator | } 2026-02-08 05:44:17.202375 | orchestrator | changed: [testbed-node-3] => { 2026-02-08 05:44:17.202393 | orchestrator |  "msg": "Notifying handlers" 2026-02-08 05:44:17.202411 | orchestrator | } 2026-02-08 05:44:17.202431 | orchestrator | changed: [testbed-node-4] => { 2026-02-08 05:44:17.202444 | orchestrator |  "msg": "Notifying handlers" 2026-02-08 05:44:17.202455 | orchestrator | } 2026-02-08 05:44:17.202466 | orchestrator | changed: [testbed-node-5] => { 2026-02-08 05:44:17.202477 | orchestrator |  "msg": "Notifying handlers" 2026-02-08 05:44:17.202487 | orchestrator | } 2026-02-08 05:44:17.202498 | orchestrator | 2026-02-08 05:44:17.202509 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-02-08 05:44:17.202520 | orchestrator | Sunday 08 February 2026 05:44:16 +0000 (0:00:02.086) 0:00:34.226 ******* 2026-02-08 05:44:17.202541 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})  2026-02-08 05:44:17.202555 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})  2026-02-08 05:44:17.202578 | orchestrator | skipping: [testbed-node-0] 2026-02-08 05:44:17.202590 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})  2026-02-08 05:44:17.202602 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': 
{'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})  2026-02-08 05:44:17.202625 | orchestrator | skipping: [testbed-node-1] 2026-02-08 05:44:48.345665 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})  2026-02-08 05:44:48.345787 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})  2026-02-08 05:44:48.345806 | orchestrator | skipping: [testbed-node-2] 2026-02-08 05:44:48.345838 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})  2026-02-08 05:44:48.345852 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})  2026-02-08 05:44:48.345887 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 
'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})  2026-02-08 05:44:48.345920 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})  2026-02-08 05:44:48.345932 | orchestrator | skipping: [testbed-node-3] 2026-02-08 05:44:48.345944 | orchestrator | skipping: [testbed-node-4] 2026-02-08 05:44:48.345956 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})  2026-02-08 05:44:48.345974 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})  2026-02-08 05:44:48.345986 | orchestrator | skipping: [testbed-node-5] 2026-02-08 05:44:48.345997 | orchestrator | 2026-02-08 05:44:48.346010 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-02-08 05:44:48.346115 | orchestrator | Sunday 08 February 2026 05:44:19 +0000 (0:00:02.767) 0:00:36.994 ******* 2026-02-08 05:44:48.346138 | orchestrator | 2026-02-08 05:44:48.346150 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-02-08 05:44:48.346161 | orchestrator | Sunday 08 February 2026 05:44:19 +0000 (0:00:00.524) 0:00:37.518 ******* 2026-02-08 05:44:48.346172 | orchestrator | 2026-02-08 05:44:48.346182 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-02-08 05:44:48.346195 | orchestrator | Sunday 08 February 2026 05:44:20 +0000 (0:00:00.524) 0:00:38.043 ******* 2026-02-08 05:44:48.346208 | orchestrator | 
2026-02-08 05:44:48.346221 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2026-02-08 05:44:48.346233 | orchestrator | Sunday 08 February 2026 05:44:21 +0000 (0:00:00.507) 0:00:38.551 *******
2026-02-08 05:44:48.346246 | orchestrator |
2026-02-08 05:44:48.346257 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2026-02-08 05:44:48.346268 | orchestrator | Sunday 08 February 2026 05:44:21 +0000 (0:00:00.753) 0:00:39.304 *******
2026-02-08 05:44:48.346278 | orchestrator |
2026-02-08 05:44:48.346289 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2026-02-08 05:44:48.346300 | orchestrator | Sunday 08 February 2026 05:44:22 +0000 (0:00:00.523) 0:00:39.828 *******
2026-02-08 05:44:48.346311 | orchestrator |
2026-02-08 05:44:48.346322 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-db-server container] ********
2026-02-08 05:44:48.346333 | orchestrator | Sunday 08 February 2026 05:44:23 +0000 (0:00:00.860) 0:00:40.688 *******
2026-02-08 05:44:48.346344 | orchestrator | changed: [testbed-node-3]
2026-02-08 05:44:48.346355 | orchestrator | changed: [testbed-node-4]
2026-02-08 05:44:48.346367 | orchestrator | changed: [testbed-node-5]
2026-02-08 05:44:48.346377 | orchestrator | changed: [testbed-node-1]
2026-02-08 05:44:48.346388 | orchestrator | changed: [testbed-node-2]
2026-02-08 05:44:48.346399 | orchestrator | changed: [testbed-node-0]
2026-02-08 05:44:48.346409 | orchestrator |
2026-02-08 05:44:48.346421 | orchestrator | RUNNING HANDLER [openvswitch : Waiting for openvswitch_db service to be ready] ***
2026-02-08 05:44:48.346432 | orchestrator | Sunday 08 February 2026 05:44:34 +0000 (0:00:11.818) 0:00:52.507 *******
2026-02-08 05:44:48.346443 | orchestrator | ok: [testbed-node-1]
2026-02-08 05:44:48.346455 | orchestrator | ok: [testbed-node-0]
2026-02-08 05:44:48.346466 | orchestrator | ok: [testbed-node-2]
2026-02-08 05:44:48.346476 | orchestrator | ok: [testbed-node-3]
2026-02-08 05:44:48.346487 | orchestrator | ok: [testbed-node-4]
2026-02-08 05:44:48.346498 | orchestrator | ok: [testbed-node-5]
2026-02-08 05:44:48.346509 | orchestrator |
2026-02-08 05:44:48.346519 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] *********
2026-02-08 05:44:48.346530 | orchestrator | Sunday 08 February 2026 05:44:37 +0000 (0:00:02.188) 0:00:54.695 *******
2026-02-08 05:44:48.346541 | orchestrator | changed: [testbed-node-4]
2026-02-08 05:44:48.346552 | orchestrator | changed: [testbed-node-3]
2026-02-08 05:44:48.346563 | orchestrator | changed: [testbed-node-5]
2026-02-08 05:44:48.346573 | orchestrator | changed: [testbed-node-1]
2026-02-08 05:44:48.346584 | orchestrator | changed: [testbed-node-2]
2026-02-08 05:44:48.346595 | orchestrator | changed: [testbed-node-0]
2026-02-08 05:44:48.346605 | orchestrator |
2026-02-08 05:44:48.346616 | orchestrator | TASK [openvswitch : Set system-id, hostname and hw-offload] ********************
2026-02-08 05:44:48.346635 | orchestrator | Sunday 08 February 2026 05:44:48 +0000 (0:00:11.175) 0:01:05.871 *******
2026-02-08 05:45:03.944016 | orchestrator | ok: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-2'})
2026-02-08 05:45:03.944198 | orchestrator | ok: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-0'})
2026-02-08 05:45:03.944221 | orchestrator | ok: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-1'})
2026-02-08 05:45:03.944243 | orchestrator | ok: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-3'})
2026-02-08 05:45:03.944256 | orchestrator | ok: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-4'})
2026-02-08 05:45:03.944294 | orchestrator | ok:
[testbed-node-5] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-5'})
2026-02-08 05:45:03.944306 | orchestrator | ok: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-3'})
2026-02-08 05:45:03.944325 | orchestrator | ok: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-2'})
2026-02-08 05:45:03.944342 | orchestrator | ok: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-1'})
2026-02-08 05:45:03.944358 | orchestrator | ok: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-0'})
2026-02-08 05:45:03.944374 | orchestrator | ok: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-4'})
2026-02-08 05:45:03.944391 | orchestrator | ok: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-5'})
2026-02-08 05:45:03.944406 | orchestrator | ok: [testbed-node-3] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2026-02-08 05:45:03.944440 | orchestrator | ok: [testbed-node-2] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2026-02-08 05:45:03.944457 | orchestrator | ok: [testbed-node-1] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2026-02-08 05:45:03.944472 | orchestrator | ok: [testbed-node-4] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2026-02-08 05:45:03.944488 | orchestrator | ok: [testbed-node-5] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2026-02-08 05:45:03.944505 | orchestrator | ok: [testbed-node-0] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2026-02-08 05:45:03.944521 | orchestrator |
2026-02-08 05:45:03.944539 | orchestrator | TASK [openvswitch : Ensuring OVS bridge is properly setup] *********************
2026-02-08 05:45:03.944558 | orchestrator | Sunday 08 February 2026 05:44:55 +0000 (0:00:07.463) 0:01:13.334 *******
2026-02-08 05:45:03.944576 | orchestrator | skipping: [testbed-node-3] => (item=br-ex)
2026-02-08 05:45:03.944596 | orchestrator | skipping: [testbed-node-3]
2026-02-08 05:45:03.944618 | orchestrator | skipping: [testbed-node-4] => (item=br-ex)
2026-02-08 05:45:03.944633 | orchestrator | skipping: [testbed-node-4]
2026-02-08 05:45:03.944645 | orchestrator | skipping: [testbed-node-5] => (item=br-ex)
2026-02-08 05:45:03.944658 | orchestrator | skipping: [testbed-node-5]
2026-02-08 05:45:03.944670 | orchestrator | ok: [testbed-node-1] => (item=br-ex)
2026-02-08 05:45:03.944681 | orchestrator | ok: [testbed-node-2] => (item=br-ex)
2026-02-08 05:45:03.944692 | orchestrator | ok: [testbed-node-0] => (item=br-ex)
2026-02-08 05:45:03.944704 | orchestrator |
2026-02-08 05:45:03.944716 | orchestrator | TASK [openvswitch : Ensuring OVS ports are properly setup] *********************
2026-02-08 05:45:03.944728 | orchestrator | Sunday 08 February 2026 05:44:59 +0000 (0:00:03.265) 0:01:16.600 *******
2026-02-08 05:45:03.944740 | orchestrator | skipping: [testbed-node-3] => (item=['br-ex', 'vxlan0'])
2026-02-08 05:45:03.944751 | orchestrator | skipping: [testbed-node-3]
2026-02-08 05:45:03.944763 | orchestrator | skipping: [testbed-node-4] => (item=['br-ex', 'vxlan0'])
2026-02-08 05:45:03.944774 | orchestrator | skipping: [testbed-node-4]
2026-02-08 05:45:03.944786 | orchestrator | skipping: [testbed-node-5] => (item=['br-ex', 'vxlan0'])
2026-02-08 05:45:03.944797 | orchestrator | skipping: [testbed-node-5]
2026-02-08 05:45:03.944808 | orchestrator | ok: [testbed-node-0] => (item=['br-ex', 'vxlan0'])
2026-02-08 05:45:03.944820 | orchestrator | ok: [testbed-node-1] => (item=['br-ex', 'vxlan0'])
2026-02-08 05:45:03.944833 | orchestrator | ok: [testbed-node-2] => (item=['br-ex',
'vxlan0'])
2026-02-08 05:45:03.944850 | orchestrator |
2026-02-08 05:45:03.944867 | orchestrator | PLAY RECAP *********************************************************************
2026-02-08 05:45:03.944883 | orchestrator | testbed-node-0 : ok=15  changed=4  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-02-08 05:45:03.944917 | orchestrator | testbed-node-1 : ok=15  changed=4  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-02-08 05:45:03.944936 | orchestrator | testbed-node-2 : ok=15  changed=4  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-02-08 05:45:03.944953 | orchestrator | testbed-node-3 : ok=13  changed=4  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-02-08 05:45:03.944993 | orchestrator | testbed-node-4 : ok=13  changed=4  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-02-08 05:45:03.945011 | orchestrator | testbed-node-5 : ok=13  changed=4  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-02-08 05:45:03.945027 | orchestrator |
2026-02-08 05:45:03.945042 | orchestrator |
2026-02-08 05:45:03.945060 | orchestrator | TASKS RECAP ********************************************************************
2026-02-08 05:45:03.945107 | orchestrator | Sunday 08 February 2026 05:45:03 +0000 (0:00:04.338) 0:01:20.938 *******
2026-02-08 05:45:03.945125 | orchestrator | ===============================================================================
2026-02-08 05:45:03.945142 | orchestrator | openvswitch : Restart openvswitch-db-server container ------------------ 11.82s
2026-02-08 05:45:03.945152 | orchestrator | openvswitch : Restart openvswitch-vswitchd container ------------------- 11.18s
2026-02-08 05:45:03.945162 | orchestrator | openvswitch : Set system-id, hostname and hw-offload -------------------- 7.46s
2026-02-08 05:45:03.945172 | orchestrator | openvswitch : Ensuring OVS ports are properly setup --------------------- 4.34s
2026-02-08 05:45:03.945181 | orchestrator | openvswitch : Flush Handlers -------------------------------------------- 3.69s
2026-02-08 05:45:03.945191 | orchestrator | openvswitch : Copying over config.json files for services --------------- 3.63s
2026-02-08 05:45:03.945200 | orchestrator | service-check-containers : openvswitch | Check containers --------------- 3.40s
2026-02-08 05:45:03.945210 | orchestrator | openvswitch : Ensuring OVS bridge is properly setup --------------------- 3.27s
2026-02-08 05:45:03.945220 | orchestrator | Group hosts based on Kolla action --------------------------------------- 3.08s
2026-02-08 05:45:03.945229 | orchestrator | module-load : Load modules ---------------------------------------------- 2.98s
2026-02-08 05:45:03.945239 | orchestrator | service-check-containers : Include tasks -------------------------------- 2.77s
2026-02-08 05:45:03.945249 | orchestrator | openvswitch : Ensuring config directories exist ------------------------- 2.77s
2026-02-08 05:45:03.945266 | orchestrator | module-load : Persist modules via modules-load.d ------------------------ 2.71s
2026-02-08 05:45:03.945276 | orchestrator | module-load : Drop module persistence ----------------------------------- 2.66s
2026-02-08 05:45:03.945286 | orchestrator | openvswitch : Copying over ovs-vsctl wrapper ---------------------------- 2.63s
2026-02-08 05:45:03.945296 | orchestrator | openvswitch : include_tasks --------------------------------------------- 2.55s
2026-02-08 05:45:03.945305 | orchestrator | Group hosts based on enabled services ----------------------------------- 2.23s
2026-02-08 05:45:03.945315 | orchestrator | openvswitch : Waiting for openvswitch_db service to be ready ------------ 2.19s
2026-02-08 05:45:03.945325 | orchestrator | openvswitch : Create /run/openvswitch directory on host ----------------- 2.11s
2026-02-08 05:45:03.945335 | orchestrator | service-check-containers : openvswitch | Notify handlers to restart containers --- 2.09s
2026-02-08 05:45:04.312520 | orchestrator | + osism apply -a upgrade
ovn
2026-02-08 05:45:06.462871 | orchestrator | 2026-02-08 05:45:06 | INFO  | Task a1a47052-adcf-47b1-9d69-4448cd461821 (ovn) was prepared for execution.
2026-02-08 05:45:06.462969 | orchestrator | 2026-02-08 05:45:06 | INFO  | It takes a moment until task a1a47052-adcf-47b1-9d69-4448cd461821 (ovn) has been started and output is visible here.
2026-02-08 05:45:29.811451 | orchestrator |
2026-02-08 05:45:29.811532 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-02-08 05:45:29.811539 | orchestrator |
2026-02-08 05:45:29.811545 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-02-08 05:45:29.811549 | orchestrator | Sunday 08 February 2026 05:45:12 +0000 (0:00:01.828) 0:00:01.828 *******
2026-02-08 05:45:29.811553 | orchestrator | ok: [testbed-node-0]
2026-02-08 05:45:29.811558 | orchestrator | ok: [testbed-node-1]
2026-02-08 05:45:29.811562 | orchestrator | ok: [testbed-node-2]
2026-02-08 05:45:29.811566 | orchestrator | ok: [testbed-node-3]
2026-02-08 05:45:29.811570 | orchestrator | ok: [testbed-node-4]
2026-02-08 05:45:29.811574 | orchestrator | ok: [testbed-node-5]
2026-02-08 05:45:29.811578 | orchestrator |
2026-02-08 05:45:29.811582 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-02-08 05:45:29.811586 | orchestrator | Sunday 08 February 2026 05:45:16 +0000 (0:00:03.627) 0:00:05.456 *******
2026-02-08 05:45:29.811590 | orchestrator | ok: [testbed-node-0] => (item=enable_ovn_True)
2026-02-08 05:45:29.811595 | orchestrator | ok: [testbed-node-1] => (item=enable_ovn_True)
2026-02-08 05:45:29.811598 | orchestrator | ok: [testbed-node-2] => (item=enable_ovn_True)
2026-02-08 05:45:29.811602 | orchestrator | ok: [testbed-node-3] => (item=enable_ovn_True)
2026-02-08 05:45:29.811606 | orchestrator | ok: [testbed-node-4] => (item=enable_ovn_True)
2026-02-08 05:45:29.811610 | orchestrator | ok: [testbed-node-5] => (item=enable_ovn_True)
2026-02-08 05:45:29.811613 | orchestrator |
2026-02-08 05:45:29.811617 | orchestrator | PLAY [Apply role ovn-controller] ***********************************************
2026-02-08 05:45:29.811621 | orchestrator |
2026-02-08 05:45:29.811625 | orchestrator | TASK [ovn-controller : include_tasks] ******************************************
2026-02-08 05:45:29.811629 | orchestrator | Sunday 08 February 2026 05:45:18 +0000 (0:00:02.399) 0:00:07.856 *******
2026-02-08 05:45:29.811633 | orchestrator | included: /ansible/roles/ovn-controller/tasks/upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-02-08 05:45:29.811638 | orchestrator |
2026-02-08 05:45:29.811642 | orchestrator | TASK [ovn-controller : Ensuring config directories exist] **********************
2026-02-08 05:45:29.811646 | orchestrator | Sunday 08 February 2026 05:45:22 +0000 (0:00:03.587) 0:00:11.444 *******
2026-02-08 05:45:29.811651 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-08 05:45:29.811657 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-08 05:45:29.811661 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-08 05:45:29.811675 | orchestrator | ok: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-08 05:45:29.811695 | orchestrator | ok: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-08 05:45:29.811708 | orchestrator | ok: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-08 05:45:29.811712 | orchestrator |
2026-02-08 05:45:29.811716 | orchestrator | TASK [ovn-controller : Copying over config.json files for
services] ************
2026-02-08 05:45:29.811720 | orchestrator | Sunday 08 February 2026 05:45:24 +0000 (0:00:02.413) 0:00:13.857 *******
2026-02-08 05:45:29.811724 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-08 05:45:29.811728 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-08 05:45:29.811732 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-08 05:45:29.811736 | orchestrator | ok: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-08 05:45:29.811740 | orchestrator | ok: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-08 05:45:29.811744 | orchestrator | ok: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-08 05:45:29.811754 | orchestrator |
2026-02-08 05:45:29.811758 | orchestrator | TASK [ovn-controller : Ensuring systemd override directory exists] *************
2026-02-08 05:45:29.811761 | orchestrator | Sunday 08 February 2026 05:45:27 +0000 (0:00:02.615) 0:00:16.473 *******
2026-02-08 05:45:29.811768 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-08 05:45:29.811772 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-08 05:45:29.811779 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-08 05:45:37.646392 | orchestrator | ok: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-08 05:45:37.646499 | orchestrator | ok: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-08 05:45:37.646516 | orchestrator | ok: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208',
'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-08 05:45:37.646527 | orchestrator |
2026-02-08 05:45:37.646539 | orchestrator | TASK [ovn-controller : Copying over systemd override] **************************
2026-02-08 05:45:37.646549 | orchestrator | Sunday 08 February 2026 05:45:29 +0000 (0:00:02.411) 0:00:18.884 *******
2026-02-08 05:45:37.646558 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-08 05:45:37.646564 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-08 05:45:37.646601 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-08 05:45:37.646611 | orchestrator | ok: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-08 05:45:37.646621 | orchestrator | ok: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-08 05:45:37.646648 | orchestrator | ok: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-08 05:45:37.646657 | orchestrator |
2026-02-08 05:45:37.646667 | orchestrator | TASK [service-check-containers : ovn_controller | Check containers] ************
2026-02-08 05:45:37.646677 | orchestrator | Sunday 08 February 2026 05:45:32 +0000 (0:00:03.076) 0:00:21.960 *******
2026-02-08 05:45:37.646688 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-08 05:45:37.646700 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-08 05:45:37.646710 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-08 05:45:37.646721 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-08 05:45:37.646738 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-08 05:45:37.646751 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-08 05:45:37.646762 | orchestrator |
2026-02-08 05:45:37.646772 | orchestrator | TASK [service-check-containers : ovn_controller | Notify handlers to restart containers] ***
2026-02-08 05:45:37.646783 | orchestrator | Sunday 08 February 2026 05:45:35 +0000 (0:00:02.009) 0:00:24.587 *******
2026-02-08 05:45:37.646793 | orchestrator | changed: [testbed-node-0] => {
2026-02-08 05:45:37.646804 | orchestrator |  "msg": "Notifying handlers"
2026-02-08 05:45:37.646812 | orchestrator | }
2026-02-08 05:45:37.646818 | orchestrator | changed: [testbed-node-1] => {
2026-02-08 05:45:37.646827 | orchestrator |  "msg": "Notifying handlers"
2026-02-08 05:45:37.646836 | orchestrator | }
2026-02-08 05:45:37.646845 | orchestrator | changed: [testbed-node-2] => {
2026-02-08 05:45:37.646855 | orchestrator |  "msg": "Notifying handlers"
2026-02-08 05:45:37.646864 | orchestrator | }
2026-02-08 05:45:37.646874 | orchestrator | changed: [testbed-node-3] => {
2026-02-08 05:45:37.646884 | orchestrator |  "msg": "Notifying handlers"
2026-02-08 05:45:37.646893 | orchestrator | }
2026-02-08 05:45:37.646902 | orchestrator | changed: [testbed-node-4] => {
2026-02-08 05:45:37.646910 | orchestrator |  "msg": "Notifying handlers"
2026-02-08 05:45:37.646919 | orchestrator | }
2026-02-08 05:45:37.646929 | orchestrator | changed: [testbed-node-5] => {
2026-02-08 05:45:37.646938 | orchestrator
|  "msg": "Notifying handlers"
2026-02-08 05:45:37.646948 | orchestrator | }
2026-02-08 05:45:37.646958 | orchestrator |
2026-02-08 05:45:37.646968 | orchestrator | TASK [service-check-containers : Include tasks] ********************************
2026-02-08 05:45:37.646979 | orchestrator | Sunday 08 February 2026 05:45:37 +0000 (0:00:02.009) 0:00:26.596 *******
2026-02-08 05:45:37.646998 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-08 05:46:09.094863 | orchestrator | skipping: [testbed-node-0]
2026-02-08 05:46:09.094975 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-08 05:46:09.094994 | orchestrator | skipping: [testbed-node-1]
2026-02-08 05:46:09.095006 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-08 05:46:09.095044 | orchestrator | skipping: [testbed-node-2]
2026-02-08 05:46:09.095054 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-08 05:46:09.095064 | orchestrator | skipping: [testbed-node-3]
2026-02-08 05:46:09.095074 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-08 05:46:09.095084 | orchestrator | skipping: [testbed-node-4]
2026-02-08 05:46:09.095094 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-08 05:46:09.095103 | orchestrator | skipping: [testbed-node-5]
2026-02-08 05:46:09.095114 | orchestrator |
2026-02-08 05:46:09.095187 | orchestrator | TASK [ovn-controller : Create br-int bridge on OpenvSwitch] ********************
2026-02-08 05:46:09.095201 | orchestrator | Sunday 08 February 2026 05:45:40 +0000 (0:00:02.502) 0:00:29.099 *******
2026-02-08 05:46:09.095211 | orchestrator | ok: [testbed-node-1] 2026-02-08 05:46:09.095222 | orchestrator | ok: [testbed-node-3] 2026-02-08 05:46:09.095231 | orchestrator | ok: [testbed-node-2] 2026-02-08 05:46:09.095240 | orchestrator | ok: [testbed-node-4] 2026-02-08 05:46:09.095249 | orchestrator | ok: [testbed-node-0] 2026-02-08 05:46:09.095259 | orchestrator | ok: [testbed-node-5] 2026-02-08 05:46:09.095267 | orchestrator | 2026-02-08 05:46:09.095278 | orchestrator | TASK [ovn-controller : Configure OVN in OVSDB] ********************************* 2026-02-08 05:46:09.095288 | orchestrator | Sunday 08 February 2026 05:45:43 +0000 (0:00:03.761) 0:00:32.861 ******* 2026-02-08 05:46:09.095298 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.10'}) 2026-02-08 05:46:09.095309 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.12'}) 2026-02-08 05:46:09.095319 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.11'}) 2026-02-08 05:46:09.095328 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.13'}) 2026-02-08 05:46:09.095337 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.14'}) 2026-02-08 05:46:09.095347 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.15'}) 2026-02-08 05:46:09.095356 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-02-08 05:46:09.095366 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-02-08 05:46:09.095375 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-02-08 05:46:09.095385 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-02-08 05:46:09.095406 | orchestrator | ok: [testbed-node-0] => (item={'name': 
'ovn-encap-type', 'value': 'geneve'}) 2026-02-08 05:46:09.095434 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-02-08 05:46:09.095446 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:16641,tcp:192.168.16.11:16641,tcp:192.168.16.12:16641'}) 2026-02-08 05:46:09.095457 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:16641,tcp:192.168.16.11:16641,tcp:192.168.16.12:16641'}) 2026-02-08 05:46:09.095467 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:16641,tcp:192.168.16.11:16641,tcp:192.168.16.12:16641'}) 2026-02-08 05:46:09.095477 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:16641,tcp:192.168.16.11:16641,tcp:192.168.16.12:16641'}) 2026-02-08 05:46:09.095486 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:16641,tcp:192.168.16.11:16641,tcp:192.168.16.12:16641'}) 2026-02-08 05:46:09.095495 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:16641,tcp:192.168.16.11:16641,tcp:192.168.16.12:16641'}) 2026-02-08 05:46:09.095506 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-02-08 05:46:09.095518 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-02-08 05:46:09.095528 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-02-08 05:46:09.095538 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-02-08 05:46:09.095549 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-02-08 05:46:09.095559 | orchestrator | ok: 
[testbed-node-3] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-02-08 05:46:09.095570 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-02-08 05:46:09.095580 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-02-08 05:46:09.095587 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-02-08 05:46:09.095594 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-02-08 05:46:09.095601 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-02-08 05:46:09.095608 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-02-08 05:46:09.095616 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-02-08 05:46:09.095623 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-02-08 05:46:09.095630 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-02-08 05:46:09.095637 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-02-08 05:46:09.095644 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-02-08 05:46:09.095658 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2026-02-08 05:46:09.095666 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2026-02-08 05:46:09.095673 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2026-02-08 05:46:09.095680 | orchestrator | ok: [testbed-node-2] => (item={'name': 
'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2026-02-08 05:46:09.095693 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-02-08 05:46:09.095700 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2026-02-08 05:46:09.095709 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:89:18:56', 'state': 'present'}) 2026-02-08 05:46:09.095722 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:2f:fa:44', 'state': 'present'}) 2026-02-08 05:46:09.095729 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:71:3a:c3', 'state': 'present'}) 2026-02-08 05:46:09.095736 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:29:4a:9b', 'state': 'absent'}) 2026-02-08 05:46:09.095744 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:33:12:50', 'state': 'absent'}) 2026-02-08 05:46:09.095751 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2026-02-08 05:46:09.095765 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2026-02-08 05:48:58.092355 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2026-02-08 05:48:58.092469 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2026-02-08 05:48:58.092487 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2026-02-08 05:48:58.092501 | orchestrator | ok: 
[testbed-node-1] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2026-02-08 05:48:58.092513 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:52:c1:40', 'state': 'absent'}) 2026-02-08 05:48:58.092525 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2026-02-08 05:48:58.092536 | orchestrator | 2026-02-08 05:48:58.092549 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-02-08 05:48:58.092561 | orchestrator | Sunday 08 February 2026 05:46:05 +0000 (0:00:22.213) 0:00:55.075 ******* 2026-02-08 05:48:58.092572 | orchestrator | 2026-02-08 05:48:58.092584 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-02-08 05:48:58.092594 | orchestrator | Sunday 08 February 2026 05:46:06 +0000 (0:00:00.447) 0:00:55.522 ******* 2026-02-08 05:48:58.092605 | orchestrator | 2026-02-08 05:48:58.092616 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-02-08 05:48:58.092627 | orchestrator | Sunday 08 February 2026 05:46:06 +0000 (0:00:00.436) 0:00:55.959 ******* 2026-02-08 05:48:58.092638 | orchestrator | 2026-02-08 05:48:58.092649 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-02-08 05:48:58.092660 | orchestrator | Sunday 08 February 2026 05:46:07 +0000 (0:00:00.427) 0:00:56.386 ******* 2026-02-08 05:48:58.092671 | orchestrator | 2026-02-08 05:48:58.092682 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-02-08 05:48:58.092693 | orchestrator | Sunday 08 February 2026 05:46:07 +0000 (0:00:00.447) 0:00:56.834 ******* 2026-02-08 05:48:58.092704 | orchestrator | 2026-02-08 05:48:58.092715 | orchestrator | 
TASK [ovn-controller : Flush handlers] ***************************************** 2026-02-08 05:48:58.092726 | orchestrator | Sunday 08 February 2026 05:46:08 +0000 (0:00:00.438) 0:00:57.272 ******* 2026-02-08 05:48:58.092737 | orchestrator | 2026-02-08 05:48:58.092748 | orchestrator | RUNNING HANDLER [ovn-controller : Restart ovn-controller container] ************ 2026-02-08 05:48:58.092759 | orchestrator | Sunday 08 February 2026 05:46:09 +0000 (0:00:00.842) 0:00:58.114 ******* 2026-02-08 05:48:58.092794 | orchestrator | 2026-02-08 05:48:58.092806 | orchestrator | STILL ALIVE [task 'ovn-controller : Restart ovn-controller container' is running] *** 2026-02-08 05:48:58.092817 | orchestrator | changed: [testbed-node-4] 2026-02-08 05:48:58.092829 | orchestrator | changed: [testbed-node-3] 2026-02-08 05:48:58.092840 | orchestrator | changed: [testbed-node-5] 2026-02-08 05:48:58.092852 | orchestrator | changed: [testbed-node-1] 2026-02-08 05:48:58.092866 | orchestrator | changed: [testbed-node-2] 2026-02-08 05:48:58.092879 | orchestrator | changed: [testbed-node-0] 2026-02-08 05:48:58.092892 | orchestrator | 2026-02-08 05:48:58.092905 | orchestrator | PLAY [Apply role ovn-db] ******************************************************* 2026-02-08 05:48:58.092918 | orchestrator | 2026-02-08 05:48:58.092931 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2026-02-08 05:48:58.092944 | orchestrator | Sunday 08 February 2026 05:48:21 +0000 (0:02:12.098) 0:03:10.213 ******* 2026-02-08 05:48:58.092973 | orchestrator | included: /ansible/roles/ovn-db/tasks/upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-08 05:48:58.092986 | orchestrator | 2026-02-08 05:48:58.092999 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2026-02-08 05:48:58.093012 | orchestrator | Sunday 08 February 2026 05:48:23 +0000 (0:00:01.915) 0:03:12.129 ******* 2026-02-08 05:48:58.093025 | 
orchestrator | included: /ansible/roles/ovn-db/tasks/lookup_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-08 05:48:58.093038 | orchestrator | 2026-02-08 05:48:58.093051 | orchestrator | TASK [ovn-db : Checking for any existing OVN DB container volumes] ************* 2026-02-08 05:48:58.093063 | orchestrator | Sunday 08 February 2026 05:48:25 +0000 (0:00:01.973) 0:03:14.102 ******* 2026-02-08 05:48:58.093076 | orchestrator | ok: [testbed-node-1] 2026-02-08 05:48:58.093090 | orchestrator | ok: [testbed-node-2] 2026-02-08 05:48:58.093102 | orchestrator | ok: [testbed-node-0] 2026-02-08 05:48:58.093115 | orchestrator | 2026-02-08 05:48:58.093128 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB volume availability] *************** 2026-02-08 05:48:58.093140 | orchestrator | Sunday 08 February 2026 05:48:27 +0000 (0:00:02.064) 0:03:16.167 ******* 2026-02-08 05:48:58.093152 | orchestrator | ok: [testbed-node-0] 2026-02-08 05:48:58.093165 | orchestrator | ok: [testbed-node-1] 2026-02-08 05:48:58.093178 | orchestrator | ok: [testbed-node-2] 2026-02-08 05:48:58.093191 | orchestrator | 2026-02-08 05:48:58.093203 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB volume availability] *************** 2026-02-08 05:48:58.093214 | orchestrator | Sunday 08 February 2026 05:48:28 +0000 (0:00:01.433) 0:03:17.601 ******* 2026-02-08 05:48:58.093225 | orchestrator | ok: [testbed-node-0] 2026-02-08 05:48:58.093236 | orchestrator | ok: [testbed-node-1] 2026-02-08 05:48:58.093246 | orchestrator | ok: [testbed-node-2] 2026-02-08 05:48:58.093281 | orchestrator | 2026-02-08 05:48:58.093295 | orchestrator | TASK [ovn-db : Establish whether the OVN NB cluster has already existed] ******* 2026-02-08 05:48:58.093306 | orchestrator | Sunday 08 February 2026 05:48:29 +0000 (0:00:01.475) 0:03:19.076 ******* 2026-02-08 05:48:58.093317 | orchestrator | ok: [testbed-node-0] 2026-02-08 05:48:58.093328 | orchestrator | ok: [testbed-node-1] 2026-02-08 
05:48:58.093339 | orchestrator | ok: [testbed-node-2] 2026-02-08 05:48:58.093349 | orchestrator | 2026-02-08 05:48:58.093360 | orchestrator | TASK [ovn-db : Establish whether the OVN SB cluster has already existed] ******* 2026-02-08 05:48:58.093371 | orchestrator | Sunday 08 February 2026 05:48:31 +0000 (0:00:01.664) 0:03:20.740 ******* 2026-02-08 05:48:58.093398 | orchestrator | ok: [testbed-node-0] 2026-02-08 05:48:58.093410 | orchestrator | ok: [testbed-node-1] 2026-02-08 05:48:58.093421 | orchestrator | ok: [testbed-node-2] 2026-02-08 05:48:58.093431 | orchestrator | 2026-02-08 05:48:58.093442 | orchestrator | TASK [ovn-db : Check if running on all OVN NB DB hosts] ************************ 2026-02-08 05:48:58.093453 | orchestrator | Sunday 08 February 2026 05:48:32 +0000 (0:00:01.328) 0:03:22.069 ******* 2026-02-08 05:48:58.093464 | orchestrator | skipping: [testbed-node-0] 2026-02-08 05:48:58.093476 | orchestrator | skipping: [testbed-node-1] 2026-02-08 05:48:58.093487 | orchestrator | skipping: [testbed-node-2] 2026-02-08 05:48:58.093498 | orchestrator | 2026-02-08 05:48:58.093517 | orchestrator | TASK [ovn-db : Check OVN NB service port liveness] ***************************** 2026-02-08 05:48:58.093528 | orchestrator | Sunday 08 February 2026 05:48:34 +0000 (0:00:01.419) 0:03:23.488 ******* 2026-02-08 05:48:58.093539 | orchestrator | ok: [testbed-node-1] 2026-02-08 05:48:58.093550 | orchestrator | ok: [testbed-node-0] 2026-02-08 05:48:58.093561 | orchestrator | ok: [testbed-node-2] 2026-02-08 05:48:58.093571 | orchestrator | 2026-02-08 05:48:58.093582 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB service port liveness] ************* 2026-02-08 05:48:58.093593 | orchestrator | Sunday 08 February 2026 05:48:36 +0000 (0:00:01.900) 0:03:25.389 ******* 2026-02-08 05:48:58.093604 | orchestrator | ok: [testbed-node-0] 2026-02-08 05:48:58.093614 | orchestrator | ok: [testbed-node-1] 2026-02-08 05:48:58.093625 | orchestrator | ok: [testbed-node-2] 
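The `ovn-db` tasks above probe each OVN NB/SB database host and then divide the hosts by their raft leader/follower role before touching the cluster. A minimal sketch of that classification step, assuming the `Role:` line format that `ovs-appctl cluster/status` emits (the helper names and sample status text here are illustrative, not taken from the playbook):

```python
# Classify OVN DB hosts as leader/follower from cluster/status-style output.
# The status strings below are illustrative samples, not captured output.
def parse_cluster_role(status_text: str) -> str:
    """Return the raft role ('leader' or 'follower') from a status dump."""
    for line in status_text.splitlines():
        if line.strip().startswith("Role:"):
            return line.split(":", 1)[1].strip()
    return "unknown"

def divide_by_role(statuses: dict) -> dict:
    """Group hosts by role, mirroring the 'Divide hosts by ... role' tasks."""
    groups = {"leader": [], "follower": []}
    for host, text in statuses.items():
        groups.setdefault(parse_cluster_role(text), []).append(host)
    return groups

statuses = {
    "testbed-node-0": "Role: leader\nTerm: 4",
    "testbed-node-1": "Role: follower\nTerm: 4",
    "testbed-node-2": "Role: follower\nTerm: 4",
}
groups = divide_by_role(statuses)
```

With exactly one leader found, the subsequent "Fail on existing OVN ... cluster with no leader" guards are skipped, which matches the `skipping:` results in the log.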
2026-02-08 05:48:58.093636 | orchestrator | 2026-02-08 05:48:58.093647 | orchestrator | TASK [ovn-db : Get OVN NB database information] ******************************** 2026-02-08 05:48:58.093658 | orchestrator | Sunday 08 February 2026 05:48:37 +0000 (0:00:01.616) 0:03:27.006 ******* 2026-02-08 05:48:58.093669 | orchestrator | ok: [testbed-node-1] 2026-02-08 05:48:58.093679 | orchestrator | ok: [testbed-node-0] 2026-02-08 05:48:58.093690 | orchestrator | ok: [testbed-node-2] 2026-02-08 05:48:58.093701 | orchestrator | 2026-02-08 05:48:58.093712 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB leader/follower role] ************** 2026-02-08 05:48:58.093723 | orchestrator | Sunday 08 February 2026 05:48:39 +0000 (0:00:01.861) 0:03:28.867 ******* 2026-02-08 05:48:58.093734 | orchestrator | ok: [testbed-node-0] 2026-02-08 05:48:58.093744 | orchestrator | ok: [testbed-node-1] 2026-02-08 05:48:58.093755 | orchestrator | ok: [testbed-node-2] 2026-02-08 05:48:58.093765 | orchestrator | 2026-02-08 05:48:58.093776 | orchestrator | TASK [ovn-db : Fail on existing OVN NB cluster with no leader] ***************** 2026-02-08 05:48:58.093787 | orchestrator | Sunday 08 February 2026 05:48:41 +0000 (0:00:01.377) 0:03:30.245 ******* 2026-02-08 05:48:58.093798 | orchestrator | skipping: [testbed-node-0] 2026-02-08 05:48:58.093809 | orchestrator | skipping: [testbed-node-1] 2026-02-08 05:48:58.093820 | orchestrator | skipping: [testbed-node-2] 2026-02-08 05:48:58.093830 | orchestrator | 2026-02-08 05:48:58.093841 | orchestrator | TASK [ovn-db : Check if running on all OVN SB DB hosts] ************************ 2026-02-08 05:48:58.093852 | orchestrator | Sunday 08 February 2026 05:48:42 +0000 (0:00:01.413) 0:03:31.659 ******* 2026-02-08 05:48:58.093863 | orchestrator | skipping: [testbed-node-0] 2026-02-08 05:48:58.093874 | orchestrator | skipping: [testbed-node-1] 2026-02-08 05:48:58.093885 | orchestrator | skipping: [testbed-node-2] 2026-02-08 05:48:58.093896 | 
orchestrator | 2026-02-08 05:48:58.093907 | orchestrator | TASK [ovn-db : Check OVN SB service port liveness] ***************************** 2026-02-08 05:48:58.093918 | orchestrator | Sunday 08 February 2026 05:48:43 +0000 (0:00:01.409) 0:03:33.068 ******* 2026-02-08 05:48:58.093929 | orchestrator | ok: [testbed-node-0] 2026-02-08 05:48:58.093939 | orchestrator | ok: [testbed-node-1] 2026-02-08 05:48:58.093950 | orchestrator | ok: [testbed-node-2] 2026-02-08 05:48:58.093961 | orchestrator | 2026-02-08 05:48:58.093972 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB service port liveness] ************* 2026-02-08 05:48:58.093983 | orchestrator | Sunday 08 February 2026 05:48:45 +0000 (0:00:01.808) 0:03:34.877 ******* 2026-02-08 05:48:58.093994 | orchestrator | ok: [testbed-node-0] 2026-02-08 05:48:58.094005 | orchestrator | ok: [testbed-node-1] 2026-02-08 05:48:58.094068 | orchestrator | ok: [testbed-node-2] 2026-02-08 05:48:58.094083 | orchestrator | 2026-02-08 05:48:58.094099 | orchestrator | TASK [ovn-db : Get OVN SB database information] ******************************** 2026-02-08 05:48:58.094111 | orchestrator | Sunday 08 February 2026 05:48:47 +0000 (0:00:01.480) 0:03:36.358 ******* 2026-02-08 05:48:58.094121 | orchestrator | ok: [testbed-node-0] 2026-02-08 05:48:58.094132 | orchestrator | ok: [testbed-node-1] 2026-02-08 05:48:58.094143 | orchestrator | ok: [testbed-node-2] 2026-02-08 05:48:58.094164 | orchestrator | 2026-02-08 05:48:58.094184 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB leader/follower role] ************** 2026-02-08 05:48:58.094203 | orchestrator | Sunday 08 February 2026 05:48:49 +0000 (0:00:02.130) 0:03:38.489 ******* 2026-02-08 05:48:58.094235 | orchestrator | ok: [testbed-node-0] 2026-02-08 05:48:58.094253 | orchestrator | ok: [testbed-node-1] 2026-02-08 05:48:58.094295 | orchestrator | ok: [testbed-node-2] 2026-02-08 05:48:58.094313 | orchestrator | 2026-02-08 05:48:58.094329 | orchestrator | TASK [ovn-db : 
Fail on existing OVN SB cluster with no leader] ***************** 2026-02-08 05:48:58.094346 | orchestrator | Sunday 08 February 2026 05:48:50 +0000 (0:00:01.423) 0:03:39.912 ******* 2026-02-08 05:48:58.094362 | orchestrator | skipping: [testbed-node-0] 2026-02-08 05:48:58.094379 | orchestrator | skipping: [testbed-node-1] 2026-02-08 05:48:58.094396 | orchestrator | skipping: [testbed-node-2] 2026-02-08 05:48:58.094414 | orchestrator | 2026-02-08 05:48:58.094430 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2026-02-08 05:48:58.094449 | orchestrator | Sunday 08 February 2026 05:48:52 +0000 (0:00:01.347) 0:03:41.260 ******* 2026-02-08 05:48:58.094468 | orchestrator | skipping: [testbed-node-0] 2026-02-08 05:48:58.094487 | orchestrator | skipping: [testbed-node-1] 2026-02-08 05:48:58.094505 | orchestrator | skipping: [testbed-node-2] 2026-02-08 05:48:58.094518 | orchestrator | 2026-02-08 05:48:58.094529 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ****************************** 2026-02-08 05:48:58.094539 | orchestrator | Sunday 08 February 2026 05:48:53 +0000 (0:00:01.687) 0:03:42.947 ******* 2026-02-08 05:48:58.094566 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-northd:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-08 05:49:04.261863 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 
'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-northd:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-08 05:49:04.261992 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-northd:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-08 05:49:04.262011 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-08 05:49:04.262084 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-08 05:49:04.262136 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-08 05:49:04.262149 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-08 05:49:04.262162 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-relay:25.3.1.20251208', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-08 05:49:04.262196 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 
'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-08 05:49:04.262209 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-relay:25.3.1.20251208', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-08 05:49:04.262221 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-08 05:49:04.262232 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-relay:25.3.1.20251208', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-08 05:49:04.262247 | orchestrator | 
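Each of the config tasks above loops over the same service dictionary (`ovn-northd`, `ovn-nb-db`, `ovn-sb-db`, `ovn-sb-db-relay`) and emits `ok` or `skipping` per item depending on whether the service applies to the host; here the relay items are skipped on all three nodes. A simplified sketch of that per-host filter, with an invented `host_groups` set standing in for the real inventory-group lookup:

```python
# Trimmed-down service map in the same shape as the loop items in the log.
services = {
    "ovn-northd": {"container_name": "ovn_northd", "group": "ovn-northd", "enabled": True},
    "ovn-nb-db": {"container_name": "ovn_nb_db", "group": "ovn-nb-db", "enabled": True},
    "ovn-sb-db": {"container_name": "ovn_sb_db", "group": "ovn-sb-db", "enabled": True},
    "ovn-sb-db-relay": {"container_name": "ovn_sb_db_relay", "group": "ovn-sb-db-relay", "enabled": True},
}

def services_for_host(services: dict, host_groups: set):
    """Yield (name, spec) pairs the host should configure; items that are
    disabled or whose group the host is not in are skipped."""
    for name, spec in services.items():
        if spec["enabled"] and spec["group"] in host_groups:
            yield name, spec

# A node in the DB/northd groups but not in the relay group (assumption
# matching the skipping: results for ovn-sb-db-relay in the log above).
host_groups = {"ovn-northd", "ovn-nb-db", "ovn-sb-db"}
selected = [name for name, _ in services_for_host(services, host_groups)]
```

The same filter runs once per config task ("Ensuring config directories exist", "Copying over config.json files", ...), which is why the identical item dicts repeat in every task's output.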
2026-02-08 05:49:04.262296 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ********************
2026-02-08 05:49:04.262333 | orchestrator | Sunday 08 February 2026 05:48:58 +0000 (0:00:04.216) 0:03:47.164 *******
2026-02-08 05:49:04.262360 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-northd:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-08 05:49:04.262376 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-northd:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-08 05:49:04.262391 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-northd:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-08 05:49:04.262404 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-08 05:49:04.262429 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-08 05:49:19.034592 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-08 05:49:19.034742 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-08 05:49:19.034796 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-relay:25.3.1.20251208', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-08 05:49:19.034827 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-08 05:49:19.034840 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-relay:25.3.1.20251208', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-08 05:49:19.034851 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-08 05:49:19.034863 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-relay:25.3.1.20251208', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-08 05:49:19.034875 | orchestrator |
2026-02-08 05:49:19.034889 | orchestrator | TASK [ovn-db : Ensure configuration for relays exists] *************************
2026-02-08 05:49:19.034901 | orchestrator | Sunday 08 February 2026 05:49:04 +0000 (0:00:06.174) 0:03:53.339 *******
2026-02-08 05:49:19.034913 | orchestrator | included: /ansible/roles/ovn-db/tasks/config-relay.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=1)
2026-02-08 05:49:19.034924 | orchestrator |
2026-02-08 05:49:19.034935 | orchestrator | TASK [ovn-db : Ensuring config directories exist for OVN relay containers] *****
2026-02-08 05:49:19.034946 | orchestrator | Sunday 08 February 2026 05:49:06 +0000 (0:00:02.087) 0:03:55.426 *******
2026-02-08 05:49:19.034957 | orchestrator | changed: [testbed-node-0]
2026-02-08 05:49:19.034969 | orchestrator | changed: [testbed-node-1]
2026-02-08 05:49:19.034997 | orchestrator | changed: [testbed-node-2]
2026-02-08 05:49:19.035009 | orchestrator |
2026-02-08 05:49:19.035020 | orchestrator | TASK [ovn-db : Copying over config.json files for OVN relay services] **********
2026-02-08 05:49:19.035031 | orchestrator | Sunday 08 February 2026 05:49:08 +0000 (0:00:01.765) 0:03:57.192 *******
2026-02-08 05:49:19.035042 | orchestrator | changed: [testbed-node-2]
2026-02-08 05:49:19.035053 | orchestrator | changed: [testbed-node-0]
2026-02-08 05:49:19.035064 | orchestrator | changed: [testbed-node-1]
2026-02-08 05:49:19.035075 | orchestrator |
2026-02-08 05:49:19.035086 | orchestrator | TASK [ovn-db : Generate config files for OVN relay services] *******************
2026-02-08 05:49:19.035108 | orchestrator | Sunday 08 February 2026 05:49:10 +0000 (0:00:02.662) 0:03:59.854 *******
2026-02-08 05:49:19.035122 | orchestrator | changed: [testbed-node-0]
2026-02-08 05:49:19.035135 | orchestrator | changed: [testbed-node-1]
2026-02-08 05:49:19.035149 | orchestrator | changed: [testbed-node-2]
2026-02-08 05:49:19.035162 | orchestrator |
2026-02-08 05:49:19.035175 | orchestrator | TASK [service-check-containers : ovn_db | Check containers] ********************
2026-02-08 05:49:19.035188 | orchestrator | Sunday 08 February 2026 05:49:13 +0000 (0:00:02.793) 0:04:02.648 *******
2026-02-08 05:49:19.035203 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-northd:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-08 05:49:19.035224 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-northd:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-08 05:49:19.035238 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-northd:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-08 05:49:19.035252 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-08 05:49:19.035266 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-08 05:49:19.035382 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-08 05:49:19.035414 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-08 05:49:23.664159 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-relay:25.3.1.20251208', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-08 05:49:23.664271 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-08 05:49:23.664412 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-relay:25.3.1.20251208', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-08 05:49:23.664426 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-08 05:49:23.664438 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-relay:25.3.1.20251208', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-08 05:49:23.664449 | orchestrator |
2026-02-08 05:49:23.664461 | orchestrator | TASK [service-check-containers : ovn_db | Notify handlers to restart containers] ***
2026-02-08 05:49:23.664474 | orchestrator | Sunday 08 February 2026 05:49:19 +0000 (0:00:05.445) 0:04:08.093 *******
2026-02-08 05:49:23.664486 | orchestrator | changed: [testbed-node-0] => {
2026-02-08 05:49:23.664498 | orchestrator |  "msg": "Notifying handlers"
2026-02-08 05:49:23.664509 | orchestrator | }
2026-02-08 05:49:23.664520 | orchestrator | changed: [testbed-node-1] => {
2026-02-08 05:49:23.664530 | orchestrator |  "msg": "Notifying handlers"
2026-02-08 05:49:23.664541 | orchestrator | }
2026-02-08 05:49:23.664550 | orchestrator | changed: [testbed-node-2] => {
2026-02-08 05:49:23.664561 | orchestrator |  "msg": "Notifying handlers"
2026-02-08 05:49:23.664571 | orchestrator | }
2026-02-08 05:49:23.664581 | orchestrator |
2026-02-08 05:49:23.664620 | orchestrator | TASK [service-check-containers : Include tasks] ********************************
2026-02-08 05:49:23.664631 | orchestrator | Sunday 08 February 2026 05:49:20 +0000 (0:00:01.444) 0:04:09.537 *******
2026-02-08 05:49:23.664644 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-northd:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-08 05:49:23.664675 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-08 05:49:23.664687 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-08 05:49:23.664698 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-northd:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-08 05:49:23.664717 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-08 05:49:23.664729 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-08 05:49:23.664741 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-northd:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-08 05:49:23.664824 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-08 05:49:23.664839 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-08 05:49:23.664861 | orchestrator | included: /ansible/roles/service-check-containers/tasks/iterated.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-relay:25.3.1.20251208', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-08 05:50:52.949571 | orchestrator |
2026-02-08 05:50:52.949694 | orchestrator | TASK [service-check-containers : ovn_db | Check containers with iteration] *****
2026-02-08 05:50:52.949712 | orchestrator | Sunday 08 February 2026 05:49:23 +0000 (0:00:03.193) 0:04:12.731 *******
2026-02-08 05:50:52.949723 | orchestrator | changed: [testbed-node-0] => (item=[1])
2026-02-08 05:50:52.949734 | orchestrator | changed: [testbed-node-1] => (item=[1])
2026-02-08 05:50:52.949744 | orchestrator | changed: [testbed-node-2] => (item=[1])
2026-02-08 05:50:52.949754 | orchestrator |
2026-02-08 05:50:52.949765 | orchestrator | TASK [service-check-containers : ovn_db | Notify handlers to restart containers] ***
2026-02-08 05:50:52.949775 | orchestrator | Sunday 08 February 2026 05:49:25 +0000 (0:00:02.319) 0:04:15.051 *******
2026-02-08 05:50:52.949785 | orchestrator | changed: [testbed-node-0] => {
2026-02-08 05:50:52.949796 | orchestrator |  "msg": "Notifying handlers"
2026-02-08 05:50:52.949806 | orchestrator | }
2026-02-08 05:50:52.949816 | orchestrator | changed: [testbed-node-1] => {
2026-02-08 05:50:52.949826 | orchestrator |  "msg": "Notifying handlers"
2026-02-08 05:50:52.949836 | orchestrator | }
2026-02-08 05:50:52.949846 | orchestrator | changed: [testbed-node-2] => {
2026-02-08 05:50:52.949856 | orchestrator |  "msg": "Notifying handlers"
2026-02-08 05:50:52.949866 | orchestrator | }
2026-02-08 05:50:52.949875 | orchestrator |
2026-02-08 05:50:52.949885 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2026-02-08 05:50:52.949910 | orchestrator | Sunday 08 February 2026 05:49:27 +0000 (0:00:01.496) 0:04:16.548 *******
2026-02-08 05:50:52.949921 | orchestrator |
2026-02-08 05:50:52.949931 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2026-02-08 05:50:52.949941 | orchestrator | Sunday 08 February 2026 05:49:27 +0000 (0:00:00.436) 0:04:16.984 *******
2026-02-08 05:50:52.949951 | orchestrator |
2026-02-08 05:50:52.949960 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2026-02-08 05:50:52.949970 | orchestrator | Sunday 08 February 2026 05:49:28 +0000 (0:00:00.452) 0:04:17.437 *******
2026-02-08 05:50:52.949980 | orchestrator |
2026-02-08 05:50:52.949990 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] *************************
2026-02-08 05:50:52.950072 | orchestrator | Sunday 08 February 2026 05:49:29 +0000 (0:00:01.068) 0:04:18.506 *******
2026-02-08 05:50:52.950085 | orchestrator | changed: [testbed-node-1]
2026-02-08 05:50:52.950094 | orchestrator | changed: [testbed-node-2]
2026-02-08 05:50:52.950104 | orchestrator | changed: [testbed-node-0]
2026-02-08 05:50:52.950115 | orchestrator |
2026-02-08 05:50:52.950127 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] *************************
2026-02-08 05:50:52.950138 | orchestrator | Sunday 08 February 2026 05:49:46 +0000 (0:00:17.293) 0:04:35.799 *******
2026-02-08 05:50:52.950149 | orchestrator | changed: [testbed-node-0]
2026-02-08 05:50:52.950160 | orchestrator | changed: [testbed-node-2]
2026-02-08 05:50:52.950172 | orchestrator | changed: [testbed-node-1]
2026-02-08 05:50:52.950182 | orchestrator |
2026-02-08 05:50:52.950193 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db-relay container] *******************
2026-02-08 05:50:52.950204 | orchestrator | Sunday 08 February 2026 05:50:04 +0000 (0:00:17.490) 0:04:53.289 *******
2026-02-08 05:50:52.950215 | orchestrator | changed: [testbed-node-1] => (item=1)
2026-02-08 05:50:52.950226 | orchestrator | changed: [testbed-node-0] => (item=1)
2026-02-08 05:50:52.950237 | orchestrator | changed: [testbed-node-2] => (item=1)
2026-02-08 05:50:52.950248 | orchestrator |
2026-02-08 05:50:52.950259 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************
2026-02-08 05:50:52.950271 | orchestrator | Sunday 08 February 2026 05:50:15 +0000 (0:00:11.043) 0:05:04.333 *******
2026-02-08 05:50:52.950282 | orchestrator | changed: [testbed-node-1]
2026-02-08 05:50:52.950293 | orchestrator | changed: [testbed-node-2]
2026-02-08 05:50:52.950305 | orchestrator | changed: [testbed-node-0]
2026-02-08 05:50:52.950317 | orchestrator |
2026-02-08 05:50:52.950329 | orchestrator | TASK [ovn-db : Wait for leader election] ***************************************
2026-02-08 05:50:52.950361 | orchestrator | Sunday 08 February 2026 05:50:32 +0000 (0:00:17.351) 0:05:21.685 *******
2026-02-08 05:50:52.950373 | orchestrator | Pausing for 5 seconds
2026-02-08 05:50:52.950385 | orchestrator | ok: [testbed-node-0]
2026-02-08 05:50:52.950396 | orchestrator |
2026-02-08 05:50:52.950407 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ******************************
2026-02-08 05:50:52.950418 | orchestrator | Sunday 08 February 2026 05:50:38 +0000 (0:00:06.180) 0:05:27.865 *******
2026-02-08 05:50:52.950429 | orchestrator | ok: [testbed-node-0]
2026-02-08 05:50:52.950440 | orchestrator | ok: [testbed-node-1]
2026-02-08 05:50:52.950452 | orchestrator | ok: [testbed-node-2]
2026-02-08 05:50:52.950462 | orchestrator |
2026-02-08 05:50:52.950473 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] ***************************
2026-02-08 05:50:52.950483 | orchestrator | Sunday 08 February 2026 05:50:40 +0000 (0:00:01.827) 0:05:29.693 *******
2026-02-08 05:50:52.950493 | orchestrator | skipping: [testbed-node-1]
2026-02-08 05:50:52.950502 | orchestrator | skipping: [testbed-node-2]
2026-02-08 05:50:52.950512 | orchestrator | changed: [testbed-node-0]
2026-02-08 05:50:52.950521 | orchestrator |
2026-02-08 05:50:52.950531 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ******************************
2026-02-08 05:50:52.950541 | orchestrator | Sunday 08 February 2026 05:50:42 +0000 (0:00:01.815) 0:05:31.394 *******
2026-02-08 05:50:52.950550 | orchestrator | ok: [testbed-node-0]
2026-02-08 05:50:52.950560 | orchestrator | ok: [testbed-node-1]
2026-02-08 05:50:52.950569 | orchestrator | ok: [testbed-node-2]
2026-02-08 05:50:52.950579 | orchestrator |
2026-02-08 05:50:52.950588 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] ***************************
2026-02-08 05:50:52.950598 | orchestrator | Sunday 08 February 2026 05:50:44 +0000 (0:00:01.705) 0:05:33.210 *******
2026-02-08 05:50:52.950607 | orchestrator | skipping: [testbed-node-1]
2026-02-08 05:50:52.950617 | orchestrator | changed: [testbed-node-0]
2026-02-08 05:50:52.950627 | orchestrator | skipping: [testbed-node-2]
2026-02-08 05:50:52.950636 | orchestrator |
2026-02-08 05:50:52.950646 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] *********************************************
2026-02-08 05:50:52.950655 | orchestrator | Sunday 08 February 2026 05:50:45 +0000 (0:00:01.705) 0:05:34.915 *******
2026-02-08 05:50:52.950673 | orchestrator | ok: [testbed-node-0]
2026-02-08 05:50:52.950683 | orchestrator | ok: [testbed-node-1]
2026-02-08 05:50:52.950692 | orchestrator | ok: [testbed-node-2]
2026-02-08 05:50:52.950701 | orchestrator |
2026-02-08 05:50:52.950711 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] *********************************************
2026-02-08 05:50:52.950737 | orchestrator | Sunday 08 February 2026 05:50:47 +0000 (0:00:01.809) 0:05:36.725 *******
2026-02-08 05:50:52.950748 | orchestrator | ok: [testbed-node-0]
2026-02-08 05:50:52.950757 | orchestrator | ok: [testbed-node-1]
2026-02-08 05:50:52.950767 | orchestrator | ok: [testbed-node-2]
2026-02-08 05:50:52.950776 | orchestrator |
2026-02-08 05:50:52.950786 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db-relay] ***************************************
2026-02-08 05:50:52.950795 | orchestrator | Sunday 08 February 2026 05:50:49 +0000 (0:00:01.850) 0:05:38.576 *******
2026-02-08 05:50:52.950805 | orchestrator | ok: [testbed-node-0] => (item=1)
2026-02-08 05:50:52.950815 | orchestrator | ok: [testbed-node-1] => (item=1)
2026-02-08 05:50:52.950825 | orchestrator | ok: [testbed-node-2] => (item=1)
2026-02-08 05:50:52.950834 | orchestrator |
2026-02-08 05:50:52.950844 | orchestrator | PLAY RECAP *********************************************************************
2026-02-08 05:50:52.950855 | orchestrator | testbed-node-0 : ok=50  changed=17  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-02-08 05:50:52.950866 | orchestrator | testbed-node-1 : ok=47  changed=15  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0
2026-02-08 05:50:52.950882 | orchestrator | testbed-node-2 : ok=47  changed=15  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0
2026-02-08 05:50:52.950892 | orchestrator | testbed-node-3 : ok=12  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-02-08 05:50:52.950902 | orchestrator | testbed-node-4 : ok=12  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-02-08 05:50:52.950912 | orchestrator | testbed-node-5 : ok=12  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-02-08 05:50:52.950921 | orchestrator |
2026-02-08 05:50:52.950931 | orchestrator |
2026-02-08 05:50:52.950941 | orchestrator | TASKS RECAP ********************************************************************
2026-02-08 05:50:52.950951 | orchestrator | Sunday 08 February 2026 05:50:52 +0000 (0:00:03.032) 0:05:41.608 *******
2026-02-08 05:50:52.950960 | orchestrator | ===============================================================================
2026-02-08 05:50:52.950970 | orchestrator | ovn-controller : Restart ovn-controller container --------------------- 132.10s
2026-02-08 05:50:52.950980 | orchestrator | ovn-controller : Configure OVN in OVSDB -------------------------------- 22.21s
2026-02-08 05:50:52.950989 | orchestrator | ovn-db : Restart ovn-sb-db container ----------------------------------- 17.49s
2026-02-08 05:50:52.950999 | orchestrator | ovn-db : Restart ovn-northd container ---------------------------------- 17.35s
2026-02-08 05:50:52.951009 | orchestrator | ovn-db : Restart ovn-nb-db container ----------------------------------- 17.29s
2026-02-08 05:50:52.951018 | orchestrator | ovn-db : Restart ovn-sb-db-relay container ----------------------------- 11.04s
2026-02-08 05:50:52.951028 | orchestrator | ovn-db : Wait for leader election --------------------------------------- 6.18s
2026-02-08 05:50:52.951037 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 6.17s
2026-02-08 05:50:52.951047 | orchestrator | service-check-containers : ovn_db | Check containers -------------------- 5.45s
2026-02-08 05:50:52.951056 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 4.22s
2026-02-08 05:50:52.951066 | orchestrator | ovn-controller : Create br-int bridge on OpenvSwitch -------------------- 3.76s
2026-02-08 05:50:52.951075 | orchestrator | Group hosts based on Kolla action --------------------------------------- 3.63s
2026-02-08 05:50:52.951085 | orchestrator | ovn-controller : include_tasks ------------------------------------------ 3.59s
2026-02-08 05:50:52.951100 | orchestrator | service-check-containers : Include tasks -------------------------------- 3.19s
2026-02-08 05:50:52.951110 | orchestrator | ovn-controller : Copying over systemd override -------------------------- 3.08s
2026-02-08 05:50:52.951119 | orchestrator | ovn-controller : Flush handlers ----------------------------------------- 3.04s
2026-02-08 05:50:52.951129 | orchestrator | ovn-db : Wait for ovn-sb-db-relay --------------------------------------- 3.03s
2026-02-08 05:50:52.951138 | orchestrator | ovn-db : Generate config files for OVN relay services ------------------- 2.79s
2026-02-08 05:50:52.951148 | orchestrator | ovn-db : Copying over config.json files for OVN relay services ---------- 2.66s
2026-02-08 05:50:52.951157 | orchestrator | service-check-containers : ovn_controller | Check containers ------------ 2.63s
2026-02-08 05:50:53.267625 | orchestrator | + [[ false == \f\a\l\s\e ]]
2026-02-08 05:50:53.267724 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]]
2026-02-08 05:50:53.267748 | orchestrator | + sh -c /opt/configuration/scripts/upgrade/100-ceph-with-ansible.sh
2026-02-08 05:50:53.274678 | orchestrator | + set -e
2026-02-08 05:50:53.274746 | orchestrator | + source /opt/configuration/scripts/include.sh
2026-02-08 05:50:53.274768 | orchestrator | ++ export INTERACTIVE=false
2026-02-08 05:50:53.274782 | orchestrator | ++ INTERACTIVE=false
2026-02-08 05:50:53.274801 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2026-02-08 05:50:53.274819 | orchestrator | ++ OSISM_APPLY_RETRY=1
2026-02-08 05:50:53.274838 | orchestrator | + osism apply ceph-rolling_update -e ireallymeanit=yes
2026-02-08 05:50:55.350125 | orchestrator | 2026-02-08 05:50:55 | INFO  | Task 28a194ab-585e-44bc-b345-ba4d9ea338e0 (ceph-rolling_update) was prepared for execution.
2026-02-08 05:50:55.350221 | orchestrator | 2026-02-08 05:50:55 | INFO  | It takes a moment until task 28a194ab-585e-44bc-b345-ba4d9ea338e0 (ceph-rolling_update) has been started and output is visible here.
2026-02-08 05:51:56.936647 | orchestrator | [WARNING]: Collection community.general does not support Ansible version
2026-02-08 05:51:56.936748 | orchestrator | 2.16.14
2026-02-08 05:51:56.936762 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_play_start) in callback plugin
2026-02-08 05:51:56.936773 | orchestrator | (): Expecting value: line 2 column 1 (char 1)
2026-02-08 05:51:56.936790 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_task_start) in callback plugin
2026-02-08 05:51:56.936798 | orchestrator | (): 'NoneType' object is not subscriptable
2026-02-08 05:51:56.936814 | orchestrator |
2026-02-08 05:51:56.936823 | orchestrator | PLAY [Confirm whether user really meant to upgrade the cluster] ****************
2026-02-08 05:51:56.936831 | orchestrator |
2026-02-08 05:51:56.936840 | orchestrator | TASK [Exit playbook, if user did not mean to upgrade cluster] ******************
2026-02-08 05:51:56.936849 | orchestrator | Sunday 08 February 2026 05:51:03 +0000 (0:00:01.315) 0:00:01.315 *******
2026-02-08 05:51:56.936857 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: rbdmirrors
2026-02-08 05:51:56.936866 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: nfss
2026-02-08 05:51:56.936889 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: clients
2026-02-08 05:51:56.936898 | orchestrator | skipping: [localhost]
2026-02-08 05:51:56.936908 | orchestrator |
2026-02-08 05:51:56.936916 | orchestrator | PLAY [Gather facts and check the init system] **********************************
2026-02-08 05:51:56.936925 | orchestrator |
2026-02-08 05:51:56.936933 | orchestrator | TASK [Gather facts on all Ceph hosts for following reference] ******************
2026-02-08 05:51:56.936941 | orchestrator | Sunday 08 February 2026 05:51:04 +0000 (0:00:00.911) 0:00:02.227 *******
2026-02-08 05:51:56.936950 | orchestrator | ok: [testbed-node-0] => {
2026-02-08 05:51:56.936958 | orchestrator |  "msg": "gather facts on all Ceph hosts for following reference"
2026-02-08 05:51:56.936967 | orchestrator | }
2026-02-08 05:51:56.936976 | orchestrator | ok: [testbed-node-1] => {
2026-02-08 05:51:56.937005 | orchestrator |  "msg": "gather facts on all Ceph hosts for following reference"
2026-02-08 05:51:56.937014 | orchestrator | }
2026-02-08 05:51:56.937023 | orchestrator | ok: [testbed-node-2] => {
2026-02-08 05:51:56.937031 | orchestrator |  "msg": "gather facts on all Ceph hosts for following reference"
2026-02-08 05:51:56.937040 | orchestrator | }
2026-02-08 05:51:56.937049 | orchestrator | ok: [testbed-node-3] => {
2026-02-08 05:51:56.937057 | orchestrator |  "msg": "gather facts on all Ceph hosts for following reference"
2026-02-08 05:51:56.937065 | orchestrator | }
2026-02-08 05:51:56.937073 | orchestrator | ok: [testbed-node-4] => {
2026-02-08 05:51:56.937082 | orchestrator |  "msg": "gather facts on all Ceph hosts for following reference"
2026-02-08 05:51:56.937090 | orchestrator | }
2026-02-08 05:51:56.937098 | orchestrator | ok: [testbed-node-5] => {
2026-02-08 05:51:56.937106 | orchestrator |  "msg": "gather facts on all Ceph hosts for following reference"
2026-02-08 05:51:56.937115 | orchestrator | }
2026-02-08 05:51:56.937123 | orchestrator | ok: [testbed-manager] => {
2026-02-08 05:51:56.937131 | orchestrator |  "msg": "gather facts on all Ceph hosts for following reference"
2026-02-08 05:51:56.937140 | orchestrator | }
2026-02-08 05:51:56.937148 | orchestrator |
2026-02-08 05:51:56.937157 | orchestrator | TASK [Gather facts] ************************************************************
2026-02-08 05:51:56.937166 | orchestrator | Sunday 08 February 2026 05:51:06 +0000 (0:00:02.686) 0:00:04.913 *******
2026-02-08 05:51:56.937174 | orchestrator | skipping: [testbed-node-0]
2026-02-08 05:51:56.937182 | orchestrator | skipping: [testbed-node-1]
2026-02-08 05:51:56.937190 | orchestrator | skipping: [testbed-node-2]
2026-02-08 05:51:56.937199 | orchestrator | skipping: [testbed-node-3]
2026-02-08 05:51:56.937208 | orchestrator | skipping: [testbed-node-4]
2026-02-08 05:51:56.937217 | orchestrator | skipping: [testbed-node-5]
2026-02-08 05:51:56.937225 | orchestrator | ok: [testbed-manager]
2026-02-08 05:51:56.937234 | orchestrator |
2026-02-08 05:51:56.937243 | orchestrator | TASK [Gather and delegate facts] ***********************************************
2026-02-08 05:51:56.937252 | orchestrator | Sunday 08 February 2026 05:51:12 +0000 (0:00:05.880) 0:00:10.794 *******
2026-02-08 05:51:56.937261 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-02-08 05:51:56.937270 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-02-08 05:51:56.937279 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-02-08 05:51:56.937288 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-08 05:51:56.937297 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-02-08 05:51:56.937306 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
2026-02-08 05:51:56.937315 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-08 05:51:56.937323 | orchestrator |
2026-02-08 05:51:56.937332 | orchestrator | TASK [Set_fact rolling_update]
************************************************* 2026-02-08 05:51:56.937341 | orchestrator | Sunday 08 February 2026 05:51:43 +0000 (0:00:31.118) 0:00:41.912 ******* 2026-02-08 05:51:56.937350 | orchestrator | ok: [testbed-node-0] 2026-02-08 05:51:56.937360 | orchestrator | ok: [testbed-node-1] 2026-02-08 05:51:56.937369 | orchestrator | ok: [testbed-node-2] 2026-02-08 05:51:56.937377 | orchestrator | ok: [testbed-node-3] 2026-02-08 05:51:56.937404 | orchestrator | ok: [testbed-node-4] 2026-02-08 05:51:56.937413 | orchestrator | ok: [testbed-node-5] 2026-02-08 05:51:56.937421 | orchestrator | ok: [testbed-manager] 2026-02-08 05:51:56.937430 | orchestrator | 2026-02-08 05:51:56.937439 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-02-08 05:51:56.937448 | orchestrator | Sunday 08 February 2026 05:51:44 +0000 (0:00:00.985) 0:00:42.897 ******* 2026-02-08 05:51:56.937472 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager 2026-02-08 05:51:56.937491 | orchestrator | 2026-02-08 05:51:56.937500 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-02-08 05:51:56.937508 | orchestrator | Sunday 08 February 2026 05:51:46 +0000 (0:00:01.953) 0:00:44.851 ******* 2026-02-08 05:51:56.937516 | orchestrator | ok: [testbed-node-0] 2026-02-08 05:51:56.937523 | orchestrator | ok: [testbed-node-1] 2026-02-08 05:51:56.937531 | orchestrator | ok: [testbed-node-2] 2026-02-08 05:51:56.937539 | orchestrator | ok: [testbed-node-3] 2026-02-08 05:51:56.937546 | orchestrator | ok: [testbed-node-4] 2026-02-08 05:51:56.937554 | orchestrator | ok: [testbed-node-5] 2026-02-08 05:51:56.937562 | orchestrator | ok: [testbed-manager] 2026-02-08 05:51:56.937569 | orchestrator | 2026-02-08 05:51:56.937578 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] 
***************************************** 2026-02-08 05:51:56.937586 | orchestrator | Sunday 08 February 2026 05:51:48 +0000 (0:00:01.406) 0:00:46.258 ******* 2026-02-08 05:51:56.937594 | orchestrator | ok: [testbed-node-0] 2026-02-08 05:51:56.937601 | orchestrator | ok: [testbed-node-1] 2026-02-08 05:51:56.937608 | orchestrator | ok: [testbed-node-2] 2026-02-08 05:51:56.937615 | orchestrator | ok: [testbed-node-3] 2026-02-08 05:51:56.937622 | orchestrator | ok: [testbed-node-4] 2026-02-08 05:51:56.937629 | orchestrator | ok: [testbed-node-5] 2026-02-08 05:51:56.937636 | orchestrator | ok: [testbed-manager] 2026-02-08 05:51:56.937643 | orchestrator | 2026-02-08 05:51:56.937649 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-02-08 05:51:56.937662 | orchestrator | Sunday 08 February 2026 05:51:48 +0000 (0:00:00.755) 0:00:47.013 ******* 2026-02-08 05:51:56.937670 | orchestrator | ok: [testbed-node-0] 2026-02-08 05:51:56.937677 | orchestrator | ok: [testbed-node-1] 2026-02-08 05:51:56.937685 | orchestrator | ok: [testbed-node-2] 2026-02-08 05:51:56.937692 | orchestrator | ok: [testbed-node-3] 2026-02-08 05:51:56.937700 | orchestrator | ok: [testbed-node-4] 2026-02-08 05:51:56.937708 | orchestrator | ok: [testbed-node-5] 2026-02-08 05:51:56.937716 | orchestrator | ok: [testbed-manager] 2026-02-08 05:51:56.937724 | orchestrator | 2026-02-08 05:51:56.937732 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-02-08 05:51:56.937740 | orchestrator | Sunday 08 February 2026 05:51:50 +0000 (0:00:01.377) 0:00:48.391 ******* 2026-02-08 05:51:56.937747 | orchestrator | ok: [testbed-node-0] 2026-02-08 05:51:56.937754 | orchestrator | ok: [testbed-node-1] 2026-02-08 05:51:56.937762 | orchestrator | ok: [testbed-node-2] 2026-02-08 05:51:56.937769 | orchestrator | ok: [testbed-node-3] 2026-02-08 05:51:56.937777 | orchestrator | ok: [testbed-node-4] 2026-02-08 05:51:56.937785 | 
orchestrator | ok: [testbed-node-5] 2026-02-08 05:51:56.937793 | orchestrator | ok: [testbed-manager] 2026-02-08 05:51:56.937801 | orchestrator | 2026-02-08 05:51:56.937809 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-02-08 05:51:56.937817 | orchestrator | Sunday 08 February 2026 05:51:51 +0000 (0:00:00.792) 0:00:49.183 ******* 2026-02-08 05:51:56.937825 | orchestrator | ok: [testbed-node-0] 2026-02-08 05:51:56.937833 | orchestrator | ok: [testbed-node-1] 2026-02-08 05:51:56.937842 | orchestrator | ok: [testbed-node-2] 2026-02-08 05:51:56.937851 | orchestrator | ok: [testbed-node-3] 2026-02-08 05:51:56.937858 | orchestrator | ok: [testbed-node-4] 2026-02-08 05:51:56.937865 | orchestrator | ok: [testbed-node-5] 2026-02-08 05:51:56.937873 | orchestrator | ok: [testbed-manager] 2026-02-08 05:51:56.937880 | orchestrator | 2026-02-08 05:51:56.937887 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-02-08 05:51:56.937895 | orchestrator | Sunday 08 February 2026 05:51:52 +0000 (0:00:00.997) 0:00:50.180 ******* 2026-02-08 05:51:56.937901 | orchestrator | ok: [testbed-node-0] 2026-02-08 05:51:56.937908 | orchestrator | ok: [testbed-node-1] 2026-02-08 05:51:56.937915 | orchestrator | ok: [testbed-node-2] 2026-02-08 05:51:56.937921 | orchestrator | ok: [testbed-node-3] 2026-02-08 05:51:56.937928 | orchestrator | ok: [testbed-node-4] 2026-02-08 05:51:56.937935 | orchestrator | ok: [testbed-node-5] 2026-02-08 05:51:56.937943 | orchestrator | ok: [testbed-manager] 2026-02-08 05:51:56.937958 | orchestrator | 2026-02-08 05:51:56.937966 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-02-08 05:51:56.937974 | orchestrator | Sunday 08 February 2026 05:51:52 +0000 (0:00:00.770) 0:00:50.951 ******* 2026-02-08 05:51:56.937982 | orchestrator | skipping: [testbed-node-0] 2026-02-08 05:51:56.937989 | orchestrator | 
skipping: [testbed-node-1] 2026-02-08 05:51:56.937997 | orchestrator | skipping: [testbed-node-2] 2026-02-08 05:51:56.938004 | orchestrator | skipping: [testbed-node-3] 2026-02-08 05:51:56.938011 | orchestrator | skipping: [testbed-node-4] 2026-02-08 05:51:56.938072 | orchestrator | skipping: [testbed-node-5] 2026-02-08 05:51:56.938081 | orchestrator | skipping: [testbed-manager] 2026-02-08 05:51:56.938089 | orchestrator | 2026-02-08 05:51:56.938097 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2026-02-08 05:51:56.938105 | orchestrator | Sunday 08 February 2026 05:51:53 +0000 (0:00:01.044) 0:00:51.995 ******* 2026-02-08 05:51:56.938113 | orchestrator | ok: [testbed-node-0] 2026-02-08 05:51:56.938121 | orchestrator | ok: [testbed-node-1] 2026-02-08 05:51:56.938128 | orchestrator | ok: [testbed-node-2] 2026-02-08 05:51:56.938135 | orchestrator | ok: [testbed-node-3] 2026-02-08 05:51:56.938143 | orchestrator | ok: [testbed-node-4] 2026-02-08 05:51:56.938151 | orchestrator | ok: [testbed-node-5] 2026-02-08 05:51:56.938159 | orchestrator | ok: [testbed-manager] 2026-02-08 05:51:56.938167 | orchestrator | 2026-02-08 05:51:56.938176 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2026-02-08 05:51:56.938184 | orchestrator | Sunday 08 February 2026 05:51:54 +0000 (0:00:00.767) 0:00:52.762 ******* 2026-02-08 05:51:56.938192 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-02-08 05:51:56.938201 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-08 05:51:56.938209 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-08 05:51:56.938217 | orchestrator | 2026-02-08 05:51:56.938225 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2026-02-08 05:51:56.938233 | orchestrator | Sunday 08 February 2026 05:51:55 +0000 
(0:00:01.230) 0:00:53.993 ******* 2026-02-08 05:51:56.938241 | orchestrator | ok: [testbed-node-0] 2026-02-08 05:51:56.938249 | orchestrator | ok: [testbed-node-1] 2026-02-08 05:51:56.938256 | orchestrator | ok: [testbed-node-2] 2026-02-08 05:51:56.938264 | orchestrator | ok: [testbed-node-3] 2026-02-08 05:51:56.938272 | orchestrator | ok: [testbed-node-4] 2026-02-08 05:51:56.938279 | orchestrator | ok: [testbed-node-5] 2026-02-08 05:51:56.938287 | orchestrator | ok: [testbed-manager] 2026-02-08 05:51:56.938296 | orchestrator | 2026-02-08 05:51:56.938304 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-02-08 05:51:56.938321 | orchestrator | Sunday 08 February 2026 05:51:56 +0000 (0:00:00.969) 0:00:54.963 ******* 2026-02-08 05:52:09.658812 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-02-08 05:52:09.658929 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-08 05:52:09.658946 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-08 05:52:09.658958 | orchestrator | 2026-02-08 05:52:09.658971 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-02-08 05:52:09.658983 | orchestrator | Sunday 08 February 2026 05:51:59 +0000 (0:00:02.329) 0:00:57.292 ******* 2026-02-08 05:52:09.658994 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-02-08 05:52:09.659009 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-02-08 05:52:09.659027 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-02-08 05:52:09.659045 | orchestrator | skipping: [testbed-node-0] 2026-02-08 05:52:09.659070 | orchestrator | 2026-02-08 05:52:09.659098 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-02-08 05:52:09.659117 | orchestrator | Sunday 08 February 2026 
05:51:59 +0000 (0:00:00.432) 0:00:57.725 ******* 2026-02-08 05:52:09.659189 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-02-08 05:52:09.659210 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-02-08 05:52:09.659226 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-02-08 05:52:09.659242 | orchestrator | skipping: [testbed-node-0] 2026-02-08 05:52:09.659259 | orchestrator | 2026-02-08 05:52:09.659275 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-02-08 05:52:09.659292 | orchestrator | Sunday 08 February 2026 05:52:00 +0000 (0:00:00.926) 0:00:58.652 ******* 2026-02-08 05:52:09.659312 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-08 05:52:09.659334 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was 
False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-08 05:52:09.659354 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-08 05:52:09.659373 | orchestrator | skipping: [testbed-node-0] 2026-02-08 05:52:09.659424 | orchestrator | 2026-02-08 05:52:09.659446 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-02-08 05:52:09.659460 | orchestrator | Sunday 08 February 2026 05:52:00 +0000 (0:00:00.167) 0:00:58.819 ******* 2026-02-08 05:52:09.659476 | orchestrator | ok: [testbed-node-0] => (item={'changed': False, 'stdout': '814c3ba0cfa5', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-02-08 05:51:57.644921', 'end': '2026-02-08 05:51:57.696096', 'delta': '0:00:00.051175', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['814c3ba0cfa5'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2026-02-08 05:52:09.659518 | orchestrator | ok: [testbed-node-0] => (item={'changed': False, 'stdout': 'd108d94fad94', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-02-08 
05:51:58.239442', 'end': '2026-02-08 05:51:58.291184', 'delta': '0:00:00.051742', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['d108d94fad94'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2026-02-08 05:52:09.659545 | orchestrator | ok: [testbed-node-0] => (item={'changed': False, 'stdout': '83b6b87b68f7', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-02-08 05:51:59.070093', 'end': '2026-02-08 05:51:59.106552', 'delta': '0:00:00.036459', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['83b6b87b68f7'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2026-02-08 05:52:09.659559 | orchestrator | 2026-02-08 05:52:09.659572 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-02-08 05:52:09.659587 | orchestrator | Sunday 08 February 2026 05:52:00 +0000 (0:00:00.220) 0:00:59.040 ******* 2026-02-08 05:52:09.659600 | orchestrator | ok: [testbed-node-0] 2026-02-08 05:52:09.659613 | orchestrator | ok: [testbed-node-1] 2026-02-08 05:52:09.659627 | orchestrator | ok: [testbed-node-2] 2026-02-08 05:52:09.659639 | orchestrator | ok: [testbed-node-3] 2026-02-08 05:52:09.659652 | orchestrator | ok: [testbed-node-4] 2026-02-08 05:52:09.659665 | orchestrator | 
ok: [testbed-node-5] 2026-02-08 05:52:09.659678 | orchestrator | ok: [testbed-manager] 2026-02-08 05:52:09.659691 | orchestrator | 2026-02-08 05:52:09.659704 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-02-08 05:52:09.659758 | orchestrator | Sunday 08 February 2026 05:52:02 +0000 (0:00:01.195) 0:01:00.236 ******* 2026-02-08 05:52:09.659771 | orchestrator | skipping: [testbed-node-0] 2026-02-08 05:52:09.659782 | orchestrator | 2026-02-08 05:52:09.659793 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2026-02-08 05:52:09.659804 | orchestrator | Sunday 08 February 2026 05:52:02 +0000 (0:00:00.234) 0:01:00.470 ******* 2026-02-08 05:52:09.659815 | orchestrator | ok: [testbed-node-0] 2026-02-08 05:52:09.659826 | orchestrator | ok: [testbed-node-1] 2026-02-08 05:52:09.659836 | orchestrator | ok: [testbed-node-2] 2026-02-08 05:52:09.659847 | orchestrator | ok: [testbed-node-3] 2026-02-08 05:52:09.659857 | orchestrator | ok: [testbed-node-4] 2026-02-08 05:52:09.659868 | orchestrator | ok: [testbed-node-5] 2026-02-08 05:52:09.659878 | orchestrator | ok: [testbed-manager] 2026-02-08 05:52:09.659889 | orchestrator | 2026-02-08 05:52:09.659900 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2026-02-08 05:52:09.659911 | orchestrator | Sunday 08 February 2026 05:52:03 +0000 (0:00:01.033) 0:01:01.504 ******* 2026-02-08 05:52:09.659921 | orchestrator | ok: [testbed-node-0] 2026-02-08 05:52:09.659932 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] 2026-02-08 05:52:09.659944 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] 2026-02-08 05:52:09.659955 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-02-08 05:52:09.659965 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] 2026-02-08 05:52:09.659976 | orchestrator | ok: [testbed-node-5 -> 
testbed-node-0(192.168.16.10)] 2026-02-08 05:52:09.659987 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] 2026-02-08 05:52:09.659998 | orchestrator | 2026-02-08 05:52:09.660009 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-02-08 05:52:09.660020 | orchestrator | Sunday 08 February 2026 05:52:06 +0000 (0:00:03.383) 0:01:04.888 ******* 2026-02-08 05:52:09.660031 | orchestrator | ok: [testbed-node-0] 2026-02-08 05:52:09.660042 | orchestrator | ok: [testbed-node-1] 2026-02-08 05:52:09.660053 | orchestrator | ok: [testbed-node-2] 2026-02-08 05:52:09.660063 | orchestrator | ok: [testbed-node-3] 2026-02-08 05:52:09.660082 | orchestrator | ok: [testbed-node-4] 2026-02-08 05:52:09.660092 | orchestrator | ok: [testbed-node-5] 2026-02-08 05:52:09.660103 | orchestrator | ok: [testbed-manager] 2026-02-08 05:52:09.660114 | orchestrator | 2026-02-08 05:52:09.660125 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-02-08 05:52:09.660136 | orchestrator | Sunday 08 February 2026 05:52:07 +0000 (0:00:01.032) 0:01:05.920 ******* 2026-02-08 05:52:09.660147 | orchestrator | skipping: [testbed-node-0] 2026-02-08 05:52:09.660158 | orchestrator | 2026-02-08 05:52:09.660169 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-02-08 05:52:09.660181 | orchestrator | Sunday 08 February 2026 05:52:08 +0000 (0:00:00.135) 0:01:06.056 ******* 2026-02-08 05:52:09.660200 | orchestrator | skipping: [testbed-node-0] 2026-02-08 05:52:09.660227 | orchestrator | 2026-02-08 05:52:09.660249 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-02-08 05:52:09.660267 | orchestrator | Sunday 08 February 2026 05:52:08 +0000 (0:00:00.233) 0:01:06.289 ******* 2026-02-08 05:52:09.660284 | orchestrator | skipping: [testbed-node-0] 2026-02-08 05:52:09.660302 | orchestrator | skipping: 
[testbed-node-1] 2026-02-08 05:52:09.660319 | orchestrator | skipping: [testbed-node-2] 2026-02-08 05:52:09.660337 | orchestrator | skipping: [testbed-node-3] 2026-02-08 05:52:09.660355 | orchestrator | skipping: [testbed-node-4] 2026-02-08 05:52:09.660431 | orchestrator | skipping: [testbed-node-5] 2026-02-08 05:52:15.339002 | orchestrator | skipping: [testbed-manager] 2026-02-08 05:52:15.339112 | orchestrator | 2026-02-08 05:52:15.339129 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-02-08 05:52:15.339143 | orchestrator | Sunday 08 February 2026 05:52:09 +0000 (0:00:01.404) 0:01:07.693 ******* 2026-02-08 05:52:15.339154 | orchestrator | skipping: [testbed-node-0] 2026-02-08 05:52:15.339166 | orchestrator | skipping: [testbed-node-1] 2026-02-08 05:52:15.339177 | orchestrator | skipping: [testbed-node-2] 2026-02-08 05:52:15.339188 | orchestrator | skipping: [testbed-node-3] 2026-02-08 05:52:15.339199 | orchestrator | skipping: [testbed-node-4] 2026-02-08 05:52:15.339210 | orchestrator | skipping: [testbed-node-5] 2026-02-08 05:52:15.339221 | orchestrator | skipping: [testbed-manager] 2026-02-08 05:52:15.339252 | orchestrator | 2026-02-08 05:52:15.339264 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-02-08 05:52:15.339276 | orchestrator | Sunday 08 February 2026 05:52:10 +0000 (0:00:00.770) 0:01:08.464 ******* 2026-02-08 05:52:15.339287 | orchestrator | skipping: [testbed-node-0] 2026-02-08 05:52:15.339302 | orchestrator | skipping: [testbed-node-1] 2026-02-08 05:52:15.339322 | orchestrator | skipping: [testbed-node-2] 2026-02-08 05:52:15.339340 | orchestrator | skipping: [testbed-node-3] 2026-02-08 05:52:15.339359 | orchestrator | skipping: [testbed-node-4] 2026-02-08 05:52:15.339379 | orchestrator | skipping: [testbed-node-5] 2026-02-08 05:52:15.339481 | orchestrator | skipping: [testbed-manager] 2026-02-08 05:52:15.339497 | orchestrator | 2026-02-08 
05:52:15.339508 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-02-08 05:52:15.339519 | orchestrator | Sunday 08 February 2026 05:52:11 +0000 (0:00:01.019) 0:01:09.483 ******* 2026-02-08 05:52:15.339530 | orchestrator | skipping: [testbed-node-0] 2026-02-08 05:52:15.339543 | orchestrator | skipping: [testbed-node-1] 2026-02-08 05:52:15.339556 | orchestrator | skipping: [testbed-node-2] 2026-02-08 05:52:15.339569 | orchestrator | skipping: [testbed-node-3] 2026-02-08 05:52:15.339582 | orchestrator | skipping: [testbed-node-4] 2026-02-08 05:52:15.339595 | orchestrator | skipping: [testbed-node-5] 2026-02-08 05:52:15.339607 | orchestrator | skipping: [testbed-manager] 2026-02-08 05:52:15.339619 | orchestrator | 2026-02-08 05:52:15.339632 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-02-08 05:52:15.339645 | orchestrator | Sunday 08 February 2026 05:52:12 +0000 (0:00:00.758) 0:01:10.241 ******* 2026-02-08 05:52:15.339658 | orchestrator | skipping: [testbed-node-0] 2026-02-08 05:52:15.339670 | orchestrator | skipping: [testbed-node-1] 2026-02-08 05:52:15.339705 | orchestrator | skipping: [testbed-node-2] 2026-02-08 05:52:15.339718 | orchestrator | skipping: [testbed-node-3] 2026-02-08 05:52:15.339731 | orchestrator | skipping: [testbed-node-4] 2026-02-08 05:52:15.339743 | orchestrator | skipping: [testbed-node-5] 2026-02-08 05:52:15.339757 | orchestrator | skipping: [testbed-manager] 2026-02-08 05:52:15.339769 | orchestrator | 2026-02-08 05:52:15.339782 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-02-08 05:52:15.339815 | orchestrator | Sunday 08 February 2026 05:52:13 +0000 (0:00:01.003) 0:01:11.245 ******* 2026-02-08 05:52:15.339832 | orchestrator | skipping: [testbed-node-0] 2026-02-08 05:52:15.339850 | orchestrator | skipping: [testbed-node-1] 2026-02-08 05:52:15.339869 | orchestrator | skipping: 
[testbed-node-2] 2026-02-08 05:52:15.339888 | orchestrator | skipping: [testbed-node-3] 2026-02-08 05:52:15.339908 | orchestrator | skipping: [testbed-node-4] 2026-02-08 05:52:15.339927 | orchestrator | skipping: [testbed-node-5] 2026-02-08 05:52:15.339942 | orchestrator | skipping: [testbed-manager] 2026-02-08 05:52:15.339953 | orchestrator | 2026-02-08 05:52:15.339963 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-02-08 05:52:15.339974 | orchestrator | Sunday 08 February 2026 05:52:13 +0000 (0:00:00.783) 0:01:12.028 ******* 2026-02-08 05:52:15.339985 | orchestrator | skipping: [testbed-node-0] 2026-02-08 05:52:15.339995 | orchestrator | skipping: [testbed-node-1] 2026-02-08 05:52:15.340006 | orchestrator | skipping: [testbed-node-2] 2026-02-08 05:52:15.340018 | orchestrator | skipping: [testbed-node-3] 2026-02-08 05:52:15.340028 | orchestrator | skipping: [testbed-node-4] 2026-02-08 05:52:15.340042 | orchestrator | skipping: [testbed-node-5] 2026-02-08 05:52:15.340060 | orchestrator | skipping: [testbed-manager] 2026-02-08 05:52:15.340079 | orchestrator | 2026-02-08 05:52:15.340098 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-02-08 05:52:15.340208 | orchestrator | Sunday 08 February 2026 05:52:15 +0000 (0:00:01.087) 0:01:13.116 ******* 2026-02-08 05:52:15.340223 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-08 05:52:15.340238 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': 
[], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-08 05:52:15.340250 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-08 05:52:15.340285 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-08-02-32-50-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-02-08 05:52:15.340308 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-08 05:52:15.340332 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': 
[]}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-08 05:52:15.340344 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-08 05:52:15.340359 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3e566a5b-bb0c-4ece-9641-6f7efc673353', 'scsi-SQEMU_QEMU_HARDDISK_3e566a5b-bb0c-4ece-9641-6f7efc673353'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '3e566a5b', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3e566a5b-bb0c-4ece-9641-6f7efc673353-part16', 'scsi-SQEMU_QEMU_HARDDISK_3e566a5b-bb0c-4ece-9641-6f7efc673353-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3e566a5b-bb0c-4ece-9641-6f7efc673353-part14', 'scsi-SQEMU_QEMU_HARDDISK_3e566a5b-bb0c-4ece-9641-6f7efc673353-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_3e566a5b-bb0c-4ece-9641-6f7efc673353-part15', 'scsi-SQEMU_QEMU_HARDDISK_3e566a5b-bb0c-4ece-9641-6f7efc673353-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3e566a5b-bb0c-4ece-9641-6f7efc673353-part1', 'scsi-SQEMU_QEMU_HARDDISK_3e566a5b-bb0c-4ece-9641-6f7efc673353-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-02-08 05:52:15.340381 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-08 05:52:15.480980 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-08 05:52:15.481125 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 
'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-08 05:52:15.481144 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-08 05:52:15.481156 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-08 05:52:15.481170 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-08-02-32-44-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-02-08 05:52:15.481184 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 
'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-08 05:52:15.481196 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-08 05:52:15.481207 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-08 05:52:15.481249 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bd3944a6-94a2-4419-9995-37d6054a2669', 'scsi-SQEMU_QEMU_HARDDISK_bd3944a6-94a2-4419-9995-37d6054a2669'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'bd3944a6', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bd3944a6-94a2-4419-9995-37d6054a2669-part16', 'scsi-SQEMU_QEMU_HARDDISK_bd3944a6-94a2-4419-9995-37d6054a2669-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 
'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bd3944a6-94a2-4419-9995-37d6054a2669-part14', 'scsi-SQEMU_QEMU_HARDDISK_bd3944a6-94a2-4419-9995-37d6054a2669-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bd3944a6-94a2-4419-9995-37d6054a2669-part15', 'scsi-SQEMU_QEMU_HARDDISK_bd3944a6-94a2-4419-9995-37d6054a2669-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bd3944a6-94a2-4419-9995-37d6054a2669-part1', 'scsi-SQEMU_QEMU_HARDDISK_bd3944a6-94a2-4419-9995-37d6054a2669-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}})  2026-02-08 05:52:15.481271 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-08 05:52:15.481284 | orchestrator | skipping: [testbed-node-0] 2026-02-08 05:52:15.481297 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-08 05:52:15.481309 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-08 05:52:15.481321 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-08 05:52:15.481332 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-08 05:52:15.481351 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-08-02-32-47-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-02-08 05:52:15.631578 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-08 05:52:15.631658 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-08 05:52:15.631668 | orchestrator | skipping: [testbed-node-1] 
2026-02-08 05:52:15.631677 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-08 05:52:15.631686 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7f0c6f27-797f-46da-82fd-067f06c1f72b', 'scsi-SQEMU_QEMU_HARDDISK_7f0c6f27-797f-46da-82fd-067f06c1f72b'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '7f0c6f27', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7f0c6f27-797f-46da-82fd-067f06c1f72b-part16', 'scsi-SQEMU_QEMU_HARDDISK_7f0c6f27-797f-46da-82fd-067f06c1f72b-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7f0c6f27-797f-46da-82fd-067f06c1f72b-part14', 'scsi-SQEMU_QEMU_HARDDISK_7f0c6f27-797f-46da-82fd-067f06c1f72b-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7f0c6f27-797f-46da-82fd-067f06c1f72b-part15', 'scsi-SQEMU_QEMU_HARDDISK_7f0c6f27-797f-46da-82fd-067f06c1f72b-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': 
'5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7f0c6f27-797f-46da-82fd-067f06c1f72b-part1', 'scsi-SQEMU_QEMU_HARDDISK_7f0c6f27-797f-46da-82fd-067f06c1f72b-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-02-08 05:52:15.631710 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-08 05:52:15.631731 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-08 05:52:15.631742 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 
'holders': []}})  2026-02-08 05:52:15.631750 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--edf9913e--48af--595a--836b--515c584cb757-osd--block--edf9913e--48af--595a--836b--515c584cb757', 'dm-uuid-LVM-L5RzS25dNAwEfaxQtT2dCZejWp1FAHTXCd6gt7yIXasPp3uR45a3LLQBdDhLIQVH'], 'uuids': ['f792a4bc-313c-4001-af18-36958a398c99'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'f64e84f9', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['Cd6gt7-yIXa-sPp3-uR45-a3LL-QBdD-hLIQVH']}})  2026-02-08 05:52:15.631759 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1b3b2ead-9b22-4b4d-a30d-f81b3b57c055', 'scsi-SQEMU_QEMU_HARDDISK_1b3b2ead-9b22-4b4d-a30d-f81b3b57c055'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '1b3b2ead', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}})  2026-02-08 05:52:15.631771 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-a30P2b-igV6-wMzf-UnqL-xNhF-mfyy-hPRy0q', 'scsi-0QEMU_QEMU_HARDDISK_f936cccd-0c4c-4cd7-b507-1bacbfb024c1', 'scsi-SQEMU_QEMU_HARDDISK_f936cccd-0c4c-4cd7-b507-1bacbfb024c1'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'f936cccd', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--658e9559--2696--538a--a0a4--811fe95d0be4-osd--block--658e9559--2696--538a--a0a4--811fe95d0be4']}})  2026-02-08 05:52:15.631783 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-08 05:52:15.631794 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-08 05:52:15.631817 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-08-02-32-43-00'], 'labels': 
['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-02-08 05:52:15.818901 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-08 05:52:15.819026 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-cwGJpv-XAPY-9den-pvHR-scKi-cxVH-CDOUqc', 'dm-uuid-CRYPT-LUKS2-b54d8ff93c624789820c72a73c143a76-cwGJpv-XAPY-9den-pvHR-scKi-cxVH-CDOUqc'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-02-08 05:52:15.819046 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-08 05:52:15.819060 | orchestrator | skipping: [testbed-node-3] => 
(item={'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--658e9559--2696--538a--a0a4--811fe95d0be4-osd--block--658e9559--2696--538a--a0a4--811fe95d0be4', 'dm-uuid-LVM-0Xks5ejJzKHIGgn8kHKR683njlf26z09cwGJpvXAPY9denpvHRscKicxVHCDOUqc'], 'uuids': ['b54d8ff9-3c62-4789-820c-72a73c143a76'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'f936cccd', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['cwGJpv-XAPY-9den-pvHR-scKi-cxVH-CDOUqc']}})  2026-02-08 05:52:15.819073 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-loHc10-mddm-V6c9-PmX5-I7TN-7pE7-Bu9UYi', 'scsi-0QEMU_QEMU_HARDDISK_f64e84f9-05a0-4abf-b38a-86e604a2541e', 'scsi-SQEMU_QEMU_HARDDISK_f64e84f9-05a0-4abf-b38a-86e604a2541e'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'f64e84f9', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--edf9913e--48af--595a--836b--515c584cb757-osd--block--edf9913e--48af--595a--836b--515c584cb757']}})  2026-02-08 05:52:15.819117 | orchestrator | skipping: [testbed-node-2] 2026-02-08 05:52:15.819131 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-08 05:52:15.819177 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8eb95c7e-79eb-481c-a9c3-b8351915337f', 'scsi-SQEMU_QEMU_HARDDISK_8eb95c7e-79eb-481c-a9c3-b8351915337f'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '8eb95c7e', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8eb95c7e-79eb-481c-a9c3-b8351915337f-part16', 'scsi-SQEMU_QEMU_HARDDISK_8eb95c7e-79eb-481c-a9c3-b8351915337f-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8eb95c7e-79eb-481c-a9c3-b8351915337f-part14', 'scsi-SQEMU_QEMU_HARDDISK_8eb95c7e-79eb-481c-a9c3-b8351915337f-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8eb95c7e-79eb-481c-a9c3-b8351915337f-part15', 
'scsi-SQEMU_QEMU_HARDDISK_8eb95c7e-79eb-481c-a9c3-b8351915337f-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8eb95c7e-79eb-481c-a9c3-b8351915337f-part1', 'scsi-SQEMU_QEMU_HARDDISK_8eb95c7e-79eb-481c-a9c3-b8351915337f-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-02-08 05:52:15.819193 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-08 05:52:15.819205 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-08 05:52:15.819217 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-Cd6gt7-yIXa-sPp3-uR45-a3LL-QBdD-hLIQVH', 
'dm-uuid-CRYPT-LUKS2-f792a4bc313c4001af1836958a398c99-Cd6gt7-yIXa-sPp3-uR45-a3LL-QBdD-hLIQVH'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-02-08 05:52:15.819236 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-08 05:52:15.819249 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--98a4cb59--dd7a--5ec9--b94d--174a40339046-osd--block--98a4cb59--dd7a--5ec9--b94d--174a40339046', 'dm-uuid-LVM-yHxwYWjXM5rCMy03I7W1d35MN3jq2cawRH7KXMBBm3D5PVB6eMaUeZw4tW6dlYUm'], 'uuids': ['4e6e3c31-d8d4-42a4-8244-dd9f1ae0f37a'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'e630d271', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['RH7KXM-BBm3-D5PV-B6eM-aUeZ-w4tW-6dlYUm']}})  2026-02-08 05:52:15.819273 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_33bf36ec-77e2-4563-8915-2d028f665133', 'scsi-SQEMU_QEMU_HARDDISK_33bf36ec-77e2-4563-8915-2d028f665133'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': 
None, 'sas_device_handle': None, 'serial': '33bf36ec', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-02-08 05:52:15.920070 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-98ZUfd-U8I1-l7ve-H02s-xtrR-VP1N-Q4DCna', 'scsi-0QEMU_QEMU_HARDDISK_2c937877-c8d8-449b-a5f6-0239aca924e2', 'scsi-SQEMU_QEMU_HARDDISK_2c937877-c8d8-449b-a5f6-0239aca924e2'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '2c937877', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--1f36c880--548c--5a66--856f--2c4e799d94fc-osd--block--1f36c880--548c--5a66--856f--2c4e799d94fc']}})  2026-02-08 05:52:15.920174 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-08 05:52:15.920191 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-08 05:52:15.920204 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-08-02-32-46-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-02-08 05:52:15.920237 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 
'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-08 05:52:15.920248 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-4yMbT2-42rf-LMHw-z72r-UJ4I-0Vm8-FBkphH', 'dm-uuid-CRYPT-LUKS2-d4dc753534094cc38b724c68e8894d37-4yMbT2-42rf-LMHw-z72r-UJ4I-0Vm8-FBkphH'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-02-08 05:52:15.920258 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-08 05:52:15.920299 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--1f36c880--548c--5a66--856f--2c4e799d94fc-osd--block--1f36c880--548c--5a66--856f--2c4e799d94fc', 'dm-uuid-LVM-EUBg1b33eC5p7VqB18wXpct56RBS9v7i4yMbT242rfLMHwz72rUJ4I0Vm8FBkphH'], 'uuids': ['d4dc7535-3409-4cc3-8b72-4c68e8894d37'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '2c937877', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['4yMbT2-42rf-LMHw-z72r-UJ4I-0Vm8-FBkphH']}})  2026-02-08 05:52:15.920311 | orchestrator | skipping: 
[testbed-node-4] => (item={'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-HUiqZb-DVo1-dI2F-bvgQ-kR6m-4ift-nZXZfU', 'scsi-0QEMU_QEMU_HARDDISK_e630d271-3aac-4ce5-a41f-fdcd87f60fea', 'scsi-SQEMU_QEMU_HARDDISK_e630d271-3aac-4ce5-a41f-fdcd87f60fea'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'e630d271', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--98a4cb59--dd7a--5ec9--b94d--174a40339046-osd--block--98a4cb59--dd7a--5ec9--b94d--174a40339046']}})  2026-02-08 05:52:15.920322 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-08 05:52:15.920334 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6152c601-f22c-4ab1-825c-0b7a8c2f9bf8', 'scsi-SQEMU_QEMU_HARDDISK_6152c601-f22c-4ab1-825c-0b7a8c2f9bf8'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '6152c601', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6152c601-f22c-4ab1-825c-0b7a8c2f9bf8-part16', 'scsi-SQEMU_QEMU_HARDDISK_6152c601-f22c-4ab1-825c-0b7a8c2f9bf8-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': 
'227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6152c601-f22c-4ab1-825c-0b7a8c2f9bf8-part14', 'scsi-SQEMU_QEMU_HARDDISK_6152c601-f22c-4ab1-825c-0b7a8c2f9bf8-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6152c601-f22c-4ab1-825c-0b7a8c2f9bf8-part15', 'scsi-SQEMU_QEMU_HARDDISK_6152c601-f22c-4ab1-825c-0b7a8c2f9bf8-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6152c601-f22c-4ab1-825c-0b7a8c2f9bf8-part1', 'scsi-SQEMU_QEMU_HARDDISK_6152c601-f22c-4ab1-825c-0b7a8c2f9bf8-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}})  2026-02-08 05:52:15.920357 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-08 05:52:15.920374 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-08 05:52:16.134448 | orchestrator | skipping: [testbed-node-3] 2026-02-08 05:52:16.134553 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-RH7KXM-BBm3-D5PV-B6eM-aUeZ-w4tW-6dlYUm', 'dm-uuid-CRYPT-LUKS2-4e6e3c31d8d442a48244dd9f1ae0f37a-RH7KXM-BBm3-D5PV-B6eM-aUeZ-w4tW-6dlYUm'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-02-08 05:52:16.134576 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 
'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-08 05:52:16.134599 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--b3e05e81--e469--5668--9a53--5e8f92025307-osd--block--b3e05e81--e469--5668--9a53--5e8f92025307', 'dm-uuid-LVM-aSc4wxc22lxsBl3bsZYV01tD6GZC6hnhTrIfEw7ihzB97cb6a1fBLoGV7FMIqFCx'], 'uuids': ['1cd382bb-e631-457f-8e23-7bd2a48df188'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '88e353e1', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['TrIfEw-7ihz-B97c-b6a1-fBLo-GV7F-MIqFCx']}})  2026-02-08 05:52:16.134649 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_380fccde-fc16-4afd-8581-e221e230c62f', 'scsi-SQEMU_QEMU_HARDDISK_380fccde-fc16-4afd-8581-e221e230c62f'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '380fccde', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}})  2026-02-08 05:52:16.134672 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-rTZdUO-b7Sr-jZr2-VH77-tbu4-uLLV-wifIFG', 'scsi-0QEMU_QEMU_HARDDISK_fd096023-3e18-4205-a743-fc49c7d9ed02', 'scsi-SQEMU_QEMU_HARDDISK_fd096023-3e18-4205-a743-fc49c7d9ed02'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'fd096023', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--7ad89cb8--326d--5a7d--8045--6e04c12be05a-osd--block--7ad89cb8--326d--5a7d--8045--6e04c12be05a']}})  2026-02-08 05:52:16.134695 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-08 05:52:16.134708 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-08 05:52:16.134740 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-08-02-32-42-00'], 'labels': 
['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-02-08 05:52:16.134753 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-08 05:52:16.134765 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-UV2PuY-Xg8S-GZaH-izGQ-3g7w-DUil-CwJsOM', 'dm-uuid-CRYPT-LUKS2-acd8aeb3a91948899ba0cb5b1d4bdfb1-UV2PuY-Xg8S-GZaH-izGQ-3g7w-DUil-CwJsOM'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-02-08 05:52:16.134785 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-08 05:52:16.134797 | orchestrator | skipping: [testbed-node-5] => 
(item={'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--7ad89cb8--326d--5a7d--8045--6e04c12be05a-osd--block--7ad89cb8--326d--5a7d--8045--6e04c12be05a', 'dm-uuid-LVM-rcT9E5PUfbc9LNMW28fCAUBBadlLkYxWUV2PuYXg8SGZaHizGQ3g7wDUilCwJsOM'], 'uuids': ['acd8aeb3-a919-4889-9ba0-cb5b1d4bdfb1'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'fd096023', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['UV2PuY-Xg8S-GZaH-izGQ-3g7w-DUil-CwJsOM']}})  2026-02-08 05:52:16.134814 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-0tVmcc-zy2m-2uFM-aZrn-CnbJ-B058-J5TU15', 'scsi-0QEMU_QEMU_HARDDISK_88e353e1-d5f5-455b-9174-972f0fde258a', 'scsi-SQEMU_QEMU_HARDDISK_88e353e1-d5f5-455b-9174-972f0fde258a'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '88e353e1', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--b3e05e81--e469--5668--9a53--5e8f92025307-osd--block--b3e05e81--e469--5668--9a53--5e8f92025307']}})  2026-02-08 05:52:16.134826 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-08 05:52:16.134850 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0b6d2541-fe07-44e8-aadf-a529695f9c1d', 'scsi-SQEMU_QEMU_HARDDISK_0b6d2541-fe07-44e8-aadf-a529695f9c1d'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '0b6d2541', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0b6d2541-fe07-44e8-aadf-a529695f9c1d-part16', 'scsi-SQEMU_QEMU_HARDDISK_0b6d2541-fe07-44e8-aadf-a529695f9c1d-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0b6d2541-fe07-44e8-aadf-a529695f9c1d-part14', 'scsi-SQEMU_QEMU_HARDDISK_0b6d2541-fe07-44e8-aadf-a529695f9c1d-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0b6d2541-fe07-44e8-aadf-a529695f9c1d-part15', 'scsi-SQEMU_QEMU_HARDDISK_0b6d2541-fe07-44e8-aadf-a529695f9c1d-part15'], 'uuids': ['5C78-612A'], 
'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0b6d2541-fe07-44e8-aadf-a529695f9c1d-part1', 'scsi-SQEMU_QEMU_HARDDISK_0b6d2541-fe07-44e8-aadf-a529695f9c1d-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-02-08 05:52:16.787318 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-08 05:52:16.787454 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-08 05:52:16.787472 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-TrIfEw-7ihz-B97c-b6a1-fBLo-GV7F-MIqFCx', 'dm-uuid-CRYPT-LUKS2-1cd382bbe631457f8e237bd2a48df188-TrIfEw-7ihz-B97c-b6a1-fBLo-GV7F-MIqFCx'], 'uuids': [], 'labels': [], 'masters': 
[]}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-02-08 05:52:16.787486 | orchestrator | skipping: [testbed-node-4] 2026-02-08 05:52:16.787514 | orchestrator | skipping: [testbed-node-5] 2026-02-08 05:52:16.787525 | orchestrator | skipping: [testbed-manager] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-08 05:52:16.787535 | orchestrator | skipping: [testbed-manager] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-08 05:52:16.787545 | orchestrator | skipping: [testbed-manager] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-08 05:52:16.787581 | orchestrator | skipping: [testbed-manager] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': 
['2026-02-08-02-33-14-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1060', 'sectorsize': '2048', 'size': '530.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-02-08 05:52:16.787591 | orchestrator | skipping: [testbed-manager] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-08 05:52:16.787617 | orchestrator | skipping: [testbed-manager] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-08 05:52:16.787628 | orchestrator | skipping: [testbed-manager] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-08 05:52:16.787647 | orchestrator | skipping: [testbed-manager] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_8e0ebcee-3d0a-448e-8b07-4380ef670051', 'scsi-SQEMU_QEMU_HARDDISK_8e0ebcee-3d0a-448e-8b07-4380ef670051'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '8e0ebcee', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8e0ebcee-3d0a-448e-8b07-4380ef670051-part16', 'scsi-SQEMU_QEMU_HARDDISK_8e0ebcee-3d0a-448e-8b07-4380ef670051-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8e0ebcee-3d0a-448e-8b07-4380ef670051-part14', 'scsi-SQEMU_QEMU_HARDDISK_8e0ebcee-3d0a-448e-8b07-4380ef670051-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8e0ebcee-3d0a-448e-8b07-4380ef670051-part15', 'scsi-SQEMU_QEMU_HARDDISK_8e0ebcee-3d0a-448e-8b07-4380ef670051-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8e0ebcee-3d0a-448e-8b07-4380ef670051-part1', 'scsi-SQEMU_QEMU_HARDDISK_8e0ebcee-3d0a-448e-8b07-4380ef670051-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}})  2026-02-08 05:52:16.787666 | orchestrator | skipping: [testbed-manager] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-08 05:52:16.787676 | orchestrator | skipping: [testbed-manager] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-08 05:52:16.787686 | orchestrator | skipping: [testbed-manager] 2026-02-08 05:52:16.787696 | orchestrator | 2026-02-08 05:52:16.787708 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-02-08 05:52:16.787718 | orchestrator | Sunday 08 February 2026 05:52:16 +0000 (0:00:01.328) 0:01:14.445 ******* 2026-02-08 05:52:16.787737 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-08 05:52:16.920451 | 
orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-08 05:52:16.920566 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-08 05:52:16.920583 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-08-02-32-50-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 
82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-08 05:52:16.920594 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-08 05:52:16.920625 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-08 05:52:16.920633 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 
'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-08 05:52:16.920664 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3e566a5b-bb0c-4ece-9641-6f7efc673353', 'scsi-SQEMU_QEMU_HARDDISK_3e566a5b-bb0c-4ece-9641-6f7efc673353'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '3e566a5b', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3e566a5b-bb0c-4ece-9641-6f7efc673353-part16', 'scsi-SQEMU_QEMU_HARDDISK_3e566a5b-bb0c-4ece-9641-6f7efc673353-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3e566a5b-bb0c-4ece-9641-6f7efc673353-part14', 'scsi-SQEMU_QEMU_HARDDISK_3e566a5b-bb0c-4ece-9641-6f7efc673353-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3e566a5b-bb0c-4ece-9641-6f7efc673353-part15', 'scsi-SQEMU_QEMU_HARDDISK_3e566a5b-bb0c-4ece-9641-6f7efc673353-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3e566a5b-bb0c-4ece-9641-6f7efc673353-part1', 'scsi-SQEMU_QEMU_HARDDISK_3e566a5b-bb0c-4ece-9641-6f7efc673353-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-08 05:52:16.920676 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-08 05:52:16.920682 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-08 05:52:16.920688 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-08 05:52:16.920699 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-08 05:52:17.138948 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-08 05:52:17.139082 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-08-02-32-44-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-08 05:52:17.139122 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-08 05:52:17.139136 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-08 05:52:17.139147 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-08 05:52:17.139189 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bd3944a6-94a2-4419-9995-37d6054a2669', 'scsi-SQEMU_QEMU_HARDDISK_bd3944a6-94a2-4419-9995-37d6054a2669'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'bd3944a6', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bd3944a6-94a2-4419-9995-37d6054a2669-part16', 'scsi-SQEMU_QEMU_HARDDISK_bd3944a6-94a2-4419-9995-37d6054a2669-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bd3944a6-94a2-4419-9995-37d6054a2669-part14', 'scsi-SQEMU_QEMU_HARDDISK_bd3944a6-94a2-4419-9995-37d6054a2669-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bd3944a6-94a2-4419-9995-37d6054a2669-part15', 'scsi-SQEMU_QEMU_HARDDISK_bd3944a6-94a2-4419-9995-37d6054a2669-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bd3944a6-94a2-4419-9995-37d6054a2669-part1', 'scsi-SQEMU_QEMU_HARDDISK_bd3944a6-94a2-4419-9995-37d6054a2669-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-08 05:52:17.139212 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-08 05:52:17.139225 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-08 05:52:17.139237 | orchestrator | skipping: [testbed-node-0]
2026-02-08 05:52:17.139251 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-08 05:52:17.139263 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-08 05:52:17.139283 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-08 05:52:17.288853 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-08-02-32-47-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-08 05:52:17.288984 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-08 05:52:17.289009 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-08 05:52:17.289024 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-08 05:52:17.289072 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7f0c6f27-797f-46da-82fd-067f06c1f72b', 'scsi-SQEMU_QEMU_HARDDISK_7f0c6f27-797f-46da-82fd-067f06c1f72b'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '7f0c6f27', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7f0c6f27-797f-46da-82fd-067f06c1f72b-part16', 'scsi-SQEMU_QEMU_HARDDISK_7f0c6f27-797f-46da-82fd-067f06c1f72b-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7f0c6f27-797f-46da-82fd-067f06c1f72b-part14', 'scsi-SQEMU_QEMU_HARDDISK_7f0c6f27-797f-46da-82fd-067f06c1f72b-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7f0c6f27-797f-46da-82fd-067f06c1f72b-part15', 'scsi-SQEMU_QEMU_HARDDISK_7f0c6f27-797f-46da-82fd-067f06c1f72b-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7f0c6f27-797f-46da-82fd-067f06c1f72b-part1', 'scsi-SQEMU_QEMU_HARDDISK_7f0c6f27-797f-46da-82fd-067f06c1f72b-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-08 05:52:17.289098 | orchestrator | skipping: [testbed-node-1]
2026-02-08 05:52:17.289109 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-08 05:52:17.289118 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-08 05:52:17.289127 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-08 05:52:17.289137 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--edf9913e--48af--595a--836b--515c584cb757-osd--block--edf9913e--48af--595a--836b--515c584cb757', 'dm-uuid-LVM-L5RzS25dNAwEfaxQtT2dCZejWp1FAHTXCd6gt7yIXasPp3uR45a3LLQBdDhLIQVH'], 'uuids': ['f792a4bc-313c-4001-af18-36958a398c99'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'f64e84f9', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['Cd6gt7-yIXa-sPp3-uR45-a3LL-QBdD-hLIQVH']}}, 'ansible_loop_var': 'item'})
2026-02-08 05:52:17.289164 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1b3b2ead-9b22-4b4d-a30d-f81b3b57c055', 'scsi-SQEMU_QEMU_HARDDISK_1b3b2ead-9b22-4b4d-a30d-f81b3b57c055'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '1b3b2ead', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-08 05:52:17.425040 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-a30P2b-igV6-wMzf-UnqL-xNhF-mfyy-hPRy0q', 'scsi-0QEMU_QEMU_HARDDISK_f936cccd-0c4c-4cd7-b507-1bacbfb024c1', 'scsi-SQEMU_QEMU_HARDDISK_f936cccd-0c4c-4cd7-b507-1bacbfb024c1'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'f936cccd', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--658e9559--2696--538a--a0a4--811fe95d0be4-osd--block--658e9559--2696--538a--a0a4--811fe95d0be4']}}, 'ansible_loop_var': 'item'})
2026-02-08 05:52:17.425128 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-08 05:52:17.425143 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-08 05:52:17.425153 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-08-02-32-43-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-08 05:52:17.425163 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-08 05:52:17.425172 | orchestrator | skipping: [testbed-node-2]
2026-02-08 05:52:17.425211 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-cwGJpv-XAPY-9den-pvHR-scKi-cxVH-CDOUqc', 'dm-uuid-CRYPT-LUKS2-b54d8ff93c624789820c72a73c143a76-cwGJpv-XAPY-9den-pvHR-scKi-cxVH-CDOUqc'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-08 05:52:17.425238 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-08 05:52:17.425248 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--658e9559--2696--538a--a0a4--811fe95d0be4-osd--block--658e9559--2696--538a--a0a4--811fe95d0be4', 'dm-uuid-LVM-0Xks5ejJzKHIGgn8kHKR683njlf26z09cwGJpvXAPY9denpvHRscKicxVHCDOUqc'], 'uuids': ['b54d8ff9-3c62-4789-820c-72a73c143a76'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'f936cccd', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['cwGJpv-XAPY-9den-pvHR-scKi-cxVH-CDOUqc']}}, 'ansible_loop_var': 'item'})
2026-02-08 05:52:17.425257 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-loHc10-mddm-V6c9-PmX5-I7TN-7pE7-Bu9UYi', 'scsi-0QEMU_QEMU_HARDDISK_f64e84f9-05a0-4abf-b38a-86e604a2541e', 'scsi-SQEMU_QEMU_HARDDISK_f64e84f9-05a0-4abf-b38a-86e604a2541e'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'f64e84f9', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--edf9913e--48af--595a--836b--515c584cb757-osd--block--edf9913e--48af--595a--836b--515c584cb757']}}, 'ansible_loop_var': 'item'})
2026-02-08 05:52:17.425266 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-08 05:52:17.425287 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8eb95c7e-79eb-481c-a9c3-b8351915337f', 'scsi-SQEMU_QEMU_HARDDISK_8eb95c7e-79eb-481c-a9c3-b8351915337f'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '8eb95c7e', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8eb95c7e-79eb-481c-a9c3-b8351915337f-part16', 'scsi-SQEMU_QEMU_HARDDISK_8eb95c7e-79eb-481c-a9c3-b8351915337f-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8eb95c7e-79eb-481c-a9c3-b8351915337f-part14', 'scsi-SQEMU_QEMU_HARDDISK_8eb95c7e-79eb-481c-a9c3-b8351915337f-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8eb95c7e-79eb-481c-a9c3-b8351915337f-part15', 'scsi-SQEMU_QEMU_HARDDISK_8eb95c7e-79eb-481c-a9c3-b8351915337f-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8eb95c7e-79eb-481c-a9c3-b8351915337f-part1', 'scsi-SQEMU_QEMU_HARDDISK_8eb95c7e-79eb-481c-a9c3-b8351915337f-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-08 05:52:17.557085 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-08 05:52:17.557189 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-08 05:52:17.557207 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-Cd6gt7-yIXa-sPp3-uR45-a3LL-QBdD-hLIQVH', 'dm-uuid-CRYPT-LUKS2-f792a4bc313c4001af1836958a398c99-Cd6gt7-yIXa-sPp3-uR45-a3LL-QBdD-hLIQVH'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-08 05:52:17.557266 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-08 05:52:17.557289 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--98a4cb59--dd7a--5ec9--b94d--174a40339046-osd--block--98a4cb59--dd7a--5ec9--b94d--174a40339046', 'dm-uuid-LVM-yHxwYWjXM5rCMy03I7W1d35MN3jq2cawRH7KXMBBm3D5PVB6eMaUeZw4tW6dlYUm'], 'uuids': ['4e6e3c31-d8d4-42a4-8244-dd9f1ae0f37a'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'e630d271', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['RH7KXM-BBm3-D5PV-B6eM-aUeZ-w4tW-6dlYUm']}}, 'ansible_loop_var': 'item'})
2026-02-08 05:52:17.557334 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_33bf36ec-77e2-4563-8915-2d028f665133', 'scsi-SQEMU_QEMU_HARDDISK_33bf36ec-77e2-4563-8915-2d028f665133'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '33bf36ec', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-08 05:52:17.557356 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-98ZUfd-U8I1-l7ve-H02s-xtrR-VP1N-Q4DCna', 'scsi-0QEMU_QEMU_HARDDISK_2c937877-c8d8-449b-a5f6-0239aca924e2', 'scsi-SQEMU_QEMU_HARDDISK_2c937877-c8d8-449b-a5f6-0239aca924e2'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '2c937877', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--1f36c880--548c--5a66--856f--2c4e799d94fc-osd--block--1f36c880--548c--5a66--856f--2c4e799d94fc']}}, 'ansible_loop_var': 'item'})
2026-02-08 05:52:17.557379 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-08 05:52:17.557531 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-08 05:52:17.557558 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-08-02-32-46-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {},
'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-08 05:52:17.557579 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-08 05:52:17.557617 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-4yMbT2-42rf-LMHw-z72r-UJ4I-0Vm8-FBkphH', 'dm-uuid-CRYPT-LUKS2-d4dc753534094cc38b724c68e8894d37-4yMbT2-42rf-LMHw-z72r-UJ4I-0Vm8-FBkphH'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-08 05:52:17.781584 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 
'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-08 05:52:17.781685 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--1f36c880--548c--5a66--856f--2c4e799d94fc-osd--block--1f36c880--548c--5a66--856f--2c4e799d94fc', 'dm-uuid-LVM-EUBg1b33eC5p7VqB18wXpct56RBS9v7i4yMbT242rfLMHwz72rUJ4I0Vm8FBkphH'], 'uuids': ['d4dc7535-3409-4cc3-8b72-4c68e8894d37'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '2c937877', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['4yMbT2-42rf-LMHw-z72r-UJ4I-0Vm8-FBkphH']}}, 'ansible_loop_var': 'item'})  2026-02-08 05:52:17.781741 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-HUiqZb-DVo1-dI2F-bvgQ-kR6m-4ift-nZXZfU', 'scsi-0QEMU_QEMU_HARDDISK_e630d271-3aac-4ce5-a41f-fdcd87f60fea', 'scsi-SQEMU_QEMU_HARDDISK_e630d271-3aac-4ce5-a41f-fdcd87f60fea'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'e630d271', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 
'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--98a4cb59--dd7a--5ec9--b94d--174a40339046-osd--block--98a4cb59--dd7a--5ec9--b94d--174a40339046']}}, 'ansible_loop_var': 'item'})  2026-02-08 05:52:17.781757 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-08 05:52:17.781791 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6152c601-f22c-4ab1-825c-0b7a8c2f9bf8', 'scsi-SQEMU_QEMU_HARDDISK_6152c601-f22c-4ab1-825c-0b7a8c2f9bf8'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '6152c601', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6152c601-f22c-4ab1-825c-0b7a8c2f9bf8-part16', 'scsi-SQEMU_QEMU_HARDDISK_6152c601-f22c-4ab1-825c-0b7a8c2f9bf8-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_6152c601-f22c-4ab1-825c-0b7a8c2f9bf8-part14', 'scsi-SQEMU_QEMU_HARDDISK_6152c601-f22c-4ab1-825c-0b7a8c2f9bf8-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6152c601-f22c-4ab1-825c-0b7a8c2f9bf8-part15', 'scsi-SQEMU_QEMU_HARDDISK_6152c601-f22c-4ab1-825c-0b7a8c2f9bf8-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6152c601-f22c-4ab1-825c-0b7a8c2f9bf8-part1', 'scsi-SQEMU_QEMU_HARDDISK_6152c601-f22c-4ab1-825c-0b7a8c2f9bf8-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-08 05:52:17.781815 | orchestrator | skipping: [testbed-node-3] 2026-02-08 05:52:17.781834 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-08 05:52:17.781848 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-08 05:52:17.781860 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-RH7KXM-BBm3-D5PV-B6eM-aUeZ-w4tW-6dlYUm', 'dm-uuid-CRYPT-LUKS2-4e6e3c31d8d442a48244dd9f1ae0f37a-RH7KXM-BBm3-D5PV-B6eM-aUeZ-w4tW-6dlYUm'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 
'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-08 05:52:17.781873 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-08 05:52:17.781893 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--b3e05e81--e469--5668--9a53--5e8f92025307-osd--block--b3e05e81--e469--5668--9a53--5e8f92025307', 'dm-uuid-LVM-aSc4wxc22lxsBl3bsZYV01tD6GZC6hnhTrIfEw7ihzB97cb6a1fBLoGV7FMIqFCx'], 'uuids': ['1cd382bb-e631-457f-8e23-7bd2a48df188'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '88e353e1', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['TrIfEw-7ihz-B97c-b6a1-fBLo-GV7F-MIqFCx']}}, 'ansible_loop_var': 'item'})  2026-02-08 05:52:17.913724 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | 
default(False) | bool', 'item': {'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_380fccde-fc16-4afd-8581-e221e230c62f', 'scsi-SQEMU_QEMU_HARDDISK_380fccde-fc16-4afd-8581-e221e230c62f'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '380fccde', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-08 05:52:17.913938 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-rTZdUO-b7Sr-jZr2-VH77-tbu4-uLLV-wifIFG', 'scsi-0QEMU_QEMU_HARDDISK_fd096023-3e18-4205-a743-fc49c7d9ed02', 'scsi-SQEMU_QEMU_HARDDISK_fd096023-3e18-4205-a743-fc49c7d9ed02'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'fd096023', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--7ad89cb8--326d--5a7d--8045--6e04c12be05a-osd--block--7ad89cb8--326d--5a7d--8045--6e04c12be05a']}}, 'ansible_loop_var': 'item'})  2026-02-08 05:52:17.913961 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-08 05:52:17.913972 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-08 05:52:17.913980 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-08-02-32-42-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 
'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-08 05:52:17.914006 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-08 05:52:17.914097 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-UV2PuY-Xg8S-GZaH-izGQ-3g7w-DUil-CwJsOM', 'dm-uuid-CRYPT-LUKS2-acd8aeb3a91948899ba0cb5b1d4bdfb1-UV2PuY-Xg8S-GZaH-izGQ-3g7w-DUil-CwJsOM'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-08 05:52:17.914108 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 
'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-08 05:52:17.914118 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--7ad89cb8--326d--5a7d--8045--6e04c12be05a-osd--block--7ad89cb8--326d--5a7d--8045--6e04c12be05a', 'dm-uuid-LVM-rcT9E5PUfbc9LNMW28fCAUBBadlLkYxWUV2PuYXg8SGZaHizGQ3g7wDUilCwJsOM'], 'uuids': ['acd8aeb3-a919-4889-9ba0-cb5b1d4bdfb1'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'fd096023', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['UV2PuY-Xg8S-GZaH-izGQ-3g7w-DUil-CwJsOM']}}, 'ansible_loop_var': 'item'})  2026-02-08 05:52:17.914128 | orchestrator | skipping: [testbed-node-4] 2026-02-08 05:52:17.914138 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-0tVmcc-zy2m-2uFM-aZrn-CnbJ-B058-J5TU15', 'scsi-0QEMU_QEMU_HARDDISK_88e353e1-d5f5-455b-9174-972f0fde258a', 'scsi-SQEMU_QEMU_HARDDISK_88e353e1-d5f5-455b-9174-972f0fde258a'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '88e353e1', 'removable': '0', 
'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--b3e05e81--e469--5668--9a53--5e8f92025307-osd--block--b3e05e81--e469--5668--9a53--5e8f92025307']}}, 'ansible_loop_var': 'item'})  2026-02-08 05:52:17.914154 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-08 05:52:18.056908 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0b6d2541-fe07-44e8-aadf-a529695f9c1d', 'scsi-SQEMU_QEMU_HARDDISK_0b6d2541-fe07-44e8-aadf-a529695f9c1d'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '0b6d2541', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0b6d2541-fe07-44e8-aadf-a529695f9c1d-part16', 'scsi-SQEMU_QEMU_HARDDISK_0b6d2541-fe07-44e8-aadf-a529695f9c1d-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': 
'09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0b6d2541-fe07-44e8-aadf-a529695f9c1d-part14', 'scsi-SQEMU_QEMU_HARDDISK_0b6d2541-fe07-44e8-aadf-a529695f9c1d-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0b6d2541-fe07-44e8-aadf-a529695f9c1d-part15', 'scsi-SQEMU_QEMU_HARDDISK_0b6d2541-fe07-44e8-aadf-a529695f9c1d-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0b6d2541-fe07-44e8-aadf-a529695f9c1d-part1', 'scsi-SQEMU_QEMU_HARDDISK_0b6d2541-fe07-44e8-aadf-a529695f9c1d-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-08 05:52:18.057012 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-08 05:52:18.057039 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-08 05:52:18.057084 | orchestrator | skipping: [testbed-manager] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'}) 
 2026-02-08 05:52:18.057162 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-TrIfEw-7ihz-B97c-b6a1-fBLo-GV7F-MIqFCx', 'dm-uuid-CRYPT-LUKS2-1cd382bbe631457f8e237bd2a48df188-TrIfEw-7ihz-B97c-b6a1-fBLo-GV7F-MIqFCx'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-08 05:52:18.057186 | orchestrator | skipping: [testbed-manager] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-08 05:52:18.057199 | orchestrator | skipping: [testbed-manager] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 
'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-08 05:52:18.057211 | orchestrator | skipping: [testbed-node-5] 2026-02-08 05:52:18.057225 | orchestrator | skipping: [testbed-manager] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-08-02-33-14-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1060', 'sectorsize': '2048', 'size': '530.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-08 05:52:18.057237 | orchestrator | skipping: [testbed-manager] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-08 05:52:18.057265 | orchestrator | skipping: [testbed-manager] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 
'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-08 05:52:26.510535 | orchestrator | skipping: [testbed-manager] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-08 05:52:26.510739 | orchestrator | skipping: [testbed-manager] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8e0ebcee-3d0a-448e-8b07-4380ef670051', 'scsi-SQEMU_QEMU_HARDDISK_8e0ebcee-3d0a-448e-8b07-4380ef670051'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '8e0ebcee', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8e0ebcee-3d0a-448e-8b07-4380ef670051-part16', 'scsi-SQEMU_QEMU_HARDDISK_8e0ebcee-3d0a-448e-8b07-4380ef670051-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 
'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8e0ebcee-3d0a-448e-8b07-4380ef670051-part14', 'scsi-SQEMU_QEMU_HARDDISK_8e0ebcee-3d0a-448e-8b07-4380ef670051-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8e0ebcee-3d0a-448e-8b07-4380ef670051-part15', 'scsi-SQEMU_QEMU_HARDDISK_8e0ebcee-3d0a-448e-8b07-4380ef670051-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8e0ebcee-3d0a-448e-8b07-4380ef670051-part1', 'scsi-SQEMU_QEMU_HARDDISK_8e0ebcee-3d0a-448e-8b07-4380ef670051-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-08 05:52:26.510772 | orchestrator | skipping: [testbed-manager] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-08 05:52:26.510846 | orchestrator | skipping: [testbed-manager] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-08 05:52:26.510870 | orchestrator | skipping: [testbed-manager] 2026-02-08 05:52:26.510893 | orchestrator | 2026-02-08 05:52:26.510915 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-02-08 05:52:26.510937 | orchestrator | Sunday 08 February 2026 05:52:18 +0000 (0:00:01.655) 0:01:16.100 ******* 2026-02-08 05:52:26.510957 | orchestrator | ok: [testbed-node-0] 2026-02-08 05:52:26.510978 | orchestrator | ok: [testbed-node-1] 2026-02-08 05:52:26.510999 | orchestrator | ok: [testbed-node-2] 2026-02-08 05:52:26.511020 | orchestrator | ok: [testbed-node-3] 2026-02-08 
05:52:26.511039 | orchestrator | ok: [testbed-node-4] 2026-02-08 05:52:26.511057 | orchestrator | ok: [testbed-node-5] 2026-02-08 05:52:26.511076 | orchestrator | ok: [testbed-manager] 2026-02-08 05:52:26.511095 | orchestrator | 2026-02-08 05:52:26.511113 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2026-02-08 05:52:26.511132 | orchestrator | Sunday 08 February 2026 05:52:19 +0000 (0:00:01.367) 0:01:17.468 ******* 2026-02-08 05:52:26.511149 | orchestrator | ok: [testbed-node-0] 2026-02-08 05:52:26.511169 | orchestrator | ok: [testbed-node-1] 2026-02-08 05:52:26.511189 | orchestrator | ok: [testbed-node-2] 2026-02-08 05:52:26.511210 | orchestrator | ok: [testbed-node-3] 2026-02-08 05:52:26.511230 | orchestrator | ok: [testbed-node-4] 2026-02-08 05:52:26.511248 | orchestrator | ok: [testbed-node-5] 2026-02-08 05:52:26.511266 | orchestrator | ok: [testbed-manager] 2026-02-08 05:52:26.511286 | orchestrator | 2026-02-08 05:52:26.511315 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-02-08 05:52:26.511337 | orchestrator | Sunday 08 February 2026 05:52:20 +0000 (0:00:00.742) 0:01:18.210 ******* 2026-02-08 05:52:26.511357 | orchestrator | ok: [testbed-node-0] 2026-02-08 05:52:26.511377 | orchestrator | ok: [testbed-node-1] 2026-02-08 05:52:26.511396 | orchestrator | ok: [testbed-node-2] 2026-02-08 05:52:26.511446 | orchestrator | ok: [testbed-node-3] 2026-02-08 05:52:26.511466 | orchestrator | ok: [testbed-node-4] 2026-02-08 05:52:26.511485 | orchestrator | skipping: [testbed-manager] 2026-02-08 05:52:26.511505 | orchestrator | ok: [testbed-node-5] 2026-02-08 05:52:26.511525 | orchestrator | 2026-02-08 05:52:26.511545 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-02-08 05:52:26.511565 | orchestrator | Sunday 08 February 2026 05:52:21 +0000 (0:00:01.267) 0:01:19.478 ******* 2026-02-08 05:52:26.511585 | 
orchestrator | skipping: [testbed-node-0] 2026-02-08 05:52:26.511604 | orchestrator | skipping: [testbed-node-1] 2026-02-08 05:52:26.511624 | orchestrator | skipping: [testbed-node-2] 2026-02-08 05:52:26.511641 | orchestrator | skipping: [testbed-node-3] 2026-02-08 05:52:26.511658 | orchestrator | skipping: [testbed-node-4] 2026-02-08 05:52:26.511677 | orchestrator | skipping: [testbed-node-5] 2026-02-08 05:52:26.511695 | orchestrator | skipping: [testbed-manager] 2026-02-08 05:52:26.511714 | orchestrator | 2026-02-08 05:52:26.511732 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-02-08 05:52:26.511750 | orchestrator | Sunday 08 February 2026 05:52:22 +0000 (0:00:00.760) 0:01:20.238 ******* 2026-02-08 05:52:26.511768 | orchestrator | skipping: [testbed-node-0] 2026-02-08 05:52:26.511803 | orchestrator | skipping: [testbed-node-1] 2026-02-08 05:52:26.511822 | orchestrator | skipping: [testbed-node-2] 2026-02-08 05:52:26.511840 | orchestrator | skipping: [testbed-node-3] 2026-02-08 05:52:26.511859 | orchestrator | skipping: [testbed-node-4] 2026-02-08 05:52:26.511877 | orchestrator | skipping: [testbed-node-5] 2026-02-08 05:52:26.511894 | orchestrator | ok: [testbed-manager -> testbed-node-2(192.168.16.12)] 2026-02-08 05:52:26.511913 | orchestrator | 2026-02-08 05:52:26.511931 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-02-08 05:52:26.511950 | orchestrator | Sunday 08 February 2026 05:52:23 +0000 (0:00:01.587) 0:01:21.826 ******* 2026-02-08 05:52:26.511968 | orchestrator | skipping: [testbed-node-0] 2026-02-08 05:52:26.511986 | orchestrator | skipping: [testbed-node-1] 2026-02-08 05:52:26.512004 | orchestrator | skipping: [testbed-node-2] 2026-02-08 05:52:26.512023 | orchestrator | skipping: [testbed-node-3] 2026-02-08 05:52:26.512041 | orchestrator | skipping: [testbed-node-4] 2026-02-08 05:52:26.512057 | orchestrator | skipping: [testbed-node-5] 
2026-02-08 05:52:26.512068 | orchestrator | skipping: [testbed-manager] 2026-02-08 05:52:26.512079 | orchestrator | 2026-02-08 05:52:26.512090 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-02-08 05:52:26.512100 | orchestrator | Sunday 08 February 2026 05:52:24 +0000 (0:00:00.805) 0:01:22.631 ******* 2026-02-08 05:52:26.512111 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-02-08 05:52:26.512122 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-0) 2026-02-08 05:52:26.512133 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2026-02-08 05:52:26.512145 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-0) 2026-02-08 05:52:26.512164 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1) 2026-02-08 05:52:26.512182 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2026-02-08 05:52:26.512199 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-1) 2026-02-08 05:52:26.512217 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0) 2026-02-08 05:52:26.512236 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-2) 2026-02-08 05:52:26.512255 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0) 2026-02-08 05:52:26.512274 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1) 2026-02-08 05:52:26.512294 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1) 2026-02-08 05:52:26.512311 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2) 2026-02-08 05:52:26.512329 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0) 2026-02-08 05:52:26.512347 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2) 2026-02-08 05:52:26.512365 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2) 2026-02-08 05:52:26.512383 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0) 2026-02-08 05:52:26.512431 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1) 2026-02-08 
05:52:26.512468 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1) 2026-02-08 05:52:26.512488 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2) 2026-02-08 05:52:26.512508 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2) 2026-02-08 05:52:26.512527 | orchestrator | 2026-02-08 05:52:26.512542 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-02-08 05:52:26.512568 | orchestrator | Sunday 08 February 2026 05:52:26 +0000 (0:00:01.908) 0:01:24.539 ******* 2026-02-08 05:52:50.735550 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-02-08 05:52:50.735694 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-02-08 05:52:50.735723 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-02-08 05:52:50.735741 | orchestrator | skipping: [testbed-node-0] 2026-02-08 05:52:50.735761 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2026-02-08 05:52:50.735805 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2026-02-08 05:52:50.735846 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2026-02-08 05:52:50.735898 | orchestrator | skipping: [testbed-node-1] 2026-02-08 05:52:50.735911 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2026-02-08 05:52:50.735922 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2026-02-08 05:52:50.735933 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2026-02-08 05:52:50.735944 | orchestrator | skipping: [testbed-node-2] 2026-02-08 05:52:50.735955 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2026-02-08 05:52:50.735966 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2026-02-08 05:52:50.736127 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2026-02-08 05:52:50.736158 | orchestrator | skipping: [testbed-node-3] 
2026-02-08 05:52:50.736176 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2026-02-08 05:52:50.736196 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2026-02-08 05:52:50.736215 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2026-02-08 05:52:50.736236 | orchestrator | skipping: [testbed-node-4] 2026-02-08 05:52:50.736311 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2026-02-08 05:52:50.736327 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2026-02-08 05:52:50.736339 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2026-02-08 05:52:50.736350 | orchestrator | skipping: [testbed-node-5] 2026-02-08 05:52:50.736363 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2026-02-08 05:52:50.736382 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)  2026-02-08 05:52:50.736399 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)  2026-02-08 05:52:50.736483 | orchestrator | skipping: [testbed-manager] 2026-02-08 05:52:50.736505 | orchestrator | 2026-02-08 05:52:50.736519 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-02-08 05:52:50.736531 | orchestrator | Sunday 08 February 2026 05:52:27 +0000 (0:00:01.082) 0:01:25.622 ******* 2026-02-08 05:52:50.736542 | orchestrator | skipping: [testbed-node-0] 2026-02-08 05:52:50.736553 | orchestrator | skipping: [testbed-node-1] 2026-02-08 05:52:50.736564 | orchestrator | skipping: [testbed-node-2] 2026-02-08 05:52:50.736574 | orchestrator | skipping: [testbed-manager] 2026-02-08 05:52:50.736586 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-08 05:52:50.736598 | orchestrator | 2026-02-08 05:52:50.736609 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface 
from node "{{ ceph_dashboard_call_item }}"] *** 2026-02-08 05:52:50.736622 | orchestrator | Sunday 08 February 2026 05:52:28 +0000 (0:00:01.002) 0:01:26.624 ******* 2026-02-08 05:52:50.736633 | orchestrator | skipping: [testbed-node-3] 2026-02-08 05:52:50.736644 | orchestrator | skipping: [testbed-node-4] 2026-02-08 05:52:50.736655 | orchestrator | skipping: [testbed-node-5] 2026-02-08 05:52:50.736665 | orchestrator | 2026-02-08 05:52:50.736677 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-02-08 05:52:50.736688 | orchestrator | Sunday 08 February 2026 05:52:29 +0000 (0:00:00.618) 0:01:27.243 ******* 2026-02-08 05:52:50.736699 | orchestrator | skipping: [testbed-node-3] 2026-02-08 05:52:50.736709 | orchestrator | skipping: [testbed-node-4] 2026-02-08 05:52:50.736720 | orchestrator | skipping: [testbed-node-5] 2026-02-08 05:52:50.736731 | orchestrator | 2026-02-08 05:52:50.736741 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-02-08 05:52:50.736753 | orchestrator | Sunday 08 February 2026 05:52:29 +0000 (0:00:00.338) 0:01:27.581 ******* 2026-02-08 05:52:50.736763 | orchestrator | skipping: [testbed-node-3] 2026-02-08 05:52:50.736774 | orchestrator | skipping: [testbed-node-4] 2026-02-08 05:52:50.736785 | orchestrator | skipping: [testbed-node-5] 2026-02-08 05:52:50.736795 | orchestrator | 2026-02-08 05:52:50.736806 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-02-08 05:52:50.736817 | orchestrator | Sunday 08 February 2026 05:52:29 +0000 (0:00:00.386) 0:01:27.967 ******* 2026-02-08 05:52:50.736843 | orchestrator | ok: [testbed-node-3] 2026-02-08 05:52:50.736857 | orchestrator | ok: [testbed-node-4] 2026-02-08 05:52:50.736875 | orchestrator | ok: [testbed-node-5] 2026-02-08 05:52:50.736893 | orchestrator | 2026-02-08 05:52:50.736911 | orchestrator | TASK [ceph-facts : Set_fact _interface] 
**************************************** 2026-02-08 05:52:50.736928 | orchestrator | Sunday 08 February 2026 05:52:30 +0000 (0:00:00.473) 0:01:28.441 ******* 2026-02-08 05:52:50.736945 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-02-08 05:52:50.736962 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-02-08 05:52:50.736980 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-02-08 05:52:50.736998 | orchestrator | skipping: [testbed-node-3] 2026-02-08 05:52:50.737017 | orchestrator | 2026-02-08 05:52:50.737035 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-02-08 05:52:50.737053 | orchestrator | Sunday 08 February 2026 05:52:30 +0000 (0:00:00.460) 0:01:28.901 ******* 2026-02-08 05:52:50.737070 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-02-08 05:52:50.737088 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-02-08 05:52:50.737107 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-02-08 05:52:50.737125 | orchestrator | skipping: [testbed-node-3] 2026-02-08 05:52:50.737143 | orchestrator | 2026-02-08 05:52:50.737154 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-02-08 05:52:50.737190 | orchestrator | Sunday 08 February 2026 05:52:31 +0000 (0:00:00.708) 0:01:29.609 ******* 2026-02-08 05:52:50.737202 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-02-08 05:52:50.737213 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-02-08 05:52:50.737224 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-02-08 05:52:50.737235 | orchestrator | skipping: [testbed-node-3] 2026-02-08 05:52:50.737246 | orchestrator | 2026-02-08 05:52:50.737257 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-02-08 
05:52:50.737268 | orchestrator | Sunday 08 February 2026 05:52:32 +0000 (0:00:00.659) 0:01:30.269 ******* 2026-02-08 05:52:50.737279 | orchestrator | ok: [testbed-node-3] 2026-02-08 05:52:50.737290 | orchestrator | ok: [testbed-node-4] 2026-02-08 05:52:50.737301 | orchestrator | ok: [testbed-node-5] 2026-02-08 05:52:50.737311 | orchestrator | 2026-02-08 05:52:50.737322 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-02-08 05:52:50.737333 | orchestrator | Sunday 08 February 2026 05:52:32 +0000 (0:00:00.620) 0:01:30.890 ******* 2026-02-08 05:52:50.737344 | orchestrator | ok: [testbed-node-3] => (item=0) 2026-02-08 05:52:50.737355 | orchestrator | ok: [testbed-node-4] => (item=0) 2026-02-08 05:52:50.737366 | orchestrator | ok: [testbed-node-5] => (item=0) 2026-02-08 05:52:50.737377 | orchestrator | 2026-02-08 05:52:50.737388 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-02-08 05:52:50.737399 | orchestrator | Sunday 08 February 2026 05:52:33 +0000 (0:00:00.580) 0:01:31.470 ******* 2026-02-08 05:52:50.737412 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-02-08 05:52:50.737472 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-08 05:52:50.737494 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-08 05:52:50.737514 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-02-08 05:52:50.737533 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-02-08 05:52:50.737550 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-02-08 05:52:50.737567 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-02-08 05:52:50.737578 | orchestrator | 2026-02-08 
05:52:50.737589 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-02-08 05:52:50.737635 | orchestrator | Sunday 08 February 2026 05:52:34 +0000 (0:00:00.778) 0:01:32.249 ******* 2026-02-08 05:52:50.737646 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-02-08 05:52:50.737657 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-08 05:52:50.737668 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-08 05:52:50.737679 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-02-08 05:52:50.737690 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-02-08 05:52:50.737700 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-02-08 05:52:50.737711 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-02-08 05:52:50.737722 | orchestrator | 2026-02-08 05:52:50.737733 | orchestrator | TASK [ceph-infra : Update cache for Debian based OSs] ************************** 2026-02-08 05:52:50.737743 | orchestrator | Sunday 08 February 2026 05:52:36 +0000 (0:00:02.357) 0:01:34.606 ******* 2026-02-08 05:52:50.737754 | orchestrator | changed: [testbed-node-3] 2026-02-08 05:52:50.737765 | orchestrator | changed: [testbed-node-4] 2026-02-08 05:52:50.737776 | orchestrator | changed: [testbed-node-5] 2026-02-08 05:52:50.737787 | orchestrator | changed: [testbed-manager] 2026-02-08 05:52:50.737798 | orchestrator | changed: [testbed-node-1] 2026-02-08 05:52:50.737808 | orchestrator | changed: [testbed-node-0] 2026-02-08 05:52:50.737819 | orchestrator | changed: [testbed-node-2] 2026-02-08 05:52:50.737830 | orchestrator | 2026-02-08 05:52:50.737841 | orchestrator | TASK [ceph-infra : Include_tasks configure_firewall.yml] 
*********************** 2026-02-08 05:52:50.737852 | orchestrator | Sunday 08 February 2026 05:52:46 +0000 (0:00:10.012) 0:01:44.619 ******* 2026-02-08 05:52:50.737862 | orchestrator | skipping: [testbed-node-0] 2026-02-08 05:52:50.737874 | orchestrator | skipping: [testbed-node-1] 2026-02-08 05:52:50.737884 | orchestrator | skipping: [testbed-node-2] 2026-02-08 05:52:50.737895 | orchestrator | skipping: [testbed-node-3] 2026-02-08 05:52:50.737906 | orchestrator | skipping: [testbed-node-4] 2026-02-08 05:52:50.737917 | orchestrator | skipping: [testbed-node-5] 2026-02-08 05:52:50.737928 | orchestrator | skipping: [testbed-manager] 2026-02-08 05:52:50.737938 | orchestrator | 2026-02-08 05:52:50.737949 | orchestrator | TASK [ceph-infra : Include_tasks setup_ntp.yml] ******************************** 2026-02-08 05:52:50.737960 | orchestrator | Sunday 08 February 2026 05:52:47 +0000 (0:00:01.054) 0:01:45.673 ******* 2026-02-08 05:52:50.737971 | orchestrator | skipping: [testbed-node-0] 2026-02-08 05:52:50.737982 | orchestrator | skipping: [testbed-node-1] 2026-02-08 05:52:50.737992 | orchestrator | skipping: [testbed-node-2] 2026-02-08 05:52:50.738108 | orchestrator | skipping: [testbed-node-3] 2026-02-08 05:52:50.738125 | orchestrator | skipping: [testbed-node-4] 2026-02-08 05:52:50.738136 | orchestrator | skipping: [testbed-node-5] 2026-02-08 05:52:50.738147 | orchestrator | skipping: [testbed-manager] 2026-02-08 05:52:50.738158 | orchestrator | 2026-02-08 05:52:50.738168 | orchestrator | TASK [ceph-infra : Add logrotate configuration] ******************************** 2026-02-08 05:52:50.738180 | orchestrator | Sunday 08 February 2026 05:52:48 +0000 (0:00:00.745) 0:01:46.418 ******* 2026-02-08 05:52:50.738191 | orchestrator | skipping: [testbed-manager] 2026-02-08 05:52:50.738201 | orchestrator | changed: [testbed-node-1] 2026-02-08 05:52:50.738212 | orchestrator | changed: [testbed-node-3] 2026-02-08 05:52:50.738223 | orchestrator | changed: [testbed-node-2] 
2026-02-08 05:52:50.738234 | orchestrator | changed: [testbed-node-0] 2026-02-08 05:52:50.738244 | orchestrator | changed: [testbed-node-4] 2026-02-08 05:52:50.738255 | orchestrator | changed: [testbed-node-5] 2026-02-08 05:52:50.738266 | orchestrator | 2026-02-08 05:52:50.738288 | orchestrator | TASK [ceph-validate : Include check_system.yml] ******************************** 2026-02-08 05:53:07.661065 | orchestrator | Sunday 08 February 2026 05:52:50 +0000 (0:00:02.346) 0:01:48.765 ******* 2026-02-08 05:53:07.661179 | orchestrator | included: /ansible/roles/ceph-validate/tasks/check_system.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager 2026-02-08 05:53:07.661219 | orchestrator | 2026-02-08 05:53:07.661233 | orchestrator | TASK [ceph-validate : Fail on unsupported ansible version (1.X)] *************** 2026-02-08 05:53:07.661244 | orchestrator | Sunday 08 February 2026 05:52:52 +0000 (0:00:01.872) 0:01:50.637 ******* 2026-02-08 05:53:07.661256 | orchestrator | skipping: [testbed-node-0] 2026-02-08 05:53:07.661268 | orchestrator | skipping: [testbed-node-1] 2026-02-08 05:53:07.661279 | orchestrator | skipping: [testbed-node-2] 2026-02-08 05:53:07.661289 | orchestrator | skipping: [testbed-node-3] 2026-02-08 05:53:07.661305 | orchestrator | skipping: [testbed-node-4] 2026-02-08 05:53:07.661324 | orchestrator | skipping: [testbed-node-5] 2026-02-08 05:53:07.661343 | orchestrator | skipping: [testbed-manager] 2026-02-08 05:53:07.661361 | orchestrator | 2026-02-08 05:53:07.661379 | orchestrator | TASK [ceph-validate : Fail on unsupported system] ****************************** 2026-02-08 05:53:07.661416 | orchestrator | Sunday 08 February 2026 05:52:53 +0000 (0:00:01.041) 0:01:51.679 ******* 2026-02-08 05:53:07.661504 | orchestrator | skipping: [testbed-node-0] 2026-02-08 05:53:07.661517 | orchestrator | skipping: [testbed-node-1] 2026-02-08 05:53:07.661528 | orchestrator | skipping: 
[testbed-node-2] 2026-02-08 05:53:07.661539 | orchestrator | skipping: [testbed-node-3] 2026-02-08 05:53:07.661549 | orchestrator | skipping: [testbed-node-4] 2026-02-08 05:53:07.661560 | orchestrator | skipping: [testbed-node-5] 2026-02-08 05:53:07.661570 | orchestrator | skipping: [testbed-manager] 2026-02-08 05:53:07.661581 | orchestrator | 2026-02-08 05:53:07.661593 | orchestrator | TASK [ceph-validate : Fail on unsupported architecture] ************************ 2026-02-08 05:53:07.661606 | orchestrator | Sunday 08 February 2026 05:52:54 +0000 (0:00:01.015) 0:01:52.694 ******* 2026-02-08 05:53:07.661625 | orchestrator | skipping: [testbed-node-0] 2026-02-08 05:53:07.661643 | orchestrator | skipping: [testbed-node-1] 2026-02-08 05:53:07.661662 | orchestrator | skipping: [testbed-node-2] 2026-02-08 05:53:07.661682 | orchestrator | skipping: [testbed-node-3] 2026-02-08 05:53:07.661701 | orchestrator | skipping: [testbed-node-4] 2026-02-08 05:53:07.661720 | orchestrator | skipping: [testbed-node-5] 2026-02-08 05:53:07.661734 | orchestrator | skipping: [testbed-manager] 2026-02-08 05:53:07.661748 | orchestrator | 2026-02-08 05:53:07.661761 | orchestrator | TASK [ceph-validate : Fail on unsupported distribution] ************************ 2026-02-08 05:53:07.661774 | orchestrator | Sunday 08 February 2026 05:52:55 +0000 (0:00:00.862) 0:01:53.557 ******* 2026-02-08 05:53:07.661787 | orchestrator | skipping: [testbed-node-0] 2026-02-08 05:53:07.661799 | orchestrator | skipping: [testbed-node-1] 2026-02-08 05:53:07.661813 | orchestrator | skipping: [testbed-node-2] 2026-02-08 05:53:07.661825 | orchestrator | skipping: [testbed-node-3] 2026-02-08 05:53:07.661837 | orchestrator | skipping: [testbed-node-4] 2026-02-08 05:53:07.661850 | orchestrator | skipping: [testbed-node-5] 2026-02-08 05:53:07.661862 | orchestrator | skipping: [testbed-manager] 2026-02-08 05:53:07.661874 | orchestrator | 2026-02-08 05:53:07.661888 | orchestrator | TASK [ceph-validate : Fail on unsupported 
CentOS release] ********************** 2026-02-08 05:53:07.661900 | orchestrator | Sunday 08 February 2026 05:52:56 +0000 (0:00:01.124) 0:01:54.682 ******* 2026-02-08 05:53:07.661912 | orchestrator | skipping: [testbed-node-0] 2026-02-08 05:53:07.661926 | orchestrator | skipping: [testbed-node-1] 2026-02-08 05:53:07.661940 | orchestrator | skipping: [testbed-node-2] 2026-02-08 05:53:07.661953 | orchestrator | skipping: [testbed-node-3] 2026-02-08 05:53:07.661965 | orchestrator | skipping: [testbed-node-4] 2026-02-08 05:53:07.661975 | orchestrator | skipping: [testbed-node-5] 2026-02-08 05:53:07.661986 | orchestrator | skipping: [testbed-manager] 2026-02-08 05:53:07.661996 | orchestrator | 2026-02-08 05:53:07.662007 | orchestrator | TASK [ceph-validate : Fail on unsupported distribution for ubuntu cloud archive] *** 2026-02-08 05:53:07.662077 | orchestrator | Sunday 08 February 2026 05:52:57 +0000 (0:00:00.802) 0:01:55.484 ******* 2026-02-08 05:53:07.662091 | orchestrator | skipping: [testbed-node-0] 2026-02-08 05:53:07.662114 | orchestrator | skipping: [testbed-node-1] 2026-02-08 05:53:07.662125 | orchestrator | skipping: [testbed-node-2] 2026-02-08 05:53:07.662136 | orchestrator | skipping: [testbed-node-3] 2026-02-08 05:53:07.662146 | orchestrator | skipping: [testbed-node-4] 2026-02-08 05:53:07.662157 | orchestrator | skipping: [testbed-node-5] 2026-02-08 05:53:07.662168 | orchestrator | skipping: [testbed-manager] 2026-02-08 05:53:07.662178 | orchestrator | 2026-02-08 05:53:07.662189 | orchestrator | TASK [ceph-validate : Fail on unsupported SUSE/openSUSE distribution (only 15.x supported)] *** 2026-02-08 05:53:07.662200 | orchestrator | Sunday 08 February 2026 05:52:58 +0000 (0:00:01.023) 0:01:56.508 ******* 2026-02-08 05:53:07.662211 | orchestrator | skipping: [testbed-node-0] 2026-02-08 05:53:07.662221 | orchestrator | skipping: [testbed-node-1] 2026-02-08 05:53:07.662232 | orchestrator | skipping: [testbed-node-2] 2026-02-08 05:53:07.662243 | orchestrator | 
skipping: [testbed-node-3] 2026-02-08 05:53:07.662253 | orchestrator | skipping: [testbed-node-4] 2026-02-08 05:53:07.662264 | orchestrator | skipping: [testbed-node-5] 2026-02-08 05:53:07.662274 | orchestrator | skipping: [testbed-manager] 2026-02-08 05:53:07.662285 | orchestrator | 2026-02-08 05:53:07.662296 | orchestrator | TASK [ceph-validate : Fail if systemd is not present] ************************** 2026-02-08 05:53:07.662307 | orchestrator | Sunday 08 February 2026 05:52:59 +0000 (0:00:00.809) 0:01:57.318 ******* 2026-02-08 05:53:07.662318 | orchestrator | skipping: [testbed-node-0] 2026-02-08 05:53:07.662328 | orchestrator | skipping: [testbed-node-1] 2026-02-08 05:53:07.662339 | orchestrator | skipping: [testbed-node-2] 2026-02-08 05:53:07.662350 | orchestrator | skipping: [testbed-node-3] 2026-02-08 05:53:07.662360 | orchestrator | skipping: [testbed-node-4] 2026-02-08 05:53:07.662371 | orchestrator | skipping: [testbed-node-5] 2026-02-08 05:53:07.662381 | orchestrator | skipping: [testbed-manager] 2026-02-08 05:53:07.662392 | orchestrator | 2026-02-08 05:53:07.662403 | orchestrator | TASK [ceph-validate : Validate repository variables in non-containerized scenario] *** 2026-02-08 05:53:07.662414 | orchestrator | Sunday 08 February 2026 05:53:00 +0000 (0:00:01.094) 0:01:58.413 ******* 2026-02-08 05:53:07.662425 | orchestrator | skipping: [testbed-node-0] 2026-02-08 05:53:07.662475 | orchestrator | skipping: [testbed-node-1] 2026-02-08 05:53:07.662487 | orchestrator | skipping: [testbed-node-2] 2026-02-08 05:53:07.662498 | orchestrator | skipping: [testbed-node-3] 2026-02-08 05:53:07.662508 | orchestrator | skipping: [testbed-node-4] 2026-02-08 05:53:07.662539 | orchestrator | skipping: [testbed-node-5] 2026-02-08 05:53:07.662551 | orchestrator | skipping: [testbed-manager] 2026-02-08 05:53:07.662562 | orchestrator | 2026-02-08 05:53:07.662573 | orchestrator | TASK [ceph-validate : Validate osd_objectstore] ******************************** 2026-02-08 
05:53:07.662584 | orchestrator | Sunday 08 February 2026 05:53:01 +0000 (0:00:01.007) 0:01:59.420 *******
2026-02-08 05:53:07.662595 | orchestrator | skipping: [testbed-node-0]
2026-02-08 05:53:07.662606 | orchestrator | skipping: [testbed-node-1]
2026-02-08 05:53:07.662616 | orchestrator | skipping: [testbed-node-2]
2026-02-08 05:53:07.662627 | orchestrator | skipping: [testbed-node-3]
2026-02-08 05:53:07.662637 | orchestrator | skipping: [testbed-node-4]
2026-02-08 05:53:07.662648 | orchestrator | skipping: [testbed-node-5]
2026-02-08 05:53:07.662659 | orchestrator | skipping: [testbed-manager]
2026-02-08 05:53:07.662670 | orchestrator |
2026-02-08 05:53:07.662680 | orchestrator | TASK [ceph-validate : Validate radosgw network configuration] ******************
2026-02-08 05:53:07.662692 | orchestrator | Sunday 08 February 2026 05:53:02 +0000 (0:00:00.754) 0:02:00.175 *******
2026-02-08 05:53:07.662703 | orchestrator | skipping: [testbed-node-0]
2026-02-08 05:53:07.662713 | orchestrator | skipping: [testbed-node-1]
2026-02-08 05:53:07.662724 | orchestrator | skipping: [testbed-node-2]
2026-02-08 05:53:07.662742 | orchestrator | skipping: [testbed-node-3]
2026-02-08 05:53:07.662753 | orchestrator | skipping: [testbed-node-4]
2026-02-08 05:53:07.662763 | orchestrator | skipping: [testbed-node-5]
2026-02-08 05:53:07.662774 | orchestrator | skipping: [testbed-manager]
2026-02-08 05:53:07.662785 | orchestrator |
2026-02-08 05:53:07.662796 | orchestrator | TASK [ceph-validate : Validate lvm osd scenario] *******************************
2026-02-08 05:53:07.662814 | orchestrator | Sunday 08 February 2026 05:53:03 +0000 (0:00:00.977) 0:02:01.152 *******
2026-02-08 05:53:07.662825 | orchestrator | skipping: [testbed-node-0]
2026-02-08 05:53:07.662836 | orchestrator | skipping: [testbed-node-1]
2026-02-08 05:53:07.662846 | orchestrator | skipping: [testbed-node-2]
2026-02-08 05:53:07.662857 | orchestrator | skipping: [testbed-node-3]
2026-02-08 05:53:07.662867 | orchestrator | skipping: [testbed-node-4]
2026-02-08 05:53:07.662878 | orchestrator | skipping: [testbed-node-5]
2026-02-08 05:53:07.662889 | orchestrator | skipping: [testbed-manager]
2026-02-08 05:53:07.662899 | orchestrator |
2026-02-08 05:53:07.662910 | orchestrator | TASK [ceph-validate : Validate bluestore lvm osd scenario] *********************
2026-02-08 05:53:07.662921 | orchestrator | Sunday 08 February 2026 05:53:03 +0000 (0:00:00.759) 0:02:01.912 *******
2026-02-08 05:53:07.662932 | orchestrator | skipping: [testbed-node-0]
2026-02-08 05:53:07.662942 | orchestrator | skipping: [testbed-node-1]
2026-02-08 05:53:07.662956 | orchestrator | skipping: [testbed-node-2]
2026-02-08 05:53:07.662975 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-658e9559-2696-538a-a0a4-811fe95d0be4', 'data_vg': 'ceph-658e9559-2696-538a-a0a4-811fe95d0be4'})
2026-02-08 05:53:07.662996 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-edf9913e-48af-595a-836b-515c584cb757', 'data_vg': 'ceph-edf9913e-48af-595a-836b-515c584cb757'})
2026-02-08 05:53:07.663015 | orchestrator | skipping: [testbed-node-3]
2026-02-08 05:53:07.663034 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-1f36c880-548c-5a66-856f-2c4e799d94fc', 'data_vg': 'ceph-1f36c880-548c-5a66-856f-2c4e799d94fc'})
2026-02-08 05:53:07.663052 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-98a4cb59-dd7a-5ec9-b94d-174a40339046', 'data_vg': 'ceph-98a4cb59-dd7a-5ec9-b94d-174a40339046'})
2026-02-08 05:53:07.663069 | orchestrator | skipping: [testbed-node-4]
2026-02-08 05:53:07.663087 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-7ad89cb8-326d-5a7d-8045-6e04c12be05a', 'data_vg': 'ceph-7ad89cb8-326d-5a7d-8045-6e04c12be05a'})
2026-02-08 05:53:07.663106 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-b3e05e81-e469-5668-9a53-5e8f92025307', 'data_vg': 'ceph-b3e05e81-e469-5668-9a53-5e8f92025307'})
2026-02-08 05:53:07.663124 | orchestrator | skipping: [testbed-node-5]
2026-02-08 05:53:07.663143 | orchestrator | skipping: [testbed-manager]
2026-02-08 05:53:07.663161 | orchestrator |
2026-02-08 05:53:07.663180 | orchestrator | TASK [ceph-validate : Fail if local scenario is enabled on debian] *************
2026-02-08 05:53:07.663199 | orchestrator | Sunday 08 February 2026 05:53:04 +0000 (0:00:01.134) 0:02:03.046 *******
2026-02-08 05:53:07.663219 | orchestrator | skipping: [testbed-node-0]
2026-02-08 05:53:07.663237 | orchestrator | skipping: [testbed-node-1]
2026-02-08 05:53:07.663256 | orchestrator | skipping: [testbed-node-2]
2026-02-08 05:53:07.663275 | orchestrator | skipping: [testbed-node-3]
2026-02-08 05:53:07.663291 | orchestrator | skipping: [testbed-node-4]
2026-02-08 05:53:07.663308 | orchestrator | skipping: [testbed-node-5]
2026-02-08 05:53:07.663325 | orchestrator | skipping: [testbed-manager]
2026-02-08 05:53:07.663343 | orchestrator |
2026-02-08 05:53:07.663361 | orchestrator | TASK [ceph-validate : Fail if rhcs repository is enabled on debian] ************
2026-02-08 05:53:07.663380 | orchestrator | Sunday 08 February 2026 05:53:05 +0000 (0:00:00.837) 0:02:03.884 *******
2026-02-08 05:53:07.663398 | orchestrator | skipping: [testbed-node-0]
2026-02-08 05:53:07.663416 | orchestrator | skipping: [testbed-node-1]
2026-02-08 05:53:07.663463 | orchestrator | skipping: [testbed-node-2]
2026-02-08 05:53:07.663483 | orchestrator | skipping: [testbed-node-3]
2026-02-08 05:53:07.663503 | orchestrator | skipping: [testbed-node-4]
2026-02-08 05:53:07.663523 | orchestrator | skipping: [testbed-node-5]
2026-02-08 05:53:07.663542 | orchestrator | skipping: [testbed-manager]
2026-02-08 05:53:07.663562 | orchestrator |
2026-02-08 05:53:07.663582 | orchestrator | TASK [ceph-validate : Check ceph_origin definition on SUSE/openSUSE Leap] ******
2026-02-08 05:53:07.663621 | orchestrator | Sunday 08 February 2026 05:53:06 +0000 (0:00:01.078) 0:02:04.963 *******
2026-02-08 05:53:07.663641 | orchestrator | skipping: [testbed-node-0]
2026-02-08 05:53:07.663660 | orchestrator | skipping: [testbed-node-1]
2026-02-08 05:53:07.663677 | orchestrator | skipping: [testbed-node-2]
2026-02-08 05:53:07.663695 | orchestrator | skipping: [testbed-node-3]
2026-02-08 05:53:07.663713 | orchestrator | skipping: [testbed-node-4]
2026-02-08 05:53:07.663732 | orchestrator | skipping: [testbed-node-5]
2026-02-08 05:53:07.663751 | orchestrator | skipping: [testbed-manager]
2026-02-08 05:53:07.663768 | orchestrator |
2026-02-08 05:53:07.663805 | orchestrator | TASK [ceph-validate : Check ceph_repository definition on SUSE/openSUSE Leap] ***
2026-02-08 05:53:16.929147 | orchestrator | Sunday 08 February 2026 05:53:07 +0000 (0:00:00.737) 0:02:05.700 *******
2026-02-08 05:53:16.929254 | orchestrator | skipping: [testbed-node-0]
2026-02-08 05:53:16.929270 | orchestrator | skipping: [testbed-node-1]
2026-02-08 05:53:16.929282 | orchestrator | skipping: [testbed-node-2]
2026-02-08 05:53:16.929293 | orchestrator | skipping: [testbed-node-3]
2026-02-08 05:53:16.929304 | orchestrator | skipping: [testbed-node-4]
2026-02-08 05:53:16.929315 | orchestrator | skipping: [testbed-node-5]
2026-02-08 05:53:16.929326 | orchestrator | skipping: [testbed-manager]
2026-02-08 05:53:16.929337 | orchestrator |
2026-02-08 05:53:16.929349 | orchestrator | TASK [ceph-validate : Validate ntp daemon type] ********************************
2026-02-08 05:53:16.929360 | orchestrator | Sunday 08 February 2026 05:53:08 +0000 (0:00:01.093) 0:02:06.793 *******
2026-02-08 05:53:16.929372 | orchestrator | skipping: [testbed-node-0]
2026-02-08 05:53:16.929382 | orchestrator | skipping: [testbed-node-1]
2026-02-08 05:53:16.929393 | orchestrator | skipping: [testbed-node-2]
2026-02-08 05:53:16.929404 | orchestrator | skipping: [testbed-node-3]
2026-02-08 05:53:16.929414 | orchestrator | skipping: [testbed-node-4]
2026-02-08 05:53:16.929425 | orchestrator | skipping: [testbed-node-5]
2026-02-08 05:53:16.929502 | orchestrator | skipping: [testbed-manager]
2026-02-08 05:53:16.929517 | orchestrator |
2026-02-08 05:53:16.929529 | orchestrator | TASK [ceph-validate : Abort if ntp_daemon_type is ntpd on Atomic] **************
2026-02-08 05:53:16.929540 | orchestrator | Sunday 08 February 2026 05:53:09 +0000 (0:00:01.062) 0:02:07.856 *******
2026-02-08 05:53:16.929551 | orchestrator | skipping: [testbed-node-0]
2026-02-08 05:53:16.929562 | orchestrator | skipping: [testbed-node-1]
2026-02-08 05:53:16.929573 | orchestrator | skipping: [testbed-node-2]
2026-02-08 05:53:16.929584 | orchestrator | skipping: [testbed-node-3]
2026-02-08 05:53:16.929594 | orchestrator | skipping: [testbed-node-4]
2026-02-08 05:53:16.929605 | orchestrator | skipping: [testbed-node-5]
2026-02-08 05:53:16.929616 | orchestrator | skipping: [testbed-manager]
2026-02-08 05:53:16.929627 | orchestrator |
2026-02-08 05:53:16.929638 | orchestrator | TASK [ceph-validate : Include check_devices.yml] *******************************
2026-02-08 05:53:16.929649 | orchestrator | Sunday 08 February 2026 05:53:10 +0000 (0:00:00.793) 0:02:08.649 *******
2026-02-08 05:53:16.929660 | orchestrator | skipping: [testbed-node-0]
2026-02-08 05:53:16.929670 | orchestrator | skipping: [testbed-node-1]
2026-02-08 05:53:16.929681 | orchestrator | skipping: [testbed-node-2]
2026-02-08 05:53:16.929692 | orchestrator | skipping: [testbed-manager]
2026-02-08 05:53:16.929703 | orchestrator | included: /ansible/roles/ceph-validate/tasks/check_devices.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-02-08 05:53:16.929714 | orchestrator |
2026-02-08 05:53:16.929725 | orchestrator | TASK [ceph-validate : Set_fact root_device] ************************************
2026-02-08 05:53:16.929736 | orchestrator | Sunday 08 February 2026 05:53:12 +0000 (0:00:01.636) 0:02:10.286 *******
2026-02-08 05:53:16.929747 | orchestrator | ok: [testbed-node-3]
2026-02-08 05:53:16.929759 | orchestrator | ok: [testbed-node-4]
2026-02-08 05:53:16.929770 | orchestrator | ok: [testbed-node-5]
2026-02-08 05:53:16.929793 | orchestrator |
2026-02-08 05:53:16.929804 | orchestrator | TASK [ceph-validate : Resolve devices in lvm_volumes] **************************
2026-02-08 05:53:16.929815 | orchestrator | Sunday 08 February 2026 05:53:12 +0000 (0:00:00.378) 0:02:10.664 *******
2026-02-08 05:53:16.929852 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-658e9559-2696-538a-a0a4-811fe95d0be4', 'data_vg': 'ceph-658e9559-2696-538a-a0a4-811fe95d0be4'})
2026-02-08 05:53:16.929866 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-edf9913e-48af-595a-836b-515c584cb757', 'data_vg': 'ceph-edf9913e-48af-595a-836b-515c584cb757'})
2026-02-08 05:53:16.929877 | orchestrator | skipping: [testbed-node-3]
2026-02-08 05:53:16.929888 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-1f36c880-548c-5a66-856f-2c4e799d94fc', 'data_vg': 'ceph-1f36c880-548c-5a66-856f-2c4e799d94fc'})
2026-02-08 05:53:16.929899 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-98a4cb59-dd7a-5ec9-b94d-174a40339046', 'data_vg': 'ceph-98a4cb59-dd7a-5ec9-b94d-174a40339046'})
2026-02-08 05:53:16.929910 | orchestrator | skipping: [testbed-node-4]
2026-02-08 05:53:16.929921 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-7ad89cb8-326d-5a7d-8045-6e04c12be05a', 'data_vg': 'ceph-7ad89cb8-326d-5a7d-8045-6e04c12be05a'})
2026-02-08 05:53:16.929932 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-b3e05e81-e469-5668-9a53-5e8f92025307', 'data_vg': 'ceph-b3e05e81-e469-5668-9a53-5e8f92025307'})
2026-02-08 05:53:16.929943 | orchestrator | skipping: [testbed-node-5]
2026-02-08 05:53:16.929955 | orchestrator |
2026-02-08 05:53:16.929966 | orchestrator | TASK [ceph-validate : Set_fact lvm_volumes_data_devices] ***********************
2026-02-08 05:53:16.929977 | orchestrator | Sunday 08 February 2026 05:53:12 +0000 (0:00:00.369) 0:02:11.034 *******
2026-02-08 05:53:16.929990 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.data_vg is undefined', 'item': {'data': 'osd-block-658e9559-2696-538a-a0a4-811fe95d0be4', 'data_vg': 'ceph-658e9559-2696-538a-a0a4-811fe95d0be4'}, 'ansible_loop_var': 'item'})
2026-02-08 05:53:16.930004 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.data_vg is undefined', 'item': {'data': 'osd-block-edf9913e-48af-595a-836b-515c584cb757', 'data_vg': 'ceph-edf9913e-48af-595a-836b-515c584cb757'}, 'ansible_loop_var': 'item'})
2026-02-08 05:53:16.930015 | orchestrator | skipping: [testbed-node-3]
2026-02-08 05:53:16.930113 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.data_vg is undefined', 'item': {'data': 'osd-block-1f36c880-548c-5a66-856f-2c4e799d94fc', 'data_vg': 'ceph-1f36c880-548c-5a66-856f-2c4e799d94fc'}, 'ansible_loop_var': 'item'})
2026-02-08 05:53:16.930125 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.data_vg is undefined', 'item': {'data': 'osd-block-98a4cb59-dd7a-5ec9-b94d-174a40339046', 'data_vg': 'ceph-98a4cb59-dd7a-5ec9-b94d-174a40339046'}, 'ansible_loop_var': 'item'})
2026-02-08 05:53:16.930137 | orchestrator | skipping: [testbed-node-4]
2026-02-08 05:53:16.930155 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.data_vg is undefined', 'item': {'data': 'osd-block-7ad89cb8-326d-5a7d-8045-6e04c12be05a', 'data_vg': 'ceph-7ad89cb8-326d-5a7d-8045-6e04c12be05a'}, 'ansible_loop_var': 'item'})
2026-02-08 05:53:16.930176 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.data_vg is undefined', 'item': {'data': 'osd-block-b3e05e81-e469-5668-9a53-5e8f92025307', 'data_vg': 'ceph-b3e05e81-e469-5668-9a53-5e8f92025307'}, 'ansible_loop_var': 'item'})
2026-02-08 05:53:16.930187 | orchestrator | skipping: [testbed-node-5]
2026-02-08 05:53:16.930198 | orchestrator |
2026-02-08 05:53:16.930210 | orchestrator | TASK [ceph-validate : Fail if root_device is passed in lvm_volumes or devices] ***
2026-02-08 05:53:16.930231 | orchestrator | Sunday 08 February 2026 05:53:13 +0000 (0:00:00.638) 0:02:11.672 *******
2026-02-08 05:53:16.930242 | orchestrator | skipping: [testbed-node-3]
2026-02-08 05:53:16.930253 | orchestrator | skipping: [testbed-node-4]
2026-02-08 05:53:16.930264 | orchestrator | skipping: [testbed-node-5]
2026-02-08 05:53:16.930275 | orchestrator |
2026-02-08 05:53:16.930286 | orchestrator | TASK [ceph-validate : Get devices information] *********************************
2026-02-08 05:53:16.930297 | orchestrator | Sunday 08 February 2026 05:53:13 +0000 (0:00:00.342) 0:02:12.015 *******
2026-02-08 05:53:16.930308 | orchestrator | skipping: [testbed-node-3]
2026-02-08 05:53:16.930319 | orchestrator | skipping: [testbed-node-4]
2026-02-08 05:53:16.930329 | orchestrator | skipping: [testbed-node-5]
2026-02-08 05:53:16.930340 | orchestrator |
2026-02-08 05:53:16.930411 | orchestrator | TASK [ceph-validate : Fail if one of the devices is not a device] **************
2026-02-08 05:53:16.930422 | orchestrator | Sunday 08 February 2026 05:53:14 +0000 (0:00:00.342) 0:02:12.358 *******
2026-02-08 05:53:16.930433 | orchestrator | skipping: [testbed-node-3]
2026-02-08 05:53:16.930485 | orchestrator | skipping: [testbed-node-4]
2026-02-08 05:53:16.930496 | orchestrator | skipping: [testbed-node-5]
2026-02-08 05:53:16.930507 | orchestrator |
2026-02-08 05:53:16.930518 | orchestrator | TASK [ceph-validate : Fail when gpt header found on osd devices] ***************
2026-02-08 05:53:16.930529 | orchestrator | Sunday 08 February 2026 05:53:14 +0000 (0:00:00.334) 0:02:12.692 *******
2026-02-08 05:53:16.930540 | orchestrator | skipping: [testbed-node-3]
2026-02-08 05:53:16.930550 | orchestrator | skipping: [testbed-node-4]
2026-02-08 05:53:16.930561 | orchestrator | skipping: [testbed-node-5]
2026-02-08 05:53:16.930572 | orchestrator |
2026-02-08 05:53:16.930583 | orchestrator | TASK [ceph-validate : Check data logical volume] *******************************
2026-02-08 05:53:16.930594 | orchestrator | Sunday 08 February 2026 05:53:14 +0000 (0:00:00.325) 0:02:13.018 *******
2026-02-08 05:53:16.930605 | orchestrator | ok: [testbed-node-4] => (item={'data': 'osd-block-1f36c880-548c-5a66-856f-2c4e799d94fc', 'data_vg': 'ceph-1f36c880-548c-5a66-856f-2c4e799d94fc'})
2026-02-08 05:53:16.930618 | orchestrator | ok: [testbed-node-5] => (item={'data': 'osd-block-7ad89cb8-326d-5a7d-8045-6e04c12be05a', 'data_vg': 'ceph-7ad89cb8-326d-5a7d-8045-6e04c12be05a'})
2026-02-08 05:53:16.930629 | orchestrator | ok: [testbed-node-4] => (item={'data': 'osd-block-98a4cb59-dd7a-5ec9-b94d-174a40339046', 'data_vg': 'ceph-98a4cb59-dd7a-5ec9-b94d-174a40339046'})
2026-02-08 05:53:16.930641 | orchestrator | ok: [testbed-node-5] => (item={'data': 'osd-block-b3e05e81-e469-5668-9a53-5e8f92025307', 'data_vg': 'ceph-b3e05e81-e469-5668-9a53-5e8f92025307'})
2026-02-08 05:53:16.930652 | orchestrator | ok: [testbed-node-3] => (item={'data': 'osd-block-658e9559-2696-538a-a0a4-811fe95d0be4', 'data_vg': 'ceph-658e9559-2696-538a-a0a4-811fe95d0be4'})
2026-02-08 05:53:16.930662 | orchestrator | ok: [testbed-node-3] => (item={'data': 'osd-block-edf9913e-48af-595a-836b-515c584cb757', 'data_vg': 'ceph-edf9913e-48af-595a-836b-515c584cb757'})
2026-02-08 05:53:16.930673 | orchestrator |
2026-02-08 05:53:16.930684 | orchestrator | TASK [ceph-validate : Fail if one of the data logical volume is not a device or doesn't exist] ***
2026-02-08 05:53:16.930696 | orchestrator | Sunday 08 February 2026 05:53:16 +0000 (0:00:01.824) 0:02:14.843 *******
2026-02-08 05:53:16.930729 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stat': {'exists': True, 'path': '/dev/ceph-658e9559-2696-538a-a0a4-811fe95d0be4/osd-block-658e9559-2696-538a-a0a4-811fe95d0be4', 'mode': '0660', 'isdir': False, 'ischr': False, 'isblk': True, 'isreg': False, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 6, 'size': 0, 'inode': 957, 'dev': 6, 'nlink': 1, 'atime': 1770522692.0160499, 'mtime': 1770522692.0110497, 'ctime': 1770522692.0110497, 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': True, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': False, 'xoth': False, 'isuid': False, 'isgid': False, 'blocks': 0, 'block_size': 512, 'device_type': 64512, 'readable': True, 'writeable': True, 'executable': False, 'pw_name': 'root', 'gr_name': 'disk', 'mimetype': 'inode/symlink', 'charset': 'binary', 'version': None, 'attributes': [], 'attr_flags': ''}, 'invocation': {'module_args': {'path': '/dev/ceph-658e9559-2696-538a-a0a4-811fe95d0be4/osd-block-658e9559-2696-538a-a0a4-811fe95d0be4', 'follow': True, 'get_checksum': True, 'get_mime': True, 'get_attributes': True, 'checksum_algorithm': 'sha1'}}, 'failed': False, 'item': {'data': 'osd-block-658e9559-2696-538a-a0a4-811fe95d0be4', 'data_vg': 'ceph-658e9559-2696-538a-a0a4-811fe95d0be4'}, 'ansible_loop_var': 'item'})
2026-02-08 05:53:17.760964 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stat': {'exists': True, 'path': '/dev/ceph-edf9913e-48af-595a-836b-515c584cb757/osd-block-edf9913e-48af-595a-836b-515c584cb757', 'mode': '0660', 'isdir': False, 'ischr': False, 'isblk': True, 'isreg': False, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 6, 'size': 0, 'inode': 967, 'dev': 6, 'nlink': 1, 'atime': 1770522712.3203542, 'mtime': 1770522712.316354, 'ctime': 1770522712.316354, 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': True, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': False, 'xoth': False, 'isuid': False, 'isgid': False, 'blocks': 0, 'block_size': 512, 'device_type': 64513, 'readable': True, 'writeable': True, 'executable': False, 'pw_name': 'root', 'gr_name': 'disk', 'mimetype': 'inode/symlink', 'charset': 'binary', 'version': None, 'attributes': [], 'attr_flags': ''}, 'invocation': {'module_args': {'path': '/dev/ceph-edf9913e-48af-595a-836b-515c584cb757/osd-block-edf9913e-48af-595a-836b-515c584cb757', 'follow': True, 'get_checksum': True, 'get_mime': True, 'get_attributes': True, 'checksum_algorithm': 'sha1'}}, 'failed': False, 'item': {'data': 'osd-block-edf9913e-48af-595a-836b-515c584cb757', 'data_vg': 'ceph-edf9913e-48af-595a-836b-515c584cb757'}, 'ansible_loop_var': 'item'})
2026-02-08 05:53:17.761106 | orchestrator | skipping: [testbed-node-3]
2026-02-08 05:53:17.761127 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'stat': {'exists': True, 'path': '/dev/ceph-1f36c880-548c-5a66-856f-2c4e799d94fc/osd-block-1f36c880-548c-5a66-856f-2c4e799d94fc', 'mode': '0660', 'isdir': False, 'ischr': False, 'isblk': True, 'isreg': False, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 6, 'size': 0, 'inode': 949, 'dev': 6, 'nlink': 1, 'atime': 1770522693.2164295, 'mtime': 1770522693.2114294, 'ctime': 1770522693.2114294, 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': True, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': False, 'xoth': False, 'isuid': False, 'isgid': False, 'blocks': 0, 'block_size': 512, 'device_type': 64512, 'readable': True, 'writeable': True, 'executable': False, 'pw_name': 'root', 'gr_name': 'disk', 'mimetype': 'inode/symlink', 'charset': 'binary', 'version': None, 'attributes': [], 'attr_flags': ''}, 'invocation': {'module_args': {'path': '/dev/ceph-1f36c880-548c-5a66-856f-2c4e799d94fc/osd-block-1f36c880-548c-5a66-856f-2c4e799d94fc', 'follow': True, 'get_checksum': True, 'get_mime': True, 'get_attributes': True, 'checksum_algorithm': 'sha1'}}, 'failed': False, 'item': {'data': 'osd-block-1f36c880-548c-5a66-856f-2c4e799d94fc', 'data_vg': 'ceph-1f36c880-548c-5a66-856f-2c4e799d94fc'}, 'ansible_loop_var': 'item'})
2026-02-08 05:53:17.761160 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'stat': {'exists': True, 'path': '/dev/ceph-98a4cb59-dd7a-5ec9-b94d-174a40339046/osd-block-98a4cb59-dd7a-5ec9-b94d-174a40339046', 'mode': '0660', 'isdir': False, 'ischr': False, 'isblk': True, 'isreg': False, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 6, 'size': 0, 'inode': 959, 'dev': 6, 'nlink': 1, 'atime': 1770522713.780733, 'mtime': 1770522713.776733, 'ctime': 1770522713.776733, 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': True, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': False, 'xoth': False, 'isuid': False, 'isgid': False, 'blocks': 0, 'block_size': 512, 'device_type': 64513, 'readable': True, 'writeable': True, 'executable': False, 'pw_name': 'root', 'gr_name': 'disk', 'mimetype': 'inode/symlink', 'charset': 'binary', 'version': None, 'attributes': [], 'attr_flags': ''}, 'invocation': {'module_args': {'path': '/dev/ceph-98a4cb59-dd7a-5ec9-b94d-174a40339046/osd-block-98a4cb59-dd7a-5ec9-b94d-174a40339046', 'follow': True, 'get_checksum': True, 'get_mime': True, 'get_attributes': True, 'checksum_algorithm': 'sha1'}}, 'failed': False, 'item': {'data': 'osd-block-98a4cb59-dd7a-5ec9-b94d-174a40339046', 'data_vg': 'ceph-98a4cb59-dd7a-5ec9-b94d-174a40339046'}, 'ansible_loop_var': 'item'})
2026-02-08 05:53:17.762099 | orchestrator | skipping: [testbed-node-4]
2026-02-08 05:53:17.762150 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'stat': {'exists': True, 'path': '/dev/ceph-7ad89cb8-326d-5a7d-8045-6e04c12be05a/osd-block-7ad89cb8-326d-5a7d-8045-6e04c12be05a', 'mode': '0660', 'isdir': False, 'ischr': False, 'isblk': True, 'isreg': False, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 6, 'size': 0, 'inode': 957, 'dev': 6, 'nlink': 1, 'atime': 1770522692.2169073, 'mtime': 1770522692.2109072, 'ctime': 1770522692.2109072, 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': True, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': False, 'xoth': False, 'isuid': False, 'isgid': False, 'blocks': 0, 'block_size': 512, 'device_type': 64512, 'readable': True, 'writeable': True, 'executable': False, 'pw_name': 'root', 'gr_name': 'disk', 'mimetype': 'inode/symlink', 'charset': 'binary', 'version': None, 'attributes': [], 'attr_flags': ''}, 'invocation': {'module_args': {'path': '/dev/ceph-7ad89cb8-326d-5a7d-8045-6e04c12be05a/osd-block-7ad89cb8-326d-5a7d-8045-6e04c12be05a', 'follow': True, 'get_checksum': True, 'get_mime': True, 'get_attributes': True, 'checksum_algorithm': 'sha1'}}, 'failed': False, 'item': {'data': 'osd-block-7ad89cb8-326d-5a7d-8045-6e04c12be05a', 'data_vg': 'ceph-7ad89cb8-326d-5a7d-8045-6e04c12be05a'}, 'ansible_loop_var': 'item'})
2026-02-08 05:53:17.762165 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'stat': {'exists': True, 'path': '/dev/ceph-b3e05e81-e469-5668-9a53-5e8f92025307/osd-block-b3e05e81-e469-5668-9a53-5e8f92025307', 'mode': '0660', 'isdir': False, 'ischr': False, 'isblk': True, 'isreg': False, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 6, 'size': 0, 'inode': 967, 'dev': 6, 'nlink': 1, 'atime': 1770522710.010176, 'mtime': 1770522710.004176, 'ctime': 1770522710.004176, 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': True, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': False, 'xoth': False, 'isuid': False, 'isgid': False, 'blocks': 0, 'block_size': 512, 'device_type': 64513, 'readable': True, 'writeable': True, 'executable': False, 'pw_name': 'root', 'gr_name': 'disk', 'mimetype': 'inode/symlink', 'charset': 'binary', 'version': None, 'attributes': [], 'attr_flags': ''}, 'invocation': {'module_args': {'path': '/dev/ceph-b3e05e81-e469-5668-9a53-5e8f92025307/osd-block-b3e05e81-e469-5668-9a53-5e8f92025307', 'follow': True, 'get_checksum': True, 'get_mime': True, 'get_attributes': True, 'checksum_algorithm': 'sha1'}}, 'failed': False, 'item': {'data': 'osd-block-b3e05e81-e469-5668-9a53-5e8f92025307', 'data_vg': 'ceph-b3e05e81-e469-5668-9a53-5e8f92025307'}, 'ansible_loop_var': 'item'})
2026-02-08 05:53:17.762177 | orchestrator | skipping: [testbed-node-5]
2026-02-08 05:53:17.762189 | orchestrator |
2026-02-08 05:53:17.762201 | orchestrator | TASK [ceph-validate : Check bluestore db logical volume] ***********************
2026-02-08 05:53:17.762214 | orchestrator | Sunday 08 February 2026 05:53:17 +0000 (0:00:00.441) 0:02:15.285 *******
2026-02-08 05:53:17.762226 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-658e9559-2696-538a-a0a4-811fe95d0be4', 'data_vg': 'ceph-658e9559-2696-538a-a0a4-811fe95d0be4'})
2026-02-08 05:53:17.762238 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-edf9913e-48af-595a-836b-515c584cb757', 'data_vg': 'ceph-edf9913e-48af-595a-836b-515c584cb757'})
2026-02-08 05:53:17.762249 | orchestrator | skipping: [testbed-node-3]
2026-02-08 05:53:17.762274 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-1f36c880-548c-5a66-856f-2c4e799d94fc', 'data_vg': 'ceph-1f36c880-548c-5a66-856f-2c4e799d94fc'})
2026-02-08 05:53:17.762285 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-98a4cb59-dd7a-5ec9-b94d-174a40339046', 'data_vg': 'ceph-98a4cb59-dd7a-5ec9-b94d-174a40339046'})
2026-02-08 05:53:17.762296 | orchestrator | skipping: [testbed-node-4]
2026-02-08 05:53:17.762307 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-7ad89cb8-326d-5a7d-8045-6e04c12be05a', 'data_vg': 'ceph-7ad89cb8-326d-5a7d-8045-6e04c12be05a'})
2026-02-08 05:53:17.762318 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-b3e05e81-e469-5668-9a53-5e8f92025307', 'data_vg': 'ceph-b3e05e81-e469-5668-9a53-5e8f92025307'})
2026-02-08 05:53:17.762329 | orchestrator | skipping: [testbed-node-5]
2026-02-08 05:53:17.762340 | orchestrator |
2026-02-08 05:53:17.762359 | orchestrator | TASK [ceph-validate : Fail if one of the bluestore db logical volume is not a device or doesn't exist] ***
2026-02-08 05:53:17.762371 | orchestrator | Sunday 08 February 2026 05:53:17 +0000 (0:00:00.399) 0:02:15.685 *******
2026-02-08 05:53:17.762390 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.db is defined', 'item': {'data': 'osd-block-658e9559-2696-538a-a0a4-811fe95d0be4', 'data_vg': 'ceph-658e9559-2696-538a-a0a4-811fe95d0be4'}, 'ansible_loop_var': 'item'})
2026-02-08 05:53:21.899708 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.db is defined', 'item': {'data': 'osd-block-edf9913e-48af-595a-836b-515c584cb757', 'data_vg': 'ceph-edf9913e-48af-595a-836b-515c584cb757'}, 'ansible_loop_var': 'item'})
2026-02-08 05:53:21.899816 | orchestrator | skipping: [testbed-node-3]
2026-02-08 05:53:21.899834 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.db is defined', 'item': {'data': 'osd-block-1f36c880-548c-5a66-856f-2c4e799d94fc', 'data_vg': 'ceph-1f36c880-548c-5a66-856f-2c4e799d94fc'}, 'ansible_loop_var': 'item'})
2026-02-08 05:53:21.899847 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.db is defined', 'item': {'data': 'osd-block-98a4cb59-dd7a-5ec9-b94d-174a40339046', 'data_vg': 'ceph-98a4cb59-dd7a-5ec9-b94d-174a40339046'}, 'ansible_loop_var': 'item'})
2026-02-08 05:53:21.899859 | orchestrator | skipping: [testbed-node-4]
2026-02-08 05:53:21.899871 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.db is defined', 'item': {'data': 'osd-block-7ad89cb8-326d-5a7d-8045-6e04c12be05a', 'data_vg': 'ceph-7ad89cb8-326d-5a7d-8045-6e04c12be05a'}, 'ansible_loop_var': 'item'})
2026-02-08 05:53:21.899883 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.db is defined', 'item': {'data': 'osd-block-b3e05e81-e469-5668-9a53-5e8f92025307', 'data_vg': 'ceph-b3e05e81-e469-5668-9a53-5e8f92025307'}, 'ansible_loop_var': 'item'})
2026-02-08 05:53:21.899894 | orchestrator | skipping: [testbed-node-5]
2026-02-08 05:53:21.899906 | orchestrator |
2026-02-08 05:53:21.899919 | orchestrator | TASK [ceph-validate : Check bluestore wal logical volume] **********************
2026-02-08 05:53:21.899931 | orchestrator | Sunday 08 February 2026 05:53:18 +0000 (0:00:00.405) 0:02:16.090 *******
2026-02-08 05:53:21.899944 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-658e9559-2696-538a-a0a4-811fe95d0be4', 'data_vg': 'ceph-658e9559-2696-538a-a0a4-811fe95d0be4'})
2026-02-08 05:53:21.899958 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-edf9913e-48af-595a-836b-515c584cb757', 'data_vg': 'ceph-edf9913e-48af-595a-836b-515c584cb757'})
2026-02-08 05:53:21.899993 | orchestrator | skipping: [testbed-node-3]
2026-02-08 05:53:21.900005 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-1f36c880-548c-5a66-856f-2c4e799d94fc', 'data_vg': 'ceph-1f36c880-548c-5a66-856f-2c4e799d94fc'})
2026-02-08 05:53:21.900017 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-98a4cb59-dd7a-5ec9-b94d-174a40339046', 'data_vg': 'ceph-98a4cb59-dd7a-5ec9-b94d-174a40339046'})
2026-02-08 05:53:21.900028 | orchestrator | skipping: [testbed-node-4]
2026-02-08 05:53:21.900039 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-7ad89cb8-326d-5a7d-8045-6e04c12be05a', 'data_vg': 'ceph-7ad89cb8-326d-5a7d-8045-6e04c12be05a'})
2026-02-08 05:53:21.900050 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-b3e05e81-e469-5668-9a53-5e8f92025307', 'data_vg': 'ceph-b3e05e81-e469-5668-9a53-5e8f92025307'})
2026-02-08 05:53:21.900061 | orchestrator | skipping: [testbed-node-5]
2026-02-08 05:53:21.900072 | orchestrator |
2026-02-08 05:53:21.900084 | orchestrator | TASK [ceph-validate : Fail if one of the bluestore wal logical volume is not a device or doesn't exist] ***
2026-02-08 05:53:21.900096 | orchestrator | Sunday 08 February 2026 05:53:18 +0000 (0:00:00.631) 0:02:16.721 *******
2026-02-08 05:53:21.900108 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.wal is defined', 'item': {'data': 'osd-block-658e9559-2696-538a-a0a4-811fe95d0be4', 'data_vg': 'ceph-658e9559-2696-538a-a0a4-811fe95d0be4'}, 'ansible_loop_var': 'item'})
2026-02-08 05:53:21.900135 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.wal is defined', 'item': {'data': 'osd-block-edf9913e-48af-595a-836b-515c584cb757', 'data_vg': 'ceph-edf9913e-48af-595a-836b-515c584cb757'}, 'ansible_loop_var': 'item'})
2026-02-08 05:53:21.900147 | orchestrator | skipping: [testbed-node-3]
2026-02-08 05:53:21.900177 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.wal is defined', 'item': {'data': 'osd-block-1f36c880-548c-5a66-856f-2c4e799d94fc', 'data_vg': 'ceph-1f36c880-548c-5a66-856f-2c4e799d94fc'}, 'ansible_loop_var': 'item'})
2026-02-08 05:53:21.900190 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.wal is defined', 'item': {'data': 'osd-block-98a4cb59-dd7a-5ec9-b94d-174a40339046', 'data_vg': 'ceph-98a4cb59-dd7a-5ec9-b94d-174a40339046'}, 'ansible_loop_var': 'item'})
2026-02-08 05:53:21.900201 | orchestrator | skipping: [testbed-node-4]
2026-02-08 05:53:21.900213 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.wal is defined', 'item': {'data': 'osd-block-7ad89cb8-326d-5a7d-8045-6e04c12be05a', 'data_vg': 'ceph-7ad89cb8-326d-5a7d-8045-6e04c12be05a'}, 'ansible_loop_var': 'item'})
2026-02-08 05:53:21.900227 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.wal is defined', 'item': {'data': 'osd-block-b3e05e81-e469-5668-9a53-5e8f92025307', 'data_vg': 'ceph-b3e05e81-e469-5668-9a53-5e8f92025307'}, 'ansible_loop_var': 'item'})
2026-02-08 05:53:21.900239 | orchestrator | skipping: [testbed-node-5]
2026-02-08 05:53:21.900252 | orchestrator |
2026-02-08 05:53:21.900265 | orchestrator | TASK [ceph-validate : Include check_eth_rgw.yml] *******************************
2026-02-08 05:53:21.900278 | orchestrator | Sunday 08 February 2026 05:53:19 +0000 (0:00:00.406) 0:02:17.127 *******
2026-02-08 05:53:21.900292 | orchestrator | skipping: [testbed-node-0]
2026-02-08 05:53:21.900305 | orchestrator | skipping: [testbed-node-1]
2026-02-08 05:53:21.900317 | orchestrator | skipping: [testbed-node-2]
2026-02-08 05:53:21.900345 | orchestrator | skipping: [testbed-node-3]
2026-02-08 05:53:21.900364 | orchestrator | skipping: [testbed-node-4]
2026-02-08 05:53:21.900384 | orchestrator | skipping: [testbed-node-5]
2026-02-08 05:53:21.900404 | orchestrator | skipping: [testbed-manager]
2026-02-08 05:53:21.900422 | orchestrator |
2026-02-08 05:53:21.900436 | orchestrator | TASK [ceph-validate : Include check_rgw_pools.yml] *****************************
2026-02-08 05:53:21.900479 | orchestrator | Sunday 08 February 2026 05:53:19 +0000 (0:00:00.735) 0:02:17.862 *******
2026-02-08 05:53:21.900492 | orchestrator | skipping: [testbed-node-0]
2026-02-08 05:53:21.900505 | orchestrator | skipping: [testbed-node-1]
2026-02-08 05:53:21.900518 | orchestrator | skipping: [testbed-node-2]
2026-02-08 05:53:21.900530 | orchestrator | skipping: [testbed-manager]
2026-02-08 05:53:21.900544 | orchestrator | included: /ansible/roles/ceph-validate/tasks/check_rgw_pools.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-02-08 05:53:21.900557 | orchestrator |
2026-02-08 05:53:21.900568 | orchestrator | TASK [ceph-validate : Fail if ec_profile is not set for ec pools] **************
2026-02-08 05:53:21.900579 | orchestrator | Sunday 08 February 2026 05:53:21 +0000 (0:00:01.656) 0:02:19.519 *******
2026-02-08 05:53:21.900590 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-08 05:53:21.900602 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-08 05:53:21.900613 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-08 05:53:21.900624 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-08 05:53:21.900635 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-08 05:53:21.900646 | orchestrator | skipping: [testbed-node-3]
2026-02-08 05:53:21.900657 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-08 05:53:21.900668 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-08 05:53:21.900679 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-08 05:53:21.900690 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-08 05:53:21.900701 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-08 05:53:21.900718 | orchestrator | skipping: [testbed-node-4]
2026-02-08 05:53:21.900730 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-08 05:53:21.900741 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-08 05:53:21.900752 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-08 05:53:21.900772 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-08 05:53:29.550133 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-08 05:53:29.550253 | orchestrator | skipping: [testbed-node-5]
2026-02-08 05:53:29.550273 | orchestrator
| 2026-02-08 05:53:29.550287 | orchestrator | TASK [ceph-validate : Fail if ec_k is not set for ec pools] ******************** 2026-02-08 05:53:29.550300 | orchestrator | Sunday 08 February 2026 05:53:21 +0000 (0:00:00.421) 0:02:19.940 ******* 2026-02-08 05:53:29.550334 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-08 05:53:29.550347 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-08 05:53:29.550358 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-08 05:53:29.550371 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-08 05:53:29.550390 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-08 05:53:29.550408 | orchestrator | skipping: [testbed-node-3] 2026-02-08 05:53:29.550426 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-08 05:53:29.550521 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-08 05:53:29.550547 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-08 05:53:29.550566 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-08 05:53:29.550584 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 
'type': 'replicated'}})  2026-02-08 05:53:29.550603 | orchestrator | skipping: [testbed-node-4] 2026-02-08 05:53:29.550623 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-08 05:53:29.550643 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-08 05:53:29.550662 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-08 05:53:29.550683 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-08 05:53:29.550702 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-08 05:53:29.550721 | orchestrator | skipping: [testbed-node-5] 2026-02-08 05:53:29.550741 | orchestrator | 2026-02-08 05:53:29.550761 | orchestrator | TASK [ceph-validate : Fail if ec_m is not set for ec pools] ******************** 2026-02-08 05:53:29.550775 | orchestrator | Sunday 08 February 2026 05:53:22 +0000 (0:00:00.702) 0:02:20.643 ******* 2026-02-08 05:53:29.550790 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-08 05:53:29.550803 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-08 05:53:29.550817 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-08 05:53:29.550830 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  
2026-02-08 05:53:29.550843 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-08 05:53:29.550854 | orchestrator | skipping: [testbed-node-3] 2026-02-08 05:53:29.550880 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-08 05:53:29.550892 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-08 05:53:29.550915 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-08 05:53:29.550933 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-08 05:53:29.550951 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-08 05:53:29.550997 | orchestrator | skipping: [testbed-node-4] 2026-02-08 05:53:29.551018 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-08 05:53:29.551038 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-08 05:53:29.551056 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-08 05:53:29.551071 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-08 05:53:29.551082 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 
'replicated'}})  2026-02-08 05:53:29.551093 | orchestrator | skipping: [testbed-node-5] 2026-02-08 05:53:29.551104 | orchestrator | 2026-02-08 05:53:29.551115 | orchestrator | TASK [ceph-validate : Include check_nfs.yml] *********************************** 2026-02-08 05:53:29.551126 | orchestrator | Sunday 08 February 2026 05:53:23 +0000 (0:00:00.436) 0:02:21.079 ******* 2026-02-08 05:53:29.551137 | orchestrator | skipping: [testbed-node-0] 2026-02-08 05:53:29.551148 | orchestrator | skipping: [testbed-node-1] 2026-02-08 05:53:29.551159 | orchestrator | skipping: [testbed-node-2] 2026-02-08 05:53:29.551169 | orchestrator | skipping: [testbed-node-3] 2026-02-08 05:53:29.551181 | orchestrator | skipping: [testbed-node-4] 2026-02-08 05:53:29.551191 | orchestrator | skipping: [testbed-node-5] 2026-02-08 05:53:29.551202 | orchestrator | skipping: [testbed-manager] 2026-02-08 05:53:29.551212 | orchestrator | 2026-02-08 05:53:29.551223 | orchestrator | TASK [ceph-validate : Include check_rbdmirror.yml] ***************************** 2026-02-08 05:53:29.551234 | orchestrator | Sunday 08 February 2026 05:53:23 +0000 (0:00:00.730) 0:02:21.809 ******* 2026-02-08 05:53:29.551245 | orchestrator | skipping: [testbed-node-0] 2026-02-08 05:53:29.551256 | orchestrator | skipping: [testbed-node-1] 2026-02-08 05:53:29.551266 | orchestrator | skipping: [testbed-node-2] 2026-02-08 05:53:29.551277 | orchestrator | skipping: [testbed-node-3] 2026-02-08 05:53:29.551287 | orchestrator | skipping: [testbed-node-4] 2026-02-08 05:53:29.551305 | orchestrator | skipping: [testbed-node-5] 2026-02-08 05:53:29.551323 | orchestrator | skipping: [testbed-manager] 2026-02-08 05:53:29.551341 | orchestrator | 2026-02-08 05:53:29.551361 | orchestrator | TASK [ceph-validate : Fail if monitoring group doesn't exist] ****************** 2026-02-08 05:53:29.551379 | orchestrator | Sunday 08 February 2026 05:53:24 +0000 (0:00:01.019) 0:02:22.829 ******* 2026-02-08 05:53:29.551399 | orchestrator | skipping: 
[testbed-node-0] 2026-02-08 05:53:29.551417 | orchestrator | skipping: [testbed-node-1] 2026-02-08 05:53:29.551434 | orchestrator | skipping: [testbed-node-2] 2026-02-08 05:53:29.551484 | orchestrator | skipping: [testbed-node-3] 2026-02-08 05:53:29.551502 | orchestrator | skipping: [testbed-node-4] 2026-02-08 05:53:29.551516 | orchestrator | skipping: [testbed-node-5] 2026-02-08 05:53:29.551526 | orchestrator | skipping: [testbed-manager] 2026-02-08 05:53:29.551537 | orchestrator | 2026-02-08 05:53:29.551548 | orchestrator | TASK [ceph-validate : Fail when monitoring doesn't contain at least one node.] *** 2026-02-08 05:53:29.551560 | orchestrator | Sunday 08 February 2026 05:53:25 +0000 (0:00:00.723) 0:02:23.553 ******* 2026-02-08 05:53:29.551571 | orchestrator | skipping: [testbed-node-0] 2026-02-08 05:53:29.551595 | orchestrator | skipping: [testbed-node-1] 2026-02-08 05:53:29.551606 | orchestrator | skipping: [testbed-node-2] 2026-02-08 05:53:29.551616 | orchestrator | skipping: [testbed-node-3] 2026-02-08 05:53:29.551627 | orchestrator | skipping: [testbed-node-4] 2026-02-08 05:53:29.551638 | orchestrator | skipping: [testbed-node-5] 2026-02-08 05:53:29.551649 | orchestrator | skipping: [testbed-manager] 2026-02-08 05:53:29.551661 | orchestrator | 2026-02-08 05:53:29.551680 | orchestrator | TASK [ceph-validate : Fail when dashboard_admin_password and/or grafana_admin_password are not set] *** 2026-02-08 05:53:29.551698 | orchestrator | Sunday 08 February 2026 05:53:26 +0000 (0:00:01.019) 0:02:24.573 ******* 2026-02-08 05:53:29.551716 | orchestrator | skipping: [testbed-node-0] 2026-02-08 05:53:29.551735 | orchestrator | skipping: [testbed-node-1] 2026-02-08 05:53:29.551753 | orchestrator | skipping: [testbed-node-2] 2026-02-08 05:53:29.551773 | orchestrator | skipping: [testbed-node-3] 2026-02-08 05:53:29.551791 | orchestrator | skipping: [testbed-node-4] 2026-02-08 05:53:29.551808 | orchestrator | skipping: [testbed-node-5] 2026-02-08 05:53:29.551819 | 
orchestrator | skipping: [testbed-manager] 2026-02-08 05:53:29.551829 | orchestrator | 2026-02-08 05:53:29.551840 | orchestrator | TASK [ceph-validate : Validate container registry credentials] ***************** 2026-02-08 05:53:29.551851 | orchestrator | Sunday 08 February 2026 05:53:27 +0000 (0:00:01.024) 0:02:25.597 ******* 2026-02-08 05:53:29.551862 | orchestrator | skipping: [testbed-node-0] 2026-02-08 05:53:29.551872 | orchestrator | skipping: [testbed-node-1] 2026-02-08 05:53:29.551883 | orchestrator | skipping: [testbed-node-2] 2026-02-08 05:53:29.551893 | orchestrator | skipping: [testbed-node-3] 2026-02-08 05:53:29.551904 | orchestrator | skipping: [testbed-node-4] 2026-02-08 05:53:29.551914 | orchestrator | skipping: [testbed-node-5] 2026-02-08 05:53:29.551925 | orchestrator | skipping: [testbed-manager] 2026-02-08 05:53:29.551935 | orchestrator | 2026-02-08 05:53:29.551946 | orchestrator | TASK [ceph-validate : Validate container service and container package] ******** 2026-02-08 05:53:29.551964 | orchestrator | Sunday 08 February 2026 05:53:28 +0000 (0:00:00.747) 0:02:26.345 ******* 2026-02-08 05:53:29.551975 | orchestrator | skipping: [testbed-node-0] 2026-02-08 05:53:29.551986 | orchestrator | skipping: [testbed-node-1] 2026-02-08 05:53:29.551997 | orchestrator | skipping: [testbed-node-2] 2026-02-08 05:53:29.552007 | orchestrator | skipping: [testbed-node-3] 2026-02-08 05:53:29.552018 | orchestrator | skipping: [testbed-node-4] 2026-02-08 05:53:29.552030 | orchestrator | skipping: [testbed-node-5] 2026-02-08 05:53:29.552050 | orchestrator | skipping: [testbed-manager] 2026-02-08 05:53:29.552067 | orchestrator | 2026-02-08 05:53:29.552086 | orchestrator | TASK [ceph-validate : Validate openstack_keys key format] ********************** 2026-02-08 05:53:29.552105 | orchestrator | Sunday 08 February 2026 05:53:29 +0000 (0:00:01.112) 0:02:27.458 ******* 2026-02-08 05:53:29.552137 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mon': 
'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})  2026-02-08 05:53:31.283230 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})  2026-02-08 05:53:31.283333 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})  2026-02-08 05:53:31.283350 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})  2026-02-08 05:53:31.283363 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})  2026-02-08 05:53:31.283377 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})  2026-02-08 05:53:31.283414 | orchestrator | skipping: [testbed-node-0] 2026-02-08 05:53:31.283427 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})  2026-02-08 05:53:31.283439 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})  2026-02-08 05:53:31.283526 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})  2026-02-08 05:53:31.283539 | orchestrator | skipping: 
[testbed-node-1] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})  2026-02-08 05:53:31.283551 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})  2026-02-08 05:53:31.283562 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})  2026-02-08 05:53:31.283573 | orchestrator | skipping: [testbed-node-1] 2026-02-08 05:53:31.283584 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})  2026-02-08 05:53:31.283595 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})  2026-02-08 05:53:31.283605 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})  2026-02-08 05:53:31.283616 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})  2026-02-08 05:53:31.283627 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})  2026-02-08 05:53:31.283638 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})  
2026-02-08 05:53:31.283649 | orchestrator | skipping: [testbed-node-2] 2026-02-08 05:53:31.283660 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})  2026-02-08 05:53:31.283687 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})  2026-02-08 05:53:31.283698 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})  2026-02-08 05:53:31.283709 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})  2026-02-08 05:53:31.283739 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})  2026-02-08 05:53:31.283751 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})  2026-02-08 05:53:31.283764 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})  2026-02-08 05:53:31.283785 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})  2026-02-08 05:53:31.283799 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 
'mode': '0600', 'name': 'client.manila'})  2026-02-08 05:53:31.283812 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})  2026-02-08 05:53:31.283824 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})  2026-02-08 05:53:31.283837 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})  2026-02-08 05:53:31.283850 | orchestrator | skipping: [testbed-node-3] 2026-02-08 05:53:31.283863 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})  2026-02-08 05:53:31.283876 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})  2026-02-08 05:53:31.283889 | orchestrator | skipping: [testbed-node-4] 2026-02-08 05:53:31.283901 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})  2026-02-08 05:53:31.283914 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})  2026-02-08 05:53:31.283927 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})  2026-02-08 05:53:31.283939 | orchestrator | skipping: 
[testbed-manager] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})  2026-02-08 05:53:31.283951 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})  2026-02-08 05:53:31.283964 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})  2026-02-08 05:53:31.283977 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})  2026-02-08 05:53:31.283989 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})  2026-02-08 05:53:31.284002 | orchestrator | skipping: [testbed-manager] 2026-02-08 05:53:31.284023 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})  2026-02-08 05:53:31.284043 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})  2026-02-08 05:53:31.284060 | orchestrator | skipping: [testbed-node-5] 2026-02-08 05:53:31.284087 | orchestrator | 2026-02-08 05:53:31.284108 | orchestrator | TASK [ceph-validate : Validate clients keys key format] ************************ 2026-02-08 05:53:31.284127 | orchestrator | Sunday 08 February 2026 05:53:30 +0000 (0:00:01.036) 0:02:28.494 ******* 2026-02-08 05:53:31.284143 | orchestrator | skipping: [testbed-node-0] 2026-02-08 
05:53:31.284161 | orchestrator | skipping: [testbed-node-1] 2026-02-08 05:53:31.284180 | orchestrator | skipping: [testbed-node-2] 2026-02-08 05:53:31.284206 | orchestrator | skipping: [testbed-node-3] 2026-02-08 05:53:32.593857 | orchestrator | skipping: [testbed-node-4] 2026-02-08 05:53:32.593961 | orchestrator | skipping: [testbed-node-5] 2026-02-08 05:53:32.593976 | orchestrator | skipping: [testbed-manager] 2026-02-08 05:53:32.593988 | orchestrator | 2026-02-08 05:53:32.594001 | orchestrator | TASK [ceph-validate : Validate openstack_keys caps] **************************** 2026-02-08 05:53:32.594013 | orchestrator | Sunday 08 February 2026 05:53:31 +0000 (0:00:01.121) 0:02:29.615 ******* 2026-02-08 05:53:32.594101 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})  2026-02-08 05:53:32.594114 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})  2026-02-08 05:53:32.594127 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})  2026-02-08 05:53:32.594140 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})  2026-02-08 05:53:32.594152 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})  2026-02-08 05:53:32.594165 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 
'client.manila'})  2026-02-08 05:53:32.594176 | orchestrator | skipping: [testbed-node-0] 2026-02-08 05:53:32.594188 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})  2026-02-08 05:53:32.594199 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})  2026-02-08 05:53:32.594210 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})  2026-02-08 05:53:32.594221 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})  2026-02-08 05:53:32.594232 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})  2026-02-08 05:53:32.594243 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})  2026-02-08 05:53:32.594254 | orchestrator | skipping: [testbed-node-1] 2026-02-08 05:53:32.594265 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})  2026-02-08 05:53:32.594277 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})  2026-02-08 05:53:32.594313 | orchestrator | skipping: [testbed-node-2] => (item={'caps': 
{'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})
2026-02-08 05:53:32.594325 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})
2026-02-08 05:53:32.594350 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})
2026-02-08 05:53:32.594362 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})
2026-02-08 05:53:32.594373 | orchestrator | skipping: [testbed-node-2]
2026-02-08 05:53:32.594384 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})
2026-02-08 05:53:32.594414 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})
2026-02-08 05:53:32.594426 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})
2026-02-08 05:53:32.594438 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})
2026-02-08 05:53:32.594475 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})
2026-02-08 05:53:32.594488 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})
2026-02-08 05:53:32.594499 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})
2026-02-08 05:53:32.594510 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})
2026-02-08 05:53:32.594522 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})
2026-02-08 05:53:32.594532 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})
2026-02-08 05:53:32.594543 | orchestrator | skipping: [testbed-node-3]
2026-02-08 05:53:32.594554 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})
2026-02-08 05:53:32.594565 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})
2026-02-08 05:53:32.594576 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})
2026-02-08 05:53:32.594586 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})
2026-02-08 05:53:32.594597 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})
2026-02-08 05:53:32.594617 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})
2026-02-08 05:53:32.594628 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})
2026-02-08 05:53:32.594639 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})
2026-02-08 05:53:32.594649 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})
2026-02-08 05:53:32.594661 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})
2026-02-08 05:53:32.594671 | orchestrator | skipping: [testbed-manager]
2026-02-08 05:53:32.594688 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})
2026-02-08 05:53:32.594699 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})
2026-02-08 05:53:32.594710 | orchestrator | skipping: [testbed-node-4]
2026-02-08 05:53:32.594721 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})
2026-02-08 05:53:32.594740 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})
2026-02-08 05:53:59.601989 | orchestrator | skipping: [testbed-node-5]
2026-02-08 05:53:59.602147 | orchestrator |
2026-02-08 05:53:59.602164 | orchestrator | TASK [ceph-validate : Validate clients keys caps] ******************************
2026-02-08 05:53:59.602176 | orchestrator | Sunday 08 February 2026 05:53:32 +0000 (0:00:01.011) 0:02:30.627 *******
2026-02-08 05:53:59.602186 | orchestrator | skipping: [testbed-node-0]
2026-02-08 05:53:59.602197 | orchestrator | skipping: [testbed-node-1]
2026-02-08 05:53:59.602206 | orchestrator | skipping: [testbed-node-2]
2026-02-08 05:53:59.602216 | orchestrator | skipping: [testbed-node-3]
2026-02-08 05:53:59.602226 | orchestrator | skipping: [testbed-node-4]
2026-02-08 05:53:59.602235 | orchestrator | skipping: [testbed-node-5]
2026-02-08 05:53:59.602244 | orchestrator | skipping: [testbed-manager]
2026-02-08 05:53:59.602254 | orchestrator |
2026-02-08 05:53:59.602264 | orchestrator | TASK [ceph-validate : Check virtual_ips is defined] ****************************
2026-02-08 05:53:59.602274 | orchestrator | Sunday 08 February 2026 05:53:33 +0000 (0:00:01.036) 0:02:31.664 *******
2026-02-08 05:53:59.602283 | orchestrator | skipping: [testbed-node-0]
2026-02-08 05:53:59.602293 | orchestrator | skipping: [testbed-node-1]
2026-02-08 05:53:59.602303 | orchestrator | skipping: [testbed-node-2]
2026-02-08 05:53:59.602312 | orchestrator | skipping: [testbed-node-3]
2026-02-08 05:53:59.602322 | orchestrator | skipping: [testbed-node-4]
2026-02-08 05:53:59.602332 | orchestrator | skipping: [testbed-node-5]
2026-02-08 05:53:59.602341 | orchestrator | skipping: [testbed-manager]
2026-02-08 05:53:59.602351 | orchestrator |
2026-02-08 05:53:59.602360 | orchestrator | TASK [ceph-validate : Validate virtual_ips length] *****************************
2026-02-08 05:53:59.602370 | orchestrator | Sunday 08 February 2026 05:53:34 +0000 (0:00:01.022) 0:02:32.686 *******
2026-02-08 05:53:59.602380 | orchestrator | skipping: [testbed-node-0]
2026-02-08 05:53:59.602389 | orchestrator | skipping: [testbed-node-1]
2026-02-08 05:53:59.602419 | orchestrator | skipping: [testbed-node-2]
2026-02-08 05:53:59.602430 | orchestrator | skipping: [testbed-node-3]
2026-02-08 05:53:59.602439 | orchestrator | skipping: [testbed-node-4]
2026-02-08 05:53:59.602449 | orchestrator | skipping: [testbed-node-5]
2026-02-08 05:53:59.602458 | orchestrator | skipping: [testbed-manager]
2026-02-08 05:53:59.602498 | orchestrator |
2026-02-08 05:53:59.602508 | orchestrator | TASK [ceph-container-engine : Include pre_requisites/prerequisites.yml] ********
2026-02-08 05:53:59.602518 | orchestrator | Sunday 08 February 2026 05:53:36 +0000 (0:00:01.659) 0:02:34.346 *******
2026-02-08 05:53:59.602529 | orchestrator | included: /ansible/roles/ceph-container-engine/tasks/pre_requisites/prerequisites.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager
2026-02-08 05:53:59.602543 | orchestrator |
2026-02-08 05:53:59.602555 | orchestrator | TASK [ceph-container-engine : Include specific variables] **********************
2026-02-08 05:53:59.602568 | orchestrator | Sunday 08 February 2026 05:53:38 +0000 (0:00:02.032) 0:02:36.378 *******
2026-02-08 05:53:59.602585 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/ceph-container-engine/vars/Debian.yml)
2026-02-08 05:53:59.602603 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/ceph-container-engine/vars/Debian.yml)
2026-02-08 05:53:59.602620 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/ceph-container-engine/vars/Debian.yml)
2026-02-08 05:53:59.602635 | orchestrator | ok: [testbed-node-3] => (item=/ansible/roles/ceph-container-engine/vars/Debian.yml)
2026-02-08 05:53:59.602651 | orchestrator | ok: [testbed-node-4] => (item=/ansible/roles/ceph-container-engine/vars/Debian.yml)
2026-02-08 05:53:59.602666 | orchestrator | ok: [testbed-node-5] => (item=/ansible/roles/ceph-container-engine/vars/Debian.yml)
2026-02-08 05:53:59.602683 | orchestrator | ok: [testbed-manager] => (item=/ansible/roles/ceph-container-engine/vars/Debian.yml)
2026-02-08 05:53:59.602699 | orchestrator |
2026-02-08 05:53:59.602714 | orchestrator | TASK [ceph-container-engine : Create the systemd docker override directory] ****
2026-02-08 05:53:59.602729 | orchestrator | Sunday 08 February 2026 05:53:39 +0000 (0:00:00.976) 0:02:37.355 *******
2026-02-08 05:53:59.602745 | orchestrator | skipping: [testbed-node-0]
2026-02-08 05:53:59.602762 | orchestrator | skipping: [testbed-node-1]
2026-02-08 05:53:59.602779 | orchestrator | skipping: [testbed-node-2]
2026-02-08 05:53:59.602796 | orchestrator | skipping: [testbed-node-3]
2026-02-08 05:53:59.602812 | orchestrator | skipping: [testbed-node-4]
2026-02-08 05:53:59.602822 | orchestrator | skipping: [testbed-node-5]
2026-02-08 05:53:59.602832 | orchestrator | skipping: [testbed-manager]
2026-02-08 05:53:59.602842 | orchestrator |
2026-02-08 05:53:59.602851 | orchestrator | TASK [ceph-container-engine : Create the systemd docker override file] *********
2026-02-08 05:53:59.602861 | orchestrator | Sunday 08 February 2026 05:53:40 +0000 (0:00:01.119) 0:02:38.474 *******
2026-02-08 05:53:59.602871 | orchestrator | skipping: [testbed-node-0]
2026-02-08 05:53:59.602881 | orchestrator | skipping: [testbed-node-1]
2026-02-08 05:53:59.602890 | orchestrator | skipping: [testbed-node-2]
2026-02-08 05:53:59.602900 | orchestrator | skipping: [testbed-node-3]
2026-02-08 05:53:59.602910 | orchestrator | skipping: [testbed-node-4]
2026-02-08 05:53:59.602933 | orchestrator | skipping: [testbed-node-5]
2026-02-08 05:53:59.602943 | orchestrator | skipping: [testbed-manager]
2026-02-08 05:53:59.602953 | orchestrator |
2026-02-08 05:53:59.602963 | orchestrator | TASK [ceph-container-engine : Remove docker proxy configuration] ***************
2026-02-08 05:53:59.602973 | orchestrator | Sunday 08 February 2026 05:53:41 +0000 (0:00:00.824) 0:02:39.299 *******
2026-02-08 05:53:59.602982 | orchestrator | ok: [testbed-node-0]
2026-02-08 05:53:59.602993 | orchestrator | ok: [testbed-node-1]
2026-02-08 05:53:59.603003 | orchestrator | ok: [testbed-node-2]
2026-02-08 05:53:59.603012 | orchestrator | ok: [testbed-node-3]
2026-02-08 05:53:59.603022 | orchestrator | ok: [testbed-node-4]
2026-02-08 05:53:59.603031 | orchestrator | ok: [testbed-node-5]
2026-02-08 05:53:59.603041 | orchestrator | ok: [testbed-manager]
2026-02-08 05:53:59.603060 | orchestrator |
2026-02-08 05:53:59.603070 | orchestrator | TASK [ceph-container-engine : Restart docker] **********************************
2026-02-08 05:53:59.603093 | orchestrator | Sunday 08 February 2026 05:53:42 +0000 (0:00:01.512) 0:02:40.812 *******
2026-02-08 05:53:59.603103 | orchestrator | skipping: [testbed-node-0]
2026-02-08 05:53:59.603113 | orchestrator | skipping: [testbed-node-1]
2026-02-08 05:53:59.603123 | orchestrator | skipping: [testbed-node-2]
2026-02-08 05:53:59.603136 | orchestrator | skipping: [testbed-node-3]
2026-02-08 05:53:59.603163 | orchestrator | skipping: [testbed-node-4]
2026-02-08 05:53:59.603174 | orchestrator | skipping: [testbed-node-5]
2026-02-08 05:53:59.603183 | orchestrator | skipping: [testbed-manager]
2026-02-08 05:53:59.603193 | orchestrator |
2026-02-08 05:53:59.603203 | orchestrator | TASK [ceph-container-common : Container registry authentication] ***************
2026-02-08 05:53:59.603213 | orchestrator | Sunday 08 February 2026 05:53:44 +0000 (0:00:01.532) 0:02:42.344 *******
2026-02-08 05:53:59.603230 | orchestrator | skipping: [testbed-node-0]
2026-02-08 05:53:59.603246 | orchestrator | skipping: [testbed-node-1]
2026-02-08 05:53:59.603263 | orchestrator | skipping: [testbed-node-2]
2026-02-08 05:53:59.603280 | orchestrator | skipping: [testbed-node-3]
2026-02-08 05:53:59.603297 | orchestrator | skipping: [testbed-node-4]
2026-02-08 05:53:59.603314 | orchestrator | skipping: [testbed-node-5]
2026-02-08 05:53:59.603331 | orchestrator | skipping: [testbed-manager]
2026-02-08 05:53:59.603347 | orchestrator |
2026-02-08 05:53:59.603358 | orchestrator | TASK [Get the ceph release being deployed] *************************************
2026-02-08 05:53:59.603368 | orchestrator | Sunday 08 February 2026 05:53:45 +0000 (0:00:01.639) 0:02:43.984 *******
2026-02-08 05:53:59.603378 | orchestrator | ok: [testbed-node-0]
2026-02-08 05:53:59.603387 | orchestrator |
2026-02-08 05:53:59.603397 | orchestrator | TASK [Check ceph release being deployed] ***************************************
2026-02-08 05:53:59.603407 | orchestrator | Sunday 08 February 2026 05:53:47 +0000 (0:00:01.699) 0:02:45.684 *******
2026-02-08 05:53:59.603417 | orchestrator | skipping: [testbed-node-0]
2026-02-08 05:53:59.603426 | orchestrator |
2026-02-08 05:53:59.603436 | orchestrator | PLAY [Ensure cluster config is applied] ****************************************
2026-02-08 05:53:59.603446 | orchestrator |
2026-02-08 05:53:59.603455 | orchestrator | TASK [ceph-facts : Check if podman binary is present] **************************
2026-02-08 05:53:59.603520 | orchestrator | Sunday 08 February 2026 05:53:48 +0000 (0:00:00.810) 0:02:46.494 *******
2026-02-08 05:53:59.603532 | orchestrator | ok: [testbed-node-0]
2026-02-08 05:53:59.603541 | orchestrator |
2026-02-08 05:53:59.603551 | orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
2026-02-08 05:53:59.603561 | orchestrator | Sunday 08 February 2026 05:53:48 +0000 (0:00:00.473) 0:02:46.967 *******
2026-02-08 05:53:59.603570 | orchestrator | ok: [testbed-node-0]
2026-02-08 05:53:59.603580 | orchestrator |
2026-02-08 05:53:59.603589 | orchestrator | TASK [Set cluster configs] *****************************************************
2026-02-08 05:53:59.603599 | orchestrator | Sunday 08 February 2026 05:53:49 +0000 (0:00:00.560) 0:02:47.528 *******
2026-02-08 05:53:59.603610 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__54010be108205bd9450ab34ee8857f1f35bfaf4e'}}, {'key': 'public_network', 'value': '192.168.16.0/20'}])
2026-02-08 05:53:59.603623 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__54010be108205bd9450ab34ee8857f1f35bfaf4e'}}, {'key': 'cluster_network', 'value': '192.168.16.0/20'}])
2026-02-08 05:53:59.603633 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__54010be108205bd9450ab34ee8857f1f35bfaf4e'}}, {'key': 'osd_pool_default_crush_rule', 'value': -1}])
2026-02-08 05:53:59.603651 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__54010be108205bd9450ab34ee8857f1f35bfaf4e'}}, {'key': 'ms_bind_ipv6', 'value': 'False'}])
2026-02-08 05:53:59.603670 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__54010be108205bd9450ab34ee8857f1f35bfaf4e'}}, {'key': 'ms_bind_ipv4', 'value': 'True'}])
2026-02-08 05:53:59.603681 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__54010be108205bd9450ab34ee8857f1f35bfaf4e'}}, {'key': 'osd_crush_chooseleaf_type', 'value': '__omit_place_holder__54010be108205bd9450ab34ee8857f1f35bfaf4e'}])
2026-02-08 05:53:59.603692 | orchestrator |
2026-02-08 05:53:59.603702 | orchestrator | PLAY [Upgrade ceph mon cluster] ************************************************
2026-02-08 05:53:59.603712 | orchestrator |
2026-02-08 05:53:59.603722 | orchestrator | TASK [Remove ceph aliases] *****************************************************
2026-02-08 05:53:59.603741 | orchestrator | Sunday 08 February 2026 05:53:59 +0000 (0:00:10.104) 0:02:57.633 *******
2026-02-08 05:54:07.656168 | orchestrator | ok: [testbed-node-0]
2026-02-08 05:54:07.656281 | orchestrator |
2026-02-08 05:54:07.656300 | orchestrator | TASK [Set mon_host_count] ******************************************************
2026-02-08 05:54:07.656313 | orchestrator | Sunday 08 February 2026 05:54:00 +0000 (0:00:00.527) 0:02:58.161 *******
2026-02-08 05:54:07.656325 | orchestrator | ok: [testbed-node-0]
2026-02-08 05:54:07.656336 | orchestrator |
2026-02-08 05:54:07.656348 | orchestrator | TASK [Fail when less than three monitors] **************************************
2026-02-08 05:54:07.656359 | orchestrator | Sunday 08 February 2026 05:54:00 +0000 (0:00:00.156) 0:02:58.317 *******
2026-02-08 05:54:07.656371 | orchestrator | skipping: [testbed-node-0]
2026-02-08 05:54:07.656384 | orchestrator |
2026-02-08 05:54:07.656395 | orchestrator | TASK [Select a running monitor] ************************************************
2026-02-08 05:54:07.656406 | orchestrator | Sunday 08 February 2026 05:54:00 +0000 (0:00:00.153) 0:02:58.470 *******
2026-02-08 05:54:07.656417 | orchestrator | ok: [testbed-node-0]
2026-02-08 05:54:07.656428 | orchestrator |
2026-02-08 05:54:07.656440 | orchestrator | TASK [ceph-facts : Include facts.yml] ******************************************
2026-02-08 05:54:07.656451 | orchestrator | Sunday 08 February 2026 05:54:00 +0000 (0:00:00.167) 0:02:58.638 *******
2026-02-08 05:54:07.656462 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-0
2026-02-08 05:54:07.656545 | orchestrator |
2026-02-08 05:54:07.656558 | orchestrator | TASK [ceph-facts : Check if it is atomic host] *********************************
2026-02-08 05:54:07.656570 | orchestrator | Sunday 08 February 2026 05:54:00 +0000 (0:00:00.243) 0:02:58.882 *******
2026-02-08 05:54:07.656581 | orchestrator | ok: [testbed-node-0]
2026-02-08 05:54:07.656592 | orchestrator |
2026-02-08 05:54:07.656603 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] *****************************************
2026-02-08 05:54:07.656614 | orchestrator | Sunday 08 February 2026 05:54:01 +0000 (0:00:00.579) 0:02:59.461 *******
2026-02-08 05:54:07.656626 | orchestrator | ok: [testbed-node-0]
2026-02-08 05:54:07.656637 | orchestrator |
2026-02-08 05:54:07.656648 | orchestrator | TASK [ceph-facts : Check if podman binary is present] **************************
2026-02-08 05:54:07.656659 | orchestrator | Sunday 08 February 2026 05:54:01 +0000 (0:00:00.128) 0:02:59.589 *******
2026-02-08 05:54:07.656692 | orchestrator | ok: [testbed-node-0]
2026-02-08 05:54:07.656706 | orchestrator |
2026-02-08 05:54:07.656719 | orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
2026-02-08 05:54:07.656732 | orchestrator | Sunday 08 February 2026 05:54:02 +0000 (0:00:00.467) 0:03:00.057 *******
2026-02-08 05:54:07.656746 | orchestrator | ok: [testbed-node-0]
2026-02-08 05:54:07.656759 | orchestrator |
2026-02-08 05:54:07.656772 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ******************************************
2026-02-08 05:54:07.656784 | orchestrator | Sunday 08 February 2026 05:54:02 +0000 (0:00:00.411) 0:03:00.468 *******
2026-02-08 05:54:07.656798 | orchestrator | ok: [testbed-node-0]
2026-02-08 05:54:07.656811 | orchestrator |
2026-02-08 05:54:07.656824 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] *********************
2026-02-08 05:54:07.656837 | orchestrator | Sunday 08 February 2026 05:54:02 +0000 (0:00:00.144) 0:03:00.613 *******
2026-02-08 05:54:07.656850 | orchestrator | ok: [testbed-node-0]
2026-02-08 05:54:07.656862 | orchestrator |
2026-02-08 05:54:07.656873 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] ***
2026-02-08 05:54:07.656884 | orchestrator | Sunday 08 February 2026 05:54:02 +0000 (0:00:00.167) 0:03:00.781 *******
2026-02-08 05:54:07.656895 | orchestrator | skipping: [testbed-node-0]
2026-02-08 05:54:07.656906 | orchestrator |
2026-02-08 05:54:07.656917 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ******************
2026-02-08 05:54:07.656928 | orchestrator | Sunday 08 February 2026 05:54:02 +0000 (0:00:00.164) 0:03:00.945 *******
2026-02-08 05:54:07.656939 | orchestrator | ok: [testbed-node-0]
2026-02-08 05:54:07.656950 | orchestrator |
2026-02-08 05:54:07.656961 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************
2026-02-08 05:54:07.656972 | orchestrator | Sunday 08 February 2026 05:54:03 +0000 (0:00:00.140) 0:03:01.085 *******
2026-02-08 05:54:07.656983 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-02-08 05:54:07.656994 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-08 05:54:07.657006 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-08 05:54:07.657016 | orchestrator |
2026-02-08 05:54:07.657027 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ********************************
2026-02-08 05:54:07.657038 | orchestrator | Sunday 08 February 2026 05:54:03 +0000 (0:00:00.663) 0:03:01.749 *******
2026-02-08 05:54:07.657049 | orchestrator | ok: [testbed-node-0]
2026-02-08 05:54:07.657060 | orchestrator |
2026-02-08 05:54:07.657071 | orchestrator | TASK [ceph-facts : Find a running mon container] *******************************
2026-02-08 05:54:07.657082 | orchestrator | Sunday 08 February 2026 05:54:03 +0000 (0:00:00.260) 0:03:02.009 *******
2026-02-08 05:54:07.657093 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-02-08 05:54:07.657118 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-08 05:54:07.657130 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-08 05:54:07.657140 | orchestrator |
2026-02-08 05:54:07.657151 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ********************************
2026-02-08 05:54:07.657162 | orchestrator | Sunday 08 February 2026 05:54:05 +0000 (0:00:01.894) 0:03:03.904 *******
2026-02-08 05:54:07.657173 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-02-08 05:54:07.657184 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-02-08 05:54:07.657195 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-02-08 05:54:07.657206 | orchestrator | skipping: [testbed-node-0]
2026-02-08 05:54:07.657217 | orchestrator |
2026-02-08 05:54:07.657228 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] *********************
2026-02-08 05:54:07.657239 | orchestrator | Sunday 08 February 2026 05:54:06 +0000 (0:00:00.427) 0:03:04.332 *******
2026-02-08 05:54:07.657266 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-02-08 05:54:07.657290 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-02-08 05:54:07.657302 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-02-08 05:54:07.657313 | orchestrator | skipping: [testbed-node-0]
2026-02-08 05:54:07.657325 | orchestrator |
2026-02-08 05:54:07.657336 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] ***********************
2026-02-08 05:54:07.657347 | orchestrator | Sunday 08 February 2026 05:54:07 +0000 (0:00:00.991) 0:03:05.323 *******
2026-02-08 05:54:07.657360 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-02-08 05:54:07.657375 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-02-08 05:54:07.657387 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-02-08 05:54:07.657398 | orchestrator | skipping: [testbed-node-0]
2026-02-08 05:54:07.657409 | orchestrator |
2026-02-08 05:54:07.657420 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] ***************************
2026-02-08 05:54:07.657431 | orchestrator | Sunday 08 February 2026 05:54:07 +0000 (0:00:00.169) 0:03:05.493 *******
2026-02-08 05:54:07.657445 | orchestrator | ok: [testbed-node-0] => (item={'changed': False, 'stdout': '814c3ba0cfa5', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-02-08 05:54:04.504109', 'end': '2026-02-08 05:54:04.569659', 'delta': '0:00:00.065550', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['814c3ba0cfa5'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-02-08 05:54:07.657465 | orchestrator | ok: [testbed-node-0] => (item={'changed': False, 'stdout': 'd108d94fad94', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-02-08 05:54:05.067285', 'end': '2026-02-08 05:54:05.110629', 'delta': '0:00:00.043344', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['d108d94fad94'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-02-08 05:54:07.657525 | orchestrator | ok: [testbed-node-0] => (item={'changed': False, 'stdout': '83b6b87b68f7', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-02-08 05:54:05.641753', 'end': '2026-02-08 05:54:05.707098', 'delta': '0:00:00.065345', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['83b6b87b68f7'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-02-08 05:54:12.129995 | orchestrator |
2026-02-08 05:54:12.130210 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] *******************************
2026-02-08 05:54:12.130233 | orchestrator | Sunday 08 February 2026 05:54:07 +0000 (0:00:00.204) 0:03:05.697 *******
2026-02-08 05:54:12.130249 | orchestrator | ok: [testbed-node-0]
2026-02-08 05:54:12.130263 | orchestrator |
2026-02-08 05:54:12.130277 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] *************
2026-02-08 05:54:12.130291 | orchestrator | Sunday 08 February 2026 05:54:07 +0000 (0:00:00.275) 0:03:05.973 *******
2026-02-08 05:54:12.130304 | orchestrator | skipping: [testbed-node-0]
2026-02-08 05:54:12.130319 | orchestrator |
2026-02-08 05:54:12.130333 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] *********************************
2026-02-08 05:54:12.130347 | orchestrator | Sunday 08 February 2026 05:54:08 +0000 (0:00:00.884) 0:03:06.857 *******
2026-02-08 05:54:12.130361 | orchestrator | ok: [testbed-node-0]
2026-02-08 05:54:12.130400 | orchestrator |
2026-02-08 05:54:12.130410 | orchestrator | TASK [ceph-facts : Get current fsid] *******************************************
2026-02-08 05:54:12.130419 | orchestrator | Sunday 08 February 2026 05:54:08 +0000 (0:00:00.170) 0:03:07.028 *******
2026-02-08 05:54:12.130427 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)]
2026-02-08 05:54:12.130435 | orchestrator |
2026-02-08 05:54:12.130443 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-02-08 05:54:12.130451 | orchestrator | Sunday 08 February 2026 05:54:10 +0000 (0:00:01.067) 0:03:08.095 *******
2026-02-08 05:54:12.130459 | orchestrator | ok: [testbed-node-0]
2026-02-08 05:54:12.130467 | orchestrator |
2026-02-08 05:54:12.130504 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] ****************************
2026-02-08 05:54:12.130513 | orchestrator | Sunday 08 February 2026 05:54:10 +0000 (0:00:00.154) 0:03:08.250 *******
2026-02-08 05:54:12.130522 | orchestrator | skipping: [testbed-node-0]
2026-02-08 05:54:12.130531 | orchestrator |
2026-02-08 05:54:12.130541 | orchestrator | TASK [ceph-facts : Generate cluster fsid] **************************************
2026-02-08 05:54:12.130551 | orchestrator | Sunday 08 February 2026 05:54:10 +0000 (0:00:00.129) 0:03:08.380 *******
2026-02-08 05:54:12.130561 | orchestrator | skipping: [testbed-node-0]
2026-02-08 05:54:12.130570 | orchestrator |
2026-02-08 05:54:12.130580 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-02-08 05:54:12.130625 | orchestrator | Sunday 08 February 2026 05:54:10 +0000 (0:00:00.250) 0:03:08.630 *******
2026-02-08 05:54:12.130636 | orchestrator | skipping: [testbed-node-0]
2026-02-08 05:54:12.130645 | orchestrator |
2026-02-08 05:54:12.130655 | orchestrator | TASK [ceph-facts : Resolve device link(s)] *************************************
2026-02-08 05:54:12.130664 | orchestrator | Sunday 08 February 2026 05:54:10 +0000 (0:00:00.139) 0:03:08.769 *******
2026-02-08 05:54:12.130749 | orchestrator | skipping: [testbed-node-0]
2026-02-08 05:54:12.130761 | orchestrator |
2026-02-08 05:54:12.130771 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] **************
2026-02-08 05:54:12.130780 | orchestrator | Sunday 08 February 2026 05:54:10 +0000 (0:00:00.135) 0:03:08.905 *******
2026-02-08 05:54:12.130790 | orchestrator | skipping: [testbed-node-0]
2026-02-08 05:54:12.130832 | orchestrator |
2026-02-08 05:54:12.130846 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] ***************************
2026-02-08 05:54:12.130858 | orchestrator | Sunday 08 February 2026 05:54:10 +0000 (0:00:00.141) 0:03:09.047 *******
2026-02-08 05:54:12.130870 | orchestrator | skipping: [testbed-node-0]
2026-02-08 05:54:12.130884 | orchestrator |
2026-02-08 05:54:12.130896 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] ****
2026-02-08 05:54:12.130909 | orchestrator | Sunday 08 February 2026 05:54:11 +0000 (0:00:00.154) 0:03:09.201 *******
2026-02-08 05:54:12.130922 | orchestrator | skipping: [testbed-node-0]
2026-02-08 05:54:12.130935 | orchestrator |
2026-02-08 05:54:12.130947 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] ***********************
2026-02-08 05:54:12.130960 | orchestrator | Sunday 08 February 2026 05:54:11 +0000 (0:00:00.142) 0:03:09.344 *******
2026-02-08 05:54:12.130973 | orchestrator | skipping: [testbed-node-0]
2026-02-08 05:54:12.130986 | orchestrator |
2026-02-08 05:54:12.130999 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] ***
2026-02-08 05:54:12.131013 | orchestrator | Sunday 08 February 2026 05:54:11 +0000 (0:00:00.146) 0:03:09.490 *******
2026-02-08 05:54:12.131044 | orchestrator | skipping: [testbed-node-0]
2026-02-08 05:54:12.131058 | orchestrator |
2026-02-08 05:54:12.131072 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************
2026-02-08 05:54:12.131083 | orchestrator | Sunday 08 February 2026 05:54:11 +0000 (0:00:00.143) 0:03:09.634 *******
2026-02-08 05:54:12.131099 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-08 05:54:12.131116 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-08 05:54:12.131157 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-08 05:54:12.131176 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-08-02-32-50-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})
2026-02-08 05:54:12.131193 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-08 05:54:12.131207 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-08 05:54:12.131234 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode':
'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-08 05:54:12.131270 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3e566a5b-bb0c-4ece-9641-6f7efc673353', 'scsi-SQEMU_QEMU_HARDDISK_3e566a5b-bb0c-4ece-9641-6f7efc673353'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '3e566a5b', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3e566a5b-bb0c-4ece-9641-6f7efc673353-part16', 'scsi-SQEMU_QEMU_HARDDISK_3e566a5b-bb0c-4ece-9641-6f7efc673353-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3e566a5b-bb0c-4ece-9641-6f7efc673353-part14', 'scsi-SQEMU_QEMU_HARDDISK_3e566a5b-bb0c-4ece-9641-6f7efc673353-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3e566a5b-bb0c-4ece-9641-6f7efc673353-part15', 'scsi-SQEMU_QEMU_HARDDISK_3e566a5b-bb0c-4ece-9641-6f7efc673353-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3e566a5b-bb0c-4ece-9641-6f7efc673353-part1', 'scsi-SQEMU_QEMU_HARDDISK_3e566a5b-bb0c-4ece-9641-6f7efc673353-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': 
'79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-02-08 05:54:12.362731 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-08 05:54:12.362822 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-08 05:54:12.362835 | orchestrator | skipping: [testbed-node-0] 2026-02-08 05:54:12.362847 | orchestrator | 2026-02-08 05:54:12.362857 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-02-08 05:54:12.362867 | orchestrator | Sunday 08 February 2026 05:54:12 +0000 (0:00:00.531) 0:03:10.165 ******* 2026-02-08 05:54:12.362898 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 
'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-08 05:54:12.362911 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-08 05:54:12.362933 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-08 05:54:12.362944 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-08-02-32-50-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 
'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-08 05:54:12.362970 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-08 05:54:12.362980 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-08 05:54:12.362996 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 
None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-08 05:54:12.363013 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3e566a5b-bb0c-4ece-9641-6f7efc673353', 'scsi-SQEMU_QEMU_HARDDISK_3e566a5b-bb0c-4ece-9641-6f7efc673353'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '3e566a5b', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3e566a5b-bb0c-4ece-9641-6f7efc673353-part16', 'scsi-SQEMU_QEMU_HARDDISK_3e566a5b-bb0c-4ece-9641-6f7efc673353-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3e566a5b-bb0c-4ece-9641-6f7efc673353-part14', 'scsi-SQEMU_QEMU_HARDDISK_3e566a5b-bb0c-4ece-9641-6f7efc673353-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3e566a5b-bb0c-4ece-9641-6f7efc673353-part15', 'scsi-SQEMU_QEMU_HARDDISK_3e566a5b-bb0c-4ece-9641-6f7efc673353-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': 
'5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3e566a5b-bb0c-4ece-9641-6f7efc673353-part1', 'scsi-SQEMU_QEMU_HARDDISK_3e566a5b-bb0c-4ece-9641-6f7efc673353-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-08 05:54:12.363031 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-08 05:54:41.794207 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-08 05:54:41.794383 | 
orchestrator | skipping: [testbed-node-0] 2026-02-08 05:54:41.794418 | orchestrator | 2026-02-08 05:54:41.794439 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-02-08 05:54:41.794458 | orchestrator | Sunday 08 February 2026 05:54:12 +0000 (0:00:00.233) 0:03:10.399 ******* 2026-02-08 05:54:41.794469 | orchestrator | ok: [testbed-node-0] 2026-02-08 05:54:41.794481 | orchestrator | 2026-02-08 05:54:41.794592 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2026-02-08 05:54:41.794606 | orchestrator | Sunday 08 February 2026 05:54:12 +0000 (0:00:00.556) 0:03:10.956 ******* 2026-02-08 05:54:41.794617 | orchestrator | ok: [testbed-node-0] 2026-02-08 05:54:41.794628 | orchestrator | 2026-02-08 05:54:41.794639 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-02-08 05:54:41.794651 | orchestrator | Sunday 08 February 2026 05:54:13 +0000 (0:00:00.154) 0:03:11.110 ******* 2026-02-08 05:54:41.794661 | orchestrator | ok: [testbed-node-0] 2026-02-08 05:54:41.794672 | orchestrator | 2026-02-08 05:54:41.794683 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-02-08 05:54:41.794694 | orchestrator | Sunday 08 February 2026 05:54:13 +0000 (0:00:00.482) 0:03:11.592 ******* 2026-02-08 05:54:41.794705 | orchestrator | skipping: [testbed-node-0] 2026-02-08 05:54:41.794717 | orchestrator | 2026-02-08 05:54:41.794728 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-02-08 05:54:41.794739 | orchestrator | Sunday 08 February 2026 05:54:13 +0000 (0:00:00.148) 0:03:11.741 ******* 2026-02-08 05:54:41.794750 | orchestrator | skipping: [testbed-node-0] 2026-02-08 05:54:41.794761 | orchestrator | 2026-02-08 05:54:41.794772 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-02-08 
05:54:41.794783 | orchestrator | Sunday 08 February 2026 05:54:13 +0000 (0:00:00.246) 0:03:11.987 ******* 2026-02-08 05:54:41.794794 | orchestrator | skipping: [testbed-node-0] 2026-02-08 05:54:41.794806 | orchestrator | 2026-02-08 05:54:41.794816 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-02-08 05:54:41.794828 | orchestrator | Sunday 08 February 2026 05:54:14 +0000 (0:00:00.150) 0:03:12.138 ******* 2026-02-08 05:54:41.794839 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-02-08 05:54:41.794850 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2026-02-08 05:54:41.794862 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2026-02-08 05:54:41.794873 | orchestrator | 2026-02-08 05:54:41.794884 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-02-08 05:54:41.794895 | orchestrator | Sunday 08 February 2026 05:54:15 +0000 (0:00:00.992) 0:03:13.130 ******* 2026-02-08 05:54:41.794906 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-02-08 05:54:41.794933 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-02-08 05:54:41.794945 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-02-08 05:54:41.794956 | orchestrator | skipping: [testbed-node-0] 2026-02-08 05:54:41.794968 | orchestrator | 2026-02-08 05:54:41.794979 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-02-08 05:54:41.794990 | orchestrator | Sunday 08 February 2026 05:54:15 +0000 (0:00:00.166) 0:03:13.297 ******* 2026-02-08 05:54:41.795001 | orchestrator | skipping: [testbed-node-0] 2026-02-08 05:54:41.795012 | orchestrator | 2026-02-08 05:54:41.795023 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-02-08 05:54:41.795034 | orchestrator | Sunday 08 February 2026 05:54:15 +0000 
(0:00:00.187) 0:03:13.484 ******* 2026-02-08 05:54:41.795045 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-02-08 05:54:41.795056 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-08 05:54:41.795068 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-08 05:54:41.795090 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-02-08 05:54:41.795101 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-02-08 05:54:41.795112 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-02-08 05:54:41.795123 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-02-08 05:54:41.795134 | orchestrator | 2026-02-08 05:54:41.795145 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-02-08 05:54:41.795157 | orchestrator | Sunday 08 February 2026 05:54:16 +0000 (0:00:01.106) 0:03:14.591 ******* 2026-02-08 05:54:41.795168 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-02-08 05:54:41.795179 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-08 05:54:41.795190 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-08 05:54:41.795201 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-02-08 05:54:41.795233 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-02-08 05:54:41.795245 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-02-08 05:54:41.795256 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-02-08 
05:54:41.795267 | orchestrator | 2026-02-08 05:54:41.795278 | orchestrator | TASK [Get ceph cluster status] ************************************************* 2026-02-08 05:54:41.795304 | orchestrator | Sunday 08 February 2026 05:54:18 +0000 (0:00:02.019) 0:03:16.610 ******* 2026-02-08 05:54:41.795326 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] 2026-02-08 05:54:41.795337 | orchestrator | 2026-02-08 05:54:41.795348 | orchestrator | TASK [Display ceph health detail] ********************************************** 2026-02-08 05:54:41.795359 | orchestrator | Sunday 08 February 2026 05:54:19 +0000 (0:00:01.272) 0:03:17.882 ******* 2026-02-08 05:54:41.795370 | orchestrator | skipping: [testbed-node-0] 2026-02-08 05:54:41.795381 | orchestrator | 2026-02-08 05:54:41.795392 | orchestrator | TASK [Fail if cluster isn't in an acceptable state] **************************** 2026-02-08 05:54:41.795403 | orchestrator | Sunday 08 February 2026 05:54:20 +0000 (0:00:00.215) 0:03:18.098 ******* 2026-02-08 05:54:41.795414 | orchestrator | skipping: [testbed-node-0] 2026-02-08 05:54:41.795425 | orchestrator | 2026-02-08 05:54:41.795436 | orchestrator | TASK [Get the ceph quorum status] ********************************************** 2026-02-08 05:54:41.795447 | orchestrator | Sunday 08 February 2026 05:54:20 +0000 (0:00:00.145) 0:03:18.244 ******* 2026-02-08 05:54:41.795458 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] 2026-02-08 05:54:41.795469 | orchestrator | 2026-02-08 05:54:41.795480 | orchestrator | TASK [Fail if the cluster quorum isn't in an acceptable state] ***************** 2026-02-08 05:54:41.795515 | orchestrator | Sunday 08 February 2026 05:54:21 +0000 (0:00:01.267) 0:03:19.511 ******* 2026-02-08 05:54:41.795531 | orchestrator | skipping: [testbed-node-0] 2026-02-08 05:54:41.795542 | orchestrator | 2026-02-08 05:54:41.795553 | orchestrator | TASK [Ensure /var/lib/ceph/bootstrap-rbd-mirror is present] ******************** 
2026-02-08 05:54:41.795564 | orchestrator | Sunday 08 February 2026 05:54:21 +0000 (0:00:00.149) 0:03:19.660 ******* 2026-02-08 05:54:41.795575 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-02-08 05:54:41.795586 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-08 05:54:41.795596 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-08 05:54:41.795607 | orchestrator | 2026-02-08 05:54:41.795618 | orchestrator | TASK [Create potentially missing keys (rbd and rbd-mirror)] ******************** 2026-02-08 05:54:41.795629 | orchestrator | Sunday 08 February 2026 05:54:23 +0000 (0:00:01.485) 0:03:21.145 ******* 2026-02-08 05:54:41.795640 | orchestrator | ok: [testbed-node-0] => (item=['bootstrap-rbd', 'testbed-node-0']) 2026-02-08 05:54:41.795662 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=['bootstrap-rbd', 'testbed-node-1']) 2026-02-08 05:54:41.795675 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=['bootstrap-rbd', 'testbed-node-2']) 2026-02-08 05:54:41.795685 | orchestrator | ok: [testbed-node-0] => (item=['bootstrap-rbd-mirror', 'testbed-node-0']) 2026-02-08 05:54:41.795696 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=['bootstrap-rbd-mirror', 'testbed-node-1']) 2026-02-08 05:54:41.795708 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=['bootstrap-rbd-mirror', 'testbed-node-2']) 2026-02-08 05:54:41.795719 | orchestrator | 2026-02-08 05:54:41.795735 | orchestrator | TASK [Stop ceph mon] *********************************************************** 2026-02-08 05:54:41.795746 | orchestrator | Sunday 08 February 2026 05:54:34 +0000 (0:00:11.845) 0:03:32.991 ******* 2026-02-08 05:54:41.795757 | orchestrator | changed: [testbed-node-0] => (item=testbed-node-0) 2026-02-08 05:54:41.795768 | orchestrator | ok: 
[testbed-node-0] => (item=testbed-node-0) 2026-02-08 05:54:41.795779 | orchestrator | 2026-02-08 05:54:41.795790 | orchestrator | TASK [Mask the mgr service] **************************************************** 2026-02-08 05:54:41.795801 | orchestrator | Sunday 08 February 2026 05:54:37 +0000 (0:00:03.006) 0:03:35.997 ******* 2026-02-08 05:54:41.795811 | orchestrator | changed: [testbed-node-0] 2026-02-08 05:54:41.795824 | orchestrator | 2026-02-08 05:54:41.795843 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-02-08 05:54:41.795858 | orchestrator | Sunday 08 February 2026 05:54:39 +0000 (0:00:01.511) 0:03:37.509 ******* 2026-02-08 05:54:41.795873 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0 2026-02-08 05:54:41.795898 | orchestrator | 2026-02-08 05:54:41.795919 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-02-08 05:54:41.795937 | orchestrator | Sunday 08 February 2026 05:54:40 +0000 (0:00:00.569) 0:03:38.079 ******* 2026-02-08 05:54:41.795954 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0 2026-02-08 05:54:41.795972 | orchestrator | 2026-02-08 05:54:41.795991 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-02-08 05:54:41.796009 | orchestrator | Sunday 08 February 2026 05:54:40 +0000 (0:00:00.886) 0:03:38.965 ******* 2026-02-08 05:54:41.796027 | orchestrator | ok: [testbed-node-0] 2026-02-08 05:54:41.796043 | orchestrator | 2026-02-08 05:54:41.796054 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-02-08 05:54:41.796065 | orchestrator | Sunday 08 February 2026 05:54:41 +0000 (0:00:00.582) 0:03:39.548 ******* 2026-02-08 05:54:41.796076 | orchestrator | skipping: [testbed-node-0] 2026-02-08 05:54:41.796087 | orchestrator | 
2026-02-08 05:54:41.796098 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-02-08 05:54:41.796108 | orchestrator | Sunday 08 February 2026 05:54:41 +0000 (0:00:00.139) 0:03:39.687 ******* 2026-02-08 05:54:41.796119 | orchestrator | skipping: [testbed-node-0] 2026-02-08 05:54:41.796130 | orchestrator | 2026-02-08 05:54:41.796151 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-02-08 05:54:55.036063 | orchestrator | Sunday 08 February 2026 05:54:41 +0000 (0:00:00.143) 0:03:39.831 ******* 2026-02-08 05:54:55.036168 | orchestrator | skipping: [testbed-node-0] 2026-02-08 05:54:55.036184 | orchestrator | 2026-02-08 05:54:55.036196 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-02-08 05:54:55.036206 | orchestrator | Sunday 08 February 2026 05:54:41 +0000 (0:00:00.141) 0:03:39.972 ******* 2026-02-08 05:54:55.036216 | orchestrator | ok: [testbed-node-0] 2026-02-08 05:54:55.036227 | orchestrator | 2026-02-08 05:54:55.036237 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-02-08 05:54:55.036247 | orchestrator | Sunday 08 February 2026 05:54:42 +0000 (0:00:00.554) 0:03:40.526 ******* 2026-02-08 05:54:55.036257 | orchestrator | skipping: [testbed-node-0] 2026-02-08 05:54:55.036287 | orchestrator | 2026-02-08 05:54:55.036297 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-02-08 05:54:55.036307 | orchestrator | Sunday 08 February 2026 05:54:42 +0000 (0:00:00.143) 0:03:40.670 ******* 2026-02-08 05:54:55.036316 | orchestrator | skipping: [testbed-node-0] 2026-02-08 05:54:55.036326 | orchestrator | 2026-02-08 05:54:55.036336 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-02-08 05:54:55.036346 | orchestrator | Sunday 08 February 2026 05:54:42 +0000 
(0:00:00.143) 0:03:40.814 ******* 2026-02-08 05:54:55.036355 | orchestrator | ok: [testbed-node-0] 2026-02-08 05:54:55.036364 | orchestrator | 2026-02-08 05:54:55.036374 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-02-08 05:54:55.036384 | orchestrator | Sunday 08 February 2026 05:54:43 +0000 (0:00:00.609) 0:03:41.423 ******* 2026-02-08 05:54:55.036393 | orchestrator | ok: [testbed-node-0] 2026-02-08 05:54:55.036403 | orchestrator | 2026-02-08 05:54:55.036412 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-02-08 05:54:55.036422 | orchestrator | Sunday 08 February 2026 05:54:43 +0000 (0:00:00.573) 0:03:41.996 ******* 2026-02-08 05:54:55.036431 | orchestrator | skipping: [testbed-node-0] 2026-02-08 05:54:55.036440 | orchestrator | 2026-02-08 05:54:55.036450 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-02-08 05:54:55.036459 | orchestrator | Sunday 08 February 2026 05:54:44 +0000 (0:00:00.142) 0:03:42.139 ******* 2026-02-08 05:54:55.036469 | orchestrator | ok: [testbed-node-0] 2026-02-08 05:54:55.036478 | orchestrator | 2026-02-08 05:54:55.036488 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-02-08 05:54:55.036544 | orchestrator | Sunday 08 February 2026 05:54:44 +0000 (0:00:00.155) 0:03:42.294 ******* 2026-02-08 05:54:55.036554 | orchestrator | skipping: [testbed-node-0] 2026-02-08 05:54:55.036563 | orchestrator | 2026-02-08 05:54:55.036573 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-02-08 05:54:55.036583 | orchestrator | Sunday 08 February 2026 05:54:44 +0000 (0:00:00.140) 0:03:42.435 ******* 2026-02-08 05:54:55.036593 | orchestrator | skipping: [testbed-node-0] 2026-02-08 05:54:55.036602 | orchestrator | 2026-02-08 05:54:55.036614 | orchestrator | TASK [ceph-handler : Set_fact 
handler_rgw_status] ****************************** 2026-02-08 05:54:55.036626 | orchestrator | Sunday 08 February 2026 05:54:44 +0000 (0:00:00.141) 0:03:42.576 ******* 2026-02-08 05:54:55.036637 | orchestrator | skipping: [testbed-node-0] 2026-02-08 05:54:55.036648 | orchestrator | 2026-02-08 05:54:55.036660 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-02-08 05:54:55.036671 | orchestrator | Sunday 08 February 2026 05:54:44 +0000 (0:00:00.429) 0:03:43.006 ******* 2026-02-08 05:54:55.036683 | orchestrator | skipping: [testbed-node-0] 2026-02-08 05:54:55.036694 | orchestrator | 2026-02-08 05:54:55.036706 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-02-08 05:54:55.036731 | orchestrator | Sunday 08 February 2026 05:54:45 +0000 (0:00:00.141) 0:03:43.148 ******* 2026-02-08 05:54:55.036743 | orchestrator | skipping: [testbed-node-0] 2026-02-08 05:54:55.036754 | orchestrator | 2026-02-08 05:54:55.036765 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-02-08 05:54:55.036776 | orchestrator | Sunday 08 February 2026 05:54:45 +0000 (0:00:00.165) 0:03:43.313 ******* 2026-02-08 05:54:55.036787 | orchestrator | ok: [testbed-node-0] 2026-02-08 05:54:55.036799 | orchestrator | 2026-02-08 05:54:55.036810 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-02-08 05:54:55.036821 | orchestrator | Sunday 08 February 2026 05:54:45 +0000 (0:00:00.168) 0:03:43.482 ******* 2026-02-08 05:54:55.036832 | orchestrator | ok: [testbed-node-0] 2026-02-08 05:54:55.036843 | orchestrator | 2026-02-08 05:54:55.036855 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-02-08 05:54:55.036864 | orchestrator | Sunday 08 February 2026 05:54:45 +0000 (0:00:00.158) 0:03:43.640 ******* 2026-02-08 05:54:55.036873 | orchestrator | ok: 
[testbed-node-0]
2026-02-08 05:54:55.036892 | orchestrator |
2026-02-08 05:54:55.036901 | orchestrator | TASK [ceph-common : Include configure_repository.yml] **************************
2026-02-08 05:54:55.036911 | orchestrator | Sunday 08 February 2026 05:54:45 +0000 (0:00:00.250) 0:03:43.890 *******
2026-02-08 05:54:55.036921 | orchestrator | skipping: [testbed-node-0]
2026-02-08 05:54:55.036930 | orchestrator |
2026-02-08 05:54:55.036940 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] **************
2026-02-08 05:54:55.036949 | orchestrator | Sunday 08 February 2026 05:54:45 +0000 (0:00:00.147) 0:03:44.038 *******
2026-02-08 05:54:55.036959 | orchestrator | skipping: [testbed-node-0]
2026-02-08 05:54:55.036968 | orchestrator |
2026-02-08 05:54:55.036978 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] ****************
2026-02-08 05:54:55.036987 | orchestrator | Sunday 08 February 2026 05:54:46 +0000 (0:00:00.135) 0:03:44.174 *******
2026-02-08 05:54:55.036997 | orchestrator | skipping: [testbed-node-0]
2026-02-08 05:54:55.037007 | orchestrator |
2026-02-08 05:54:55.037016 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ********************
2026-02-08 05:54:55.037026 | orchestrator | Sunday 08 February 2026 05:54:46 +0000 (0:00:00.155) 0:03:44.330 *******
2026-02-08 05:54:55.037035 | orchestrator | skipping: [testbed-node-0]
2026-02-08 05:54:55.037045 | orchestrator |
2026-02-08 05:54:55.037054 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] ***************
2026-02-08 05:54:55.037064 | orchestrator | Sunday 08 February 2026 05:54:46 +0000 (0:00:00.116) 0:03:44.447 *******
2026-02-08 05:54:55.037089 | orchestrator | skipping: [testbed-node-0]
2026-02-08 05:54:55.037099 | orchestrator |
2026-02-08 05:54:55.037109 | orchestrator | TASK [ceph-common : Get ceph version] ******************************************
2026-02-08 05:54:55.037118 | orchestrator | Sunday 08 February 2026 05:54:46 +0000 (0:00:00.120) 0:03:44.567 *******
2026-02-08 05:54:55.037128 | orchestrator | skipping: [testbed-node-0]
2026-02-08 05:54:55.037138 | orchestrator |
2026-02-08 05:54:55.037148 | orchestrator | TASK [ceph-common : Set_fact ceph_version] *************************************
2026-02-08 05:54:55.037157 | orchestrator | Sunday 08 February 2026 05:54:46 +0000 (0:00:00.145) 0:03:44.713 *******
2026-02-08 05:54:55.037167 | orchestrator | skipping: [testbed-node-0]
2026-02-08 05:54:55.037176 | orchestrator |
2026-02-08 05:54:55.037186 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] ***
2026-02-08 05:54:55.037196 | orchestrator | Sunday 08 February 2026 05:54:47 +0000 (0:00:00.420) 0:03:45.133 *******
2026-02-08 05:54:55.037206 | orchestrator | skipping: [testbed-node-0]
2026-02-08 05:54:55.037215 | orchestrator |
2026-02-08 05:54:55.037225 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] *************************
2026-02-08 05:54:55.037235 | orchestrator | Sunday 08 February 2026 05:54:47 +0000 (0:00:00.147) 0:03:45.280 *******
2026-02-08 05:54:55.037245 | orchestrator | skipping: [testbed-node-0]
2026-02-08 05:54:55.037254 | orchestrator |
2026-02-08 05:54:55.037264 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************
2026-02-08 05:54:55.037273 | orchestrator | Sunday 08 February 2026 05:54:47 +0000 (0:00:00.129) 0:03:45.410 *******
2026-02-08 05:54:55.037283 | orchestrator | skipping: [testbed-node-0]
2026-02-08 05:54:55.037292 | orchestrator |
2026-02-08 05:54:55.037302 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ********************
2026-02-08 05:54:55.037311 | orchestrator | Sunday 08 February 2026 05:54:47 +0000 (0:00:00.124) 0:03:45.535 *******
2026-02-08 05:54:55.037321 | orchestrator | skipping: [testbed-node-0]
2026-02-08 05:54:55.037330 | orchestrator |
2026-02-08 05:54:55.037340 | orchestrator | TASK [ceph-common : Include selinux.yml] ***************************************
2026-02-08 05:54:55.037350 | orchestrator | Sunday 08 February 2026 05:54:47 +0000 (0:00:00.135) 0:03:45.670 *******
2026-02-08 05:54:55.037359 | orchestrator | skipping: [testbed-node-0]
2026-02-08 05:54:55.037369 | orchestrator |
2026-02-08 05:54:55.037378 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] ***************
2026-02-08 05:54:55.037388 | orchestrator | Sunday 08 February 2026 05:54:47 +0000 (0:00:00.210) 0:03:45.881 *******
2026-02-08 05:54:55.037404 | orchestrator | ok: [testbed-node-0]
2026-02-08 05:54:55.037414 | orchestrator |
2026-02-08 05:54:55.037424 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ******************************
2026-02-08 05:54:55.037433 | orchestrator | Sunday 08 February 2026 05:54:48 +0000 (0:00:01.069) 0:03:46.950 *******
2026-02-08 05:54:55.037443 | orchestrator | ok: [testbed-node-0]
2026-02-08 05:54:55.037452 | orchestrator |
2026-02-08 05:54:55.037462 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] ***********************
2026-02-08 05:54:55.037471 | orchestrator | Sunday 08 February 2026 05:54:50 +0000 (0:00:00.601) 0:03:48.420 *******
2026-02-08 05:54:55.037481 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-0
2026-02-08 05:54:55.037491 | orchestrator |
2026-02-08 05:54:55.037521 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************
2026-02-08 05:54:55.037530 | orchestrator | Sunday 08 February 2026 05:54:50 +0000 (0:00:00.157) 0:03:49.022 *******
2026-02-08 05:54:55.037540 | orchestrator | skipping: [testbed-node-0]
2026-02-08 05:54:55.037550 | orchestrator |
2026-02-08 05:54:55.037560 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] ****************
2026-02-08 05:54:55.037574 | orchestrator | Sunday 08 February 2026 05:54:51 +0000 (0:00:00.157) 0:03:49.180 *******
2026-02-08 05:54:55.037584 | orchestrator | skipping: [testbed-node-0]
2026-02-08 05:54:55.037594 | orchestrator |
2026-02-08 05:54:55.037604 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] **************************
2026-02-08 05:54:55.037613 | orchestrator | Sunday 08 February 2026 05:54:51 +0000 (0:00:00.158) 0:03:49.338 *******
2026-02-08 05:54:55.037623 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-02-08 05:54:55.037632 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-02-08 05:54:55.037642 | orchestrator |
2026-02-08 05:54:55.037652 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ********************
2026-02-08 05:54:55.037661 | orchestrator | Sunday 08 February 2026 05:54:52 +0000 (0:00:01.180) 0:03:50.519 *******
2026-02-08 05:54:55.037671 | orchestrator | ok: [testbed-node-0]
2026-02-08 05:54:55.037681 | orchestrator |
2026-02-08 05:54:55.037690 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************
2026-02-08 05:54:55.037700 | orchestrator | Sunday 08 February 2026 05:54:53 +0000 (0:00:00.705) 0:03:51.225 *******
2026-02-08 05:54:55.037710 | orchestrator | skipping: [testbed-node-0]
2026-02-08 05:54:55.037719 | orchestrator |
2026-02-08 05:54:55.037729 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ********************
2026-02-08 05:54:55.037738 | orchestrator | Sunday 08 February 2026 05:54:53 +0000 (0:00:00.142) 0:03:51.367 *******
2026-02-08 05:54:55.037748 | orchestrator | skipping: [testbed-node-0]
2026-02-08 05:54:55.037758 | orchestrator |
2026-02-08 05:54:55.037767 | orchestrator | TASK [ceph-container-common : Include registry.yml] ****************************
2026-02-08 05:54:55.037777 | orchestrator | Sunday 08 February 2026 05:54:53 +0000 (0:00:00.148) 0:03:51.516 *******
2026-02-08 05:54:55.037786 | orchestrator | skipping: [testbed-node-0]
2026-02-08 05:54:55.037796 | orchestrator |
2026-02-08 05:54:55.037806 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] *************************
2026-02-08 05:54:55.037815 | orchestrator | Sunday 08 February 2026 05:54:53 +0000 (0:00:00.171) 0:03:51.688 *******
2026-02-08 05:54:55.037825 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-0
2026-02-08 05:54:55.037834 | orchestrator |
2026-02-08 05:54:55.037844 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ********************
2026-02-08 05:54:55.037854 | orchestrator | Sunday 08 February 2026 05:54:54 +0000 (0:00:00.641) 0:03:52.330 *******
2026-02-08 05:54:55.037863 | orchestrator | ok: [testbed-node-0]
2026-02-08 05:54:55.037873 | orchestrator |
2026-02-08 05:54:55.037889 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] ***
2026-02-08 05:55:09.537436 | orchestrator | Sunday 08 February 2026 05:54:55 +0000 (0:00:00.741) 0:03:53.071 *******
2026-02-08 05:55:09.537671 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-02-08 05:55:09.537693 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/prometheus:v2.7.2)
2026-02-08 05:55:09.537705 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/grafana/grafana:6.7.4)
2026-02-08 05:55:09.537717 | orchestrator | skipping: [testbed-node-0]
2026-02-08 05:55:09.537730 | orchestrator |
2026-02-08 05:55:09.537742 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] ***********
2026-02-08 05:55:09.537754 | orchestrator | Sunday 08 February 2026 05:54:55 +0000 (0:00:00.151) 0:03:53.223 *******
2026-02-08 05:55:09.537764 | orchestrator | skipping: [testbed-node-0]
2026-02-08 05:55:09.537775 | orchestrator |
2026-02-08 05:55:09.537851 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] *********************
2026-02-08 05:55:09.537871 | orchestrator | Sunday 08 February 2026 05:54:55 +0000 (0:00:00.146) 0:03:53.369 *******
2026-02-08 05:55:09.537887 | orchestrator | skipping: [testbed-node-0]
2026-02-08 05:55:09.537913 | orchestrator |
2026-02-08 05:55:09.537935 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************
2026-02-08 05:55:09.537952 | orchestrator | Sunday 08 February 2026 05:54:55 +0000 (0:00:00.195) 0:03:53.565 *******
2026-02-08 05:55:09.537970 | orchestrator | skipping: [testbed-node-0]
2026-02-08 05:55:09.537989 | orchestrator |
2026-02-08 05:55:09.538009 | orchestrator | TASK [ceph-container-common : Load ceph dev image] *****************************
2026-02-08 05:55:09.538096 | orchestrator | Sunday 08 February 2026 05:54:55 +0000 (0:00:00.151) 0:03:53.717 *******
2026-02-08 05:55:09.538111 | orchestrator | skipping: [testbed-node-0]
2026-02-08 05:55:09.538124 | orchestrator |
2026-02-08 05:55:09.538137 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ******************
2026-02-08 05:55:09.538151 | orchestrator | Sunday 08 February 2026 05:54:55 +0000 (0:00:00.154) 0:03:53.871 *******
2026-02-08 05:55:09.538164 | orchestrator | skipping: [testbed-node-0]
2026-02-08 05:55:09.538177 | orchestrator |
2026-02-08 05:55:09.538190 | orchestrator | TASK [ceph-container-common : Get ceph version] ********************************
2026-02-08 05:55:09.538202 | orchestrator | Sunday 08 February 2026 05:54:56 +0000 (0:00:00.432) 0:03:54.304 *******
2026-02-08 05:55:09.538216 | orchestrator | ok: [testbed-node-0]
2026-02-08 05:55:09.538230 | orchestrator |
2026-02-08 05:55:09.538243 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] ***
2026-02-08 05:55:09.538256 | orchestrator | Sunday 08 February 2026 05:54:57 +0000 (0:00:01.695) 0:03:56.000 *******
2026-02-08 05:55:09.538270 | orchestrator | ok: [testbed-node-0]
2026-02-08 05:55:09.538283 | orchestrator |
2026-02-08 05:55:09.538294 | orchestrator | TASK [ceph-container-common : Include release.yml] *****************************
2026-02-08 05:55:09.538304 | orchestrator | Sunday 08 February 2026 05:54:58 +0000 (0:00:00.134) 0:03:56.134 *******
2026-02-08 05:55:09.538315 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-0
2026-02-08 05:55:09.538326 | orchestrator |
2026-02-08 05:55:09.538337 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] *********************
2026-02-08 05:55:09.538348 | orchestrator | Sunday 08 February 2026 05:54:58 +0000 (0:00:00.592) 0:03:56.727 *******
2026-02-08 05:55:09.538358 | orchestrator | skipping: [testbed-node-0]
2026-02-08 05:55:09.538369 | orchestrator |
2026-02-08 05:55:09.538380 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ********************
2026-02-08 05:55:09.538405 | orchestrator | Sunday 08 February 2026 05:54:58 +0000 (0:00:00.162) 0:03:56.889 *******
2026-02-08 05:55:09.538416 | orchestrator | skipping: [testbed-node-0]
2026-02-08 05:55:09.538427 | orchestrator |
2026-02-08 05:55:09.538438 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ******************
2026-02-08 05:55:09.538448 | orchestrator | Sunday 08 February 2026 05:54:59 +0000 (0:00:00.212) 0:03:57.101 *******
2026-02-08 05:55:09.538460 | orchestrator | skipping: [testbed-node-0]
2026-02-08 05:55:09.538471 | orchestrator |
2026-02-08 05:55:09.538482 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] *********************
2026-02-08 05:55:09.538493 | orchestrator | Sunday 08 February 2026 05:54:59 +0000 (0:00:00.156) 0:03:57.258 *******
2026-02-08 05:55:09.538539 | orchestrator | skipping: [testbed-node-0]
2026-02-08 05:55:09.538551 | orchestrator |
2026-02-08 05:55:09.538561 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ******************
2026-02-08 05:55:09.538572 | orchestrator | Sunday 08 February 2026 05:54:59 +0000 (0:00:00.171) 0:03:57.429 *******
2026-02-08 05:55:09.538583 | orchestrator | skipping: [testbed-node-0]
2026-02-08 05:55:09.538594 | orchestrator |
2026-02-08 05:55:09.538605 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] *******************
2026-02-08 05:55:09.538615 | orchestrator | Sunday 08 February 2026 05:54:59 +0000 (0:00:00.170) 0:03:57.600 *******
2026-02-08 05:55:09.538626 | orchestrator | skipping: [testbed-node-0]
2026-02-08 05:55:09.538637 | orchestrator |
2026-02-08 05:55:09.538648 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] *******************
2026-02-08 05:55:09.538658 | orchestrator | Sunday 08 February 2026 05:54:59 +0000 (0:00:00.155) 0:03:57.756 *******
2026-02-08 05:55:09.538669 | orchestrator | skipping: [testbed-node-0]
2026-02-08 05:55:09.538680 | orchestrator |
2026-02-08 05:55:09.538691 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ********************
2026-02-08 05:55:09.538702 | orchestrator | Sunday 08 February 2026 05:54:59 +0000 (0:00:00.153) 0:03:57.909 *******
2026-02-08 05:55:09.538712 | orchestrator | skipping: [testbed-node-0]
2026-02-08 05:55:09.538723 | orchestrator |
2026-02-08 05:55:09.538734 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] **********************
2026-02-08 05:55:09.538745 | orchestrator | Sunday 08 February 2026 05:55:00 +0000 (0:00:00.149) 0:03:58.059 *******
2026-02-08 05:55:09.538756 | orchestrator | ok: [testbed-node-0]
2026-02-08 05:55:09.538766 | orchestrator |
2026-02-08 05:55:09.538777 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] **********************
2026-02-08 05:55:09.538788 | orchestrator | Sunday 08 February 2026 05:55:00 +0000 (0:00:00.539) 0:03:58.599 *******
2026-02-08 05:55:09.538799 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-0
2026-02-08 05:55:09.538811 | orchestrator |
2026-02-08 05:55:09.538843 | orchestrator | TASK [ceph-config : Create ceph initial directories] ***************************
2026-02-08 05:55:09.538855 | orchestrator | Sunday 08 February 2026 05:55:01 +0000 (0:00:00.581) 0:03:59.180 *******
2026-02-08 05:55:09.538866 | orchestrator | ok: [testbed-node-0] => (item=/etc/ceph)
2026-02-08 05:55:09.538878 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/)
2026-02-08 05:55:09.538889 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/mon)
2026-02-08 05:55:09.538899 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/osd)
2026-02-08 05:55:09.538910 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/mds)
2026-02-08 05:55:09.538921 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/tmp)
2026-02-08 05:55:09.538931 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/crash)
2026-02-08 05:55:09.538942 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/radosgw)
2026-02-08 05:55:09.538953 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rgw)
2026-02-08 05:55:09.538964 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mgr)
2026-02-08 05:55:09.538975 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mds)
2026-02-08 05:55:09.538986 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-osd)
2026-02-08 05:55:09.538996 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd)
2026-02-08 05:55:09.539007 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-02-08 05:55:09.539018 | orchestrator | ok: [testbed-node-0] => (item=/var/run/ceph)
2026-02-08 05:55:09.539029 | orchestrator | ok: [testbed-node-0] => (item=/var/log/ceph)
2026-02-08 05:55:09.539040 | orchestrator |
2026-02-08 05:55:09.539050 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************
2026-02-08 05:55:09.539061 | orchestrator | Sunday 08 February 2026 05:55:07 +0000 (0:00:06.037) 0:04:05.218 *******
2026-02-08 05:55:09.539082 | orchestrator | skipping: [testbed-node-0]
2026-02-08 05:55:09.539093 | orchestrator |
2026-02-08 05:55:09.539105 | orchestrator | TASK [ceph-config : Reset num_osds] ********************************************
2026-02-08 05:55:09.539115 | orchestrator | Sunday 08 February 2026 05:55:07 +0000 (0:00:00.138) 0:04:05.357 *******
2026-02-08 05:55:09.539126 | orchestrator | skipping: [testbed-node-0]
2026-02-08 05:55:09.539137 | orchestrator |
2026-02-08 05:55:09.539148 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] *********************
2026-02-08 05:55:09.539159 | orchestrator | Sunday 08 February 2026 05:55:07 +0000 (0:00:00.137) 0:04:05.494 *******
2026-02-08 05:55:09.539169 | orchestrator | skipping: [testbed-node-0]
2026-02-08 05:55:09.539180 | orchestrator |
2026-02-08 05:55:09.539191 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ******************
2026-02-08 05:55:09.539202 | orchestrator | Sunday 08 February 2026 05:55:07 +0000 (0:00:00.127) 0:04:05.621 *******
2026-02-08 05:55:09.539213 | orchestrator | skipping: [testbed-node-0]
2026-02-08 05:55:09.539223 | orchestrator |
2026-02-08 05:55:09.539234 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] *********************************
2026-02-08 05:55:09.539244 | orchestrator | Sunday 08 February 2026 05:55:07 +0000 (0:00:00.131) 0:04:05.753 *******
2026-02-08 05:55:09.539255 | orchestrator | skipping: [testbed-node-0]
2026-02-08 05:55:09.539266 | orchestrator |
2026-02-08 05:55:09.539277 | orchestrator | TASK [ceph-config : Set_fact _devices] *****************************************
2026-02-08 05:55:09.539288 | orchestrator | Sunday 08 February 2026 05:55:07 +0000 (0:00:00.130) 0:04:05.883 *******
2026-02-08 05:55:09.539304 | orchestrator | skipping: [testbed-node-0]
2026-02-08 05:55:09.539315 | orchestrator |
2026-02-08 05:55:09.539326 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] ***
2026-02-08 05:55:09.539337 | orchestrator | Sunday 08 February 2026 05:55:07 +0000 (0:00:00.138) 0:04:06.022 *******
2026-02-08 05:55:09.539348 | orchestrator | skipping: [testbed-node-0]
2026-02-08 05:55:09.539358 | orchestrator |
2026-02-08 05:55:09.539369 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] ***
2026-02-08 05:55:09.539380 | orchestrator | Sunday 08 February 2026 05:55:08 +0000 (0:00:00.136) 0:04:06.158 *******
2026-02-08 05:55:09.539391 | orchestrator | skipping: [testbed-node-0]
2026-02-08 05:55:09.539402 | orchestrator |
2026-02-08 05:55:09.539413 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] ***
2026-02-08 05:55:09.539424 | orchestrator | Sunday 08 February 2026 05:55:08 +0000 (0:00:00.183) 0:04:06.341 *******
2026-02-08 05:55:09.539435 | orchestrator | skipping: [testbed-node-0]
2026-02-08 05:55:09.539446 | orchestrator |
2026-02-08 05:55:09.539456 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] ***
2026-02-08 05:55:09.539467 | orchestrator | Sunday 08 February 2026 05:55:08 +0000 (0:00:00.132) 0:04:06.474 *******
2026-02-08 05:55:09.539478 | orchestrator | skipping: [testbed-node-0]
2026-02-08 05:55:09.539489 | orchestrator |
2026-02-08 05:55:09.539516 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] *********************
2026-02-08 05:55:09.539528 | orchestrator | Sunday 08 February 2026 05:55:08 +0000 (0:00:00.422) 0:04:06.896 *******
2026-02-08 05:55:09.539539 | orchestrator | skipping: [testbed-node-0]
2026-02-08 05:55:09.539549 | orchestrator |
2026-02-08 05:55:09.539560 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] *******************************
2026-02-08 05:55:09.539571 | orchestrator | Sunday 08 February 2026 05:55:08 +0000 (0:00:00.145) 0:04:07.041 *******
2026-02-08 05:55:09.539582 | orchestrator | skipping: [testbed-node-0]
2026-02-08 05:55:09.539592 | orchestrator |
2026-02-08 05:55:09.539603 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] **************
2026-02-08 05:55:09.539614 | orchestrator | Sunday 08 February 2026 05:55:09 +0000 (0:00:00.168) 0:04:07.210 *******
2026-02-08 05:55:09.539625 | orchestrator | skipping: [testbed-node-0]
2026-02-08 05:55:09.539636 | orchestrator |
2026-02-08 05:55:09.539646 | orchestrator | TASK [ceph-config : Render rgw configs] ****************************************
2026-02-08 05:55:09.539664 | orchestrator | Sunday 08 February 2026 05:55:09 +0000 (0:00:00.230) 0:04:07.440 *******
2026-02-08 05:55:09.539675 | orchestrator | skipping: [testbed-node-0]
2026-02-08 05:55:09.539686 | orchestrator |
2026-02-08 05:55:09.539704 | orchestrator | TASK [ceph-config : Set config to cluster] *************************************
2026-02-08 05:55:30.767689 | orchestrator | Sunday 08 February 2026 05:55:09 +0000 (0:00:00.134) 0:04:07.575 *******
2026-02-08 05:55:30.767805 | orchestrator | skipping: [testbed-node-0]
2026-02-08 05:55:30.767822 | orchestrator |
2026-02-08 05:55:30.767835 | orchestrator | TASK [ceph-config : Set rgw configs to file] ***********************************
2026-02-08 05:55:30.767846 | orchestrator | Sunday 08 February 2026 05:55:09 +0000 (0:00:00.229) 0:04:07.804 *******
2026-02-08 05:55:30.767857 | orchestrator | skipping: [testbed-node-0]
2026-02-08 05:55:30.767869 | orchestrator |
2026-02-08 05:55:30.767880 | orchestrator | TASK [ceph-config : Create ceph conf directory] ********************************
2026-02-08 05:55:30.767891 | orchestrator | Sunday 08 February 2026 05:55:09 +0000 (0:00:00.137) 0:04:07.941 *******
2026-02-08 05:55:30.767902 | orchestrator | skipping: [testbed-node-0]
2026-02-08 05:55:30.767913 | orchestrator |
2026-02-08 05:55:30.767925 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-02-08 05:55:30.767937 | orchestrator | Sunday 08 February 2026 05:55:10 +0000 (0:00:00.128) 0:04:08.070 *******
2026-02-08 05:55:30.767948 | orchestrator | skipping: [testbed-node-0]
2026-02-08 05:55:30.767959 | orchestrator |
2026-02-08 05:55:30.767970 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-02-08 05:55:30.767981 | orchestrator | Sunday 08 February 2026 05:55:10 +0000 (0:00:00.135) 0:04:08.206 *******
2026-02-08 05:55:30.767991 | orchestrator | skipping: [testbed-node-0]
2026-02-08 05:55:30.768002 | orchestrator |
2026-02-08 05:55:30.768013 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-02-08 05:55:30.768024 | orchestrator | Sunday 08 February 2026 05:55:10 +0000 (0:00:00.144) 0:04:08.350 *******
2026-02-08 05:55:30.768035 | orchestrator | skipping: [testbed-node-0]
2026-02-08 05:55:30.768046 | orchestrator |
2026-02-08 05:55:30.768057 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-02-08 05:55:30.768067 | orchestrator | Sunday 08 February 2026 05:55:10 +0000 (0:00:00.153) 0:04:08.503 *******
2026-02-08 05:55:30.768078 | orchestrator | skipping: [testbed-node-0]
2026-02-08 05:55:30.768089 | orchestrator |
2026-02-08 05:55:30.768100 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-02-08 05:55:30.768111 | orchestrator | Sunday 08 February 2026 05:55:10 +0000 (0:00:00.149) 0:04:08.652 *******
2026-02-08 05:55:30.768122 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2026-02-08 05:55:30.768133 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2026-02-08 05:55:30.768144 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2026-02-08 05:55:30.768155 | orchestrator | skipping: [testbed-node-0]
2026-02-08 05:55:30.768166 | orchestrator |
2026-02-08 05:55:30.768177 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-02-08 05:55:30.768188 | orchestrator | Sunday 08 February 2026 05:55:11 +0000 (0:00:00.760) 0:04:09.412 *******
2026-02-08 05:55:30.768198 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2026-02-08 05:55:30.768210 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2026-02-08 05:55:30.768220 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2026-02-08 05:55:30.768231 | orchestrator | skipping: [testbed-node-0]
2026-02-08 05:55:30.768242 | orchestrator |
2026-02-08 05:55:30.768253 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-02-08 05:55:30.768280 | orchestrator | Sunday 08 February 2026 05:55:12 +0000 (0:00:01.132) 0:04:10.545 *******
2026-02-08 05:55:30.768291 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2026-02-08 05:55:30.768302 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2026-02-08 05:55:30.768335 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2026-02-08 05:55:30.768347 | orchestrator | skipping: [testbed-node-0]
2026-02-08 05:55:30.768358 | orchestrator |
2026-02-08 05:55:30.768369 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-02-08 05:55:30.768380 | orchestrator | Sunday 08 February 2026 05:55:12 +0000 (0:00:00.419) 0:04:10.965 *******
2026-02-08 05:55:30.768390 | orchestrator | skipping: [testbed-node-0]
2026-02-08 05:55:30.768401 | orchestrator |
2026-02-08 05:55:30.768412 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-02-08 05:55:30.768423 | orchestrator | Sunday 08 February 2026 05:55:13 +0000 (0:00:00.154) 0:04:11.119 *******
2026-02-08 05:55:30.768434 | orchestrator | skipping: [testbed-node-0] => (item=0)
2026-02-08 05:55:30.768445 | orchestrator | skipping: [testbed-node-0]
2026-02-08 05:55:30.768456 | orchestrator |
2026-02-08 05:55:30.768467 | orchestrator | TASK [ceph-config : Generate Ceph file] ****************************************
2026-02-08 05:55:30.768477 | orchestrator | Sunday 08 February 2026 05:55:13 +0000 (0:00:00.763) 0:04:11.883 *******
2026-02-08 05:55:30.768488 | orchestrator | changed: [testbed-node-0]
2026-02-08 05:55:30.768499 | orchestrator |
2026-02-08 05:55:30.768510 | orchestrator | TASK [ceph-mon : Set_fact container_exec_cmd] **********************************
2026-02-08 05:55:30.768548 | orchestrator | Sunday 08 February 2026 05:55:14 +0000 (0:00:00.163) 0:04:12.766 *******
2026-02-08 05:55:30.768559 | orchestrator | ok: [testbed-node-0]
2026-02-08 05:55:30.768570 | orchestrator |
2026-02-08 05:55:30.768581 | orchestrator | TASK [ceph-mon : Include deploy_monitors.yml] **********************************
2026-02-08 05:55:30.768592 | orchestrator | Sunday 08 February 2026 05:55:14 +0000 (0:00:00.163) 0:04:12.929 *******
2026-02-08 05:55:30.768603 | orchestrator | included: /ansible/roles/ceph-mon/tasks/deploy_monitors.yml for testbed-node-0
2026-02-08 05:55:30.768614 | orchestrator |
2026-02-08 05:55:30.768625 | orchestrator | TASK [ceph-mon : Check if monitor initial keyring already exists] **************
2026-02-08 05:55:30.768636 | orchestrator | Sunday 08 February 2026 05:55:15 +0000 (0:00:00.656) 0:04:13.586 *******
2026-02-08 05:55:30.768647 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)]
2026-02-08 05:55:30.768658 | orchestrator |
2026-02-08 05:55:30.768668 | orchestrator | TASK [ceph-mon : Generate monitor initial keyring] *****************************
2026-02-08 05:55:30.768679 | orchestrator | Sunday 08 February 2026 05:55:17 +0000 (0:00:02.160) 0:04:15.747 *******
2026-02-08 05:55:30.768690 | orchestrator | skipping: [testbed-node-0]
2026-02-08 05:55:30.768701 | orchestrator |
2026-02-08 05:55:30.768731 | orchestrator | TASK [ceph-mon : Set_fact _initial_mon_key_success] ****************************
2026-02-08 05:55:30.768743 | orchestrator | Sunday 08 February 2026 05:55:17 +0000 (0:00:00.197) 0:04:15.945 *******
2026-02-08 05:55:30.768753 | orchestrator | ok: [testbed-node-0]
2026-02-08 05:55:30.768764 | orchestrator |
2026-02-08 05:55:30.768776 | orchestrator | TASK [ceph-mon : Get initial keyring when it already exists] *******************
2026-02-08 05:55:30.768786 | orchestrator | Sunday 08 February 2026 05:55:18 +0000 (0:00:00.179) 0:04:16.124 *******
2026-02-08 05:55:30.768797 | orchestrator | ok: [testbed-node-0]
2026-02-08 05:55:30.768814 | orchestrator |
2026-02-08 05:55:30.768833 | orchestrator | TASK [ceph-mon : Create monitor initial keyring] *******************************
2026-02-08 05:55:30.768852 | orchestrator | Sunday 08 February 2026 05:55:18 +0000 (0:00:00.453) 0:04:16.578 *******
2026-02-08 05:55:30.768871 | orchestrator | changed: [testbed-node-0]
2026-02-08 05:55:30.768892 | orchestrator |
2026-02-08 05:55:30.768910 | orchestrator | TASK [ceph-mon : Copy the initial key in /etc/ceph (for containers)] ***********
2026-02-08 05:55:30.769000 | orchestrator | Sunday 08 February 2026 05:55:19 +0000 (0:00:01.150) 0:04:17.729 *******
2026-02-08 05:55:30.769012 | orchestrator | ok: [testbed-node-0]
2026-02-08 05:55:30.769023 | orchestrator |
2026-02-08 05:55:30.769034 | orchestrator | TASK [ceph-mon : Create monitor directory] *************************************
2026-02-08 05:55:30.769045 | orchestrator | Sunday 08 February 2026 05:55:20 +0000 (0:00:00.596) 0:04:18.325 *******
2026-02-08 05:55:30.769056 | orchestrator | ok: [testbed-node-0]
2026-02-08 05:55:30.769080 | orchestrator |
2026-02-08 05:55:30.769090 | orchestrator | TASK [ceph-mon : Recursively fix ownership of monitor directory] ***************
2026-02-08 05:55:30.769101 | orchestrator | Sunday 08 February 2026 05:55:20 +0000 (0:00:00.520) 0:04:18.845 *******
2026-02-08 05:55:30.769112 | orchestrator | ok: [testbed-node-0]
2026-02-08 05:55:30.769123 | orchestrator |
2026-02-08 05:55:30.769133 | orchestrator | TASK [ceph-mon : Create admin keyring] *****************************************
2026-02-08 05:55:30.769144 | orchestrator | Sunday 08 February 2026 05:55:21 +0000 (0:00:00.573) 0:04:19.419 *******
2026-02-08 05:55:30.769155 | orchestrator | ok: [testbed-node-0]
2026-02-08 05:55:30.769166 | orchestrator |
2026-02-08 05:55:30.769177 | orchestrator | TASK [ceph-mon : Slurp admin keyring] ******************************************
2026-02-08 05:55:30.769187 | orchestrator | Sunday 08 February 2026 05:55:22 +0000 (0:00:00.783) 0:04:20.203 *******
2026-02-08 05:55:30.769198 | orchestrator | ok: [testbed-node-0]
2026-02-08 05:55:30.769209 | orchestrator |
2026-02-08 05:55:30.769220 | orchestrator | TASK [ceph-mon : Copy admin keyring over to mons] ******************************
2026-02-08 05:55:30.769231 | orchestrator | Sunday 08 February 2026 05:55:22 +0000 (0:00:00.754) 0:04:20.957 *******
2026-02-08 05:55:30.769242 | orchestrator | ok: [testbed-node-0] => (item=None)
2026-02-08 05:55:30.769253 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=None)
2026-02-08 05:55:30.769263 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=None)
2026-02-08 05:55:30.769274 | orchestrator | ok: [testbed-node-0 -> {{ item }}]
2026-02-08 05:55:30.769285 | orchestrator |
2026-02-08 05:55:30.769296 | orchestrator | TASK [ceph-mon : Import admin keyring into mon keyring] ************************
2026-02-08 05:55:30.769307 | orchestrator | Sunday 08 February 2026 05:55:25 +0000 (0:00:02.866) 0:04:23.824 *******
2026-02-08 05:55:30.769317 | orchestrator | changed: [testbed-node-0]
2026-02-08 05:55:30.769328 | orchestrator |
2026-02-08 05:55:30.769339 | orchestrator | TASK [ceph-mon : Set_fact ceph-mon container command] **************************
2026-02-08 05:55:30.769357 | orchestrator | Sunday 08 February 2026 05:55:26 +0000 (0:00:01.139) 0:04:24.963 *******
2026-02-08 05:55:30.769368 | orchestrator | ok: [testbed-node-0]
2026-02-08 05:55:30.769379 | orchestrator |
2026-02-08 05:55:30.769390 | orchestrator | TASK [ceph-mon : Set_fact monmaptool container command] ************************
2026-02-08 05:55:30.769401 | orchestrator | Sunday 08 February 2026 05:55:27 +0000 (0:00:00.173) 0:04:25.137 *******
2026-02-08 05:55:30.769412 | orchestrator | ok: [testbed-node-0]
2026-02-08 05:55:30.769422 | orchestrator |
2026-02-08 05:55:30.769433 | orchestrator | TASK [ceph-mon : Generate initial monmap] **************************************
2026-02-08 05:55:30.769444 | orchestrator | Sunday 08 February 2026 05:55:27 +0000 (0:00:00.149) 0:04:25.286 *******
2026-02-08 05:55:30.769455 | orchestrator | ok: [testbed-node-0]
2026-02-08 05:55:30.769499 | orchestrator |
2026-02-08 05:55:30.769532 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs with keyring] *******************************
2026-02-08 05:55:30.769544 | orchestrator | Sunday 08 February 2026 05:55:28 +0000 (0:00:01.098) 0:04:26.385 *******
2026-02-08 05:55:30.769555 | orchestrator | ok: [testbed-node-0]
2026-02-08 05:55:30.769565 | orchestrator |
2026-02-08 05:55:30.769576 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs without keyring] ****************************
2026-02-08 05:55:30.769587 | orchestrator | Sunday 08 February 2026 05:55:28 +0000 (0:00:00.519) 0:04:26.905 *******
2026-02-08 05:55:30.769598 | orchestrator | skipping: [testbed-node-0]
2026-02-08 05:55:30.769608 | orchestrator |
2026-02-08 05:55:30.769619 | orchestrator | TASK [ceph-mon : Include start_monitor.yml] ************************************
2026-02-08 05:55:30.769630 | orchestrator | Sunday 08 February 2026 05:55:29 +0000 (0:00:00.444) 0:04:27.350 *******
2026-02-08 05:55:30.769641 | orchestrator | included: /ansible/roles/ceph-mon/tasks/start_monitor.yml for testbed-node-0
2026-02-08 05:55:30.769652 | orchestrator |
2026-02-08 05:55:30.769662 | orchestrator | TASK [ceph-mon : Ensure systemd service override directory exists] *************
2026-02-08 05:55:30.769673 | orchestrator | Sunday 08 February 2026 05:55:29 +0000 (0:00:00.592) 0:04:27.942 *******
2026-02-08 05:55:30.769684 | orchestrator | skipping: [testbed-node-0]
2026-02-08 05:55:30.769703 | orchestrator |
2026-02-08 05:55:30.769714 | orchestrator | TASK [ceph-mon : Add ceph-mon systemd service overrides] ***********************
2026-02-08 05:55:30.769725 | orchestrator | Sunday 08 February 2026 05:55:30 +0000 (0:00:00.144) 0:04:28.087 *******
2026-02-08 05:55:30.769736 | orchestrator | skipping: [testbed-node-0]
2026-02-08 05:55:30.769747 | orchestrator |
2026-02-08 05:55:30.769758 | orchestrator | TASK [ceph-mon : Include_tasks systemd.yml] ************************************
2026-02-08 05:55:30.769768 | orchestrator | Sunday 08 February 2026 05:55:30 +0000 (0:00:00.135) 0:04:28.222 *******
2026-02-08 05:55:30.769780 | orchestrator | included: /ansible/roles/ceph-mon/tasks/systemd.yml for testbed-node-0
2026-02-08 05:55:30.769791 | orchestrator |
2026-02-08 05:55:30.769812 | orchestrator | TASK [ceph-mon : Generate systemd unit file for mon container] *****************
2026-02-08 05:55:59.345649 | orchestrator | Sunday 08 February 2026 05:55:30 +0000 (0:00:00.580) 0:04:28.802 *******
2026-02-08 05:55:59.345768 | orchestrator | ok: [testbed-node-0]
2026-02-08 05:55:59.345785 | orchestrator |
2026-02-08 05:55:59.345798 | orchestrator | TASK [ceph-mon : Generate systemd ceph-mon target file] ************************
2026-02-08 05:55:59.345810 | orchestrator | Sunday 08 February 2026 05:55:32 +0000 (0:00:01.351) 0:04:30.154 *******
2026-02-08 05:55:59.345821 | orchestrator | ok: [testbed-node-0]
2026-02-08 05:55:59.345833 | orchestrator |
2026-02-08 05:55:59.345844 | orchestrator | TASK [ceph-mon : Enable ceph-mon.target] ***************************************
2026-02-08 05:55:59.345855 | orchestrator | Sunday 08 February 2026 05:55:33 +0000 (0:00:01.051) 0:04:31.206 *******
2026-02-08 05:55:59.345866 | orchestrator | ok: [testbed-node-0]
2026-02-08 05:55:59.345877 | orchestrator |
2026-02-08 05:55:59.345888 | orchestrator | TASK [ceph-mon : Start the monitor service] ************************************
2026-02-08 05:55:59.345899 | orchestrator | Sunday 08 February 2026 05:55:34 +0000 (0:00:01.399) 0:04:32.605 *******
2026-02-08 05:55:59.345910 | orchestrator | changed: [testbed-node-0]
2026-02-08 05:55:59.345923 | orchestrator |
2026-02-08 05:55:59.345934 | orchestrator | TASK [ceph-mon : Include_tasks ceph_keys.yml] **********************************
2026-02-08 05:55:59.345945 | orchestrator | Sunday 08 February 2026 05:55:37 +0000 (0:00:03.405) 0:04:36.011 *******
2026-02-08 05:55:59.345956 | orchestrator | included: /ansible/roles/ceph-mon/tasks/ceph_keys.yml for testbed-node-0
2026-02-08 05:55:59.345968 | orchestrator |
2026-02-08 05:55:59.345979 | orchestrator | TASK [ceph-mon : Waiting for the monitor(s) to form the quorum...]
************* 2026-02-08 05:55:59.345990 | orchestrator | Sunday 08 February 2026 05:55:38 +0000 (0:00:00.595) 0:04:36.606 ******* 2026-02-08 05:55:59.346001 | orchestrator | ok: [testbed-node-0] 2026-02-08 05:55:59.346012 | orchestrator | 2026-02-08 05:55:59.346084 | orchestrator | TASK [ceph-mon : Fetch ceph initial keys] ************************************** 2026-02-08 05:55:59.346103 | orchestrator | Sunday 08 February 2026 05:55:40 +0000 (0:00:01.520) 0:04:38.127 ******* 2026-02-08 05:55:59.346120 | orchestrator | ok: [testbed-node-0] 2026-02-08 05:55:59.346137 | orchestrator | 2026-02-08 05:55:59.346157 | orchestrator | TASK [ceph-mon : Include secure_cluster.yml] *********************************** 2026-02-08 05:55:59.346178 | orchestrator | Sunday 08 February 2026 05:55:42 +0000 (0:00:02.142) 0:04:40.270 ******* 2026-02-08 05:55:59.346198 | orchestrator | skipping: [testbed-node-0] 2026-02-08 05:55:59.346216 | orchestrator | 2026-02-08 05:55:59.346236 | orchestrator | TASK [ceph-mon : Set cluster configs] ****************************************** 2026-02-08 05:55:59.346254 | orchestrator | Sunday 08 February 2026 05:55:42 +0000 (0:00:00.142) 0:04:40.413 ******* 2026-02-08 05:55:59.346276 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__54010be108205bd9450ab34ee8857f1f35bfaf4e'}}, {'key': 'public_network', 'value': '192.168.16.0/20'}]) 2026-02-08 05:55:59.346318 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__54010be108205bd9450ab34ee8857f1f35bfaf4e'}}, {'key': 'cluster_network', 'value': 
'192.168.16.0/20'}]) 2026-02-08 05:55:59.346368 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__54010be108205bd9450ab34ee8857f1f35bfaf4e'}}, {'key': 'osd_pool_default_crush_rule', 'value': -1}]) 2026-02-08 05:55:59.346391 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__54010be108205bd9450ab34ee8857f1f35bfaf4e'}}, {'key': 'ms_bind_ipv6', 'value': 'False'}]) 2026-02-08 05:55:59.346414 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__54010be108205bd9450ab34ee8857f1f35bfaf4e'}}, {'key': 'ms_bind_ipv4', 'value': 'True'}]) 2026-02-08 05:55:59.346436 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__54010be108205bd9450ab34ee8857f1f35bfaf4e'}}, {'key': 'osd_crush_chooseleaf_type', 'value': '__omit_place_holder__54010be108205bd9450ab34ee8857f1f35bfaf4e'}])  2026-02-08 05:55:59.346454 | orchestrator | 2026-02-08 05:55:59.346487 | orchestrator | TASK [Start ceph mgr] ********************************************************** 2026-02-08 05:55:59.346500 | orchestrator | Sunday 08 February 2026 05:55:51 +0000 (0:00:09.343) 0:04:49.756 ******* 
2026-02-08 05:55:59.346511 | orchestrator | changed: [testbed-node-0] 2026-02-08 05:55:59.346522 | orchestrator | 2026-02-08 05:55:59.346559 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-02-08 05:55:59.346571 | orchestrator | Sunday 08 February 2026 05:55:53 +0000 (0:00:01.487) 0:04:51.244 ******* 2026-02-08 05:55:59.346582 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-02-08 05:55:59.346593 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2026-02-08 05:55:59.346604 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2026-02-08 05:55:59.346615 | orchestrator | 2026-02-08 05:55:59.346626 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-02-08 05:55:59.346637 | orchestrator | Sunday 08 February 2026 05:55:54 +0000 (0:00:01.209) 0:04:52.454 ******* 2026-02-08 05:55:59.346648 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-02-08 05:55:59.346659 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-02-08 05:55:59.346670 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-02-08 05:55:59.346681 | orchestrator | skipping: [testbed-node-0] 2026-02-08 05:55:59.346692 | orchestrator | 2026-02-08 05:55:59.346702 | orchestrator | TASK [Non container | waiting for the monitor to join the quorum...] *********** 2026-02-08 05:55:59.346713 | orchestrator | Sunday 08 February 2026 05:55:54 +0000 (0:00:00.475) 0:04:52.929 ******* 2026-02-08 05:55:59.346724 | orchestrator | skipping: [testbed-node-0] 2026-02-08 05:55:59.346735 | orchestrator | 2026-02-08 05:55:59.346746 | orchestrator | TASK [Container | waiting for the containerized monitor to join the quorum...] 
*** 2026-02-08 05:55:59.346757 | orchestrator | Sunday 08 February 2026 05:55:55 +0000 (0:00:00.128) 0:04:53.058 ******* 2026-02-08 05:55:59.346768 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_handler_task_start) in callback 2026-02-08 05:55:59.346779 | orchestrator | plugin (): 'NoneType' object is not subscriptable 2026-02-08 05:55:59.346811 | orchestrator | ok: [testbed-node-0] 2026-02-08 05:55:59.346822 | orchestrator | 2026-02-08 05:55:59.346833 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-02-08 05:55:59.346843 | orchestrator | Sunday 08 February 2026 05:55:56 +0000 (0:00:01.381) 0:04:54.439 ******* 2026-02-08 05:55:59.346854 | orchestrator | skipping: [testbed-node-0] 2026-02-08 05:55:59.346865 | orchestrator | 2026-02-08 05:55:59.346876 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] ********************************** 2026-02-08 05:55:59.346887 | orchestrator | Sunday 08 February 2026 05:55:56 +0000 (0:00:00.149) 0:04:54.589 ******* 2026-02-08 05:55:59.346898 | orchestrator | skipping: [testbed-node-0] 2026-02-08 05:55:59.346909 | orchestrator | 2026-02-08 05:55:59.346920 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] ********************************** 2026-02-08 05:55:59.346931 | orchestrator | Sunday 08 February 2026 05:55:56 +0000 (0:00:00.449) 0:04:55.038 ******* 2026-02-08 05:55:59.346942 | orchestrator | skipping: [testbed-node-0] 2026-02-08 05:55:59.346953 | orchestrator | 2026-02-08 05:55:59.346963 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] ********************************** 2026-02-08 05:55:59.346981 | orchestrator | Sunday 08 February 2026 05:55:57 +0000 (0:00:00.134) 0:04:55.172 ******* 2026-02-08 05:55:59.346992 | orchestrator | skipping: [testbed-node-0] 2026-02-08 05:55:59.347003 | orchestrator | 2026-02-08 05:55:59.347013 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] 
********************************** 2026-02-08 05:55:59.347024 | orchestrator | Sunday 08 February 2026 05:55:57 +0000 (0:00:00.147) 0:04:55.320 ******* 2026-02-08 05:55:59.347035 | orchestrator | skipping: [testbed-node-0] 2026-02-08 05:55:59.347046 | orchestrator | 2026-02-08 05:55:59.347057 | orchestrator | RUNNING HANDLER [ceph-handler : Rbdmirrors handler] **************************** 2026-02-08 05:55:59.347067 | orchestrator | Sunday 08 February 2026 05:55:57 +0000 (0:00:00.155) 0:04:55.476 ******* 2026-02-08 05:55:59.347078 | orchestrator | skipping: [testbed-node-0] 2026-02-08 05:55:59.347089 | orchestrator | 2026-02-08 05:55:59.347100 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] ********************************** 2026-02-08 05:55:59.347111 | orchestrator | Sunday 08 February 2026 05:55:57 +0000 (0:00:00.127) 0:04:55.603 ******* 2026-02-08 05:55:59.347121 | orchestrator | skipping: [testbed-node-0] 2026-02-08 05:55:59.347132 | orchestrator | 2026-02-08 05:55:59.347143 | orchestrator | PLAY [Upgrade ceph mon cluster] ************************************************ 2026-02-08 05:55:59.347154 | orchestrator | 2026-02-08 05:55:59.347165 | orchestrator | TASK [Remove ceph aliases] ***************************************************** 2026-02-08 05:55:59.347175 | orchestrator | Sunday 08 February 2026 05:55:58 +0000 (0:00:00.592) 0:04:56.196 ******* 2026-02-08 05:55:59.347186 | orchestrator | ok: [testbed-node-1] 2026-02-08 05:55:59.347197 | orchestrator | 2026-02-08 05:55:59.347208 | orchestrator | TASK [Set mon_host_count] ****************************************************** 2026-02-08 05:55:59.347219 | orchestrator | Sunday 08 February 2026 05:55:58 +0000 (0:00:00.469) 0:04:56.665 ******* 2026-02-08 05:55:59.347230 | orchestrator | ok: [testbed-node-1] 2026-02-08 05:55:59.347240 | orchestrator | 2026-02-08 05:55:59.347251 | orchestrator | TASK [Fail when less than three monitors] ************************************** 2026-02-08 
05:55:59.347262 | orchestrator | Sunday 08 February 2026 05:55:58 +0000 (0:00:00.141) 0:04:56.807 ******* 2026-02-08 05:55:59.347273 | orchestrator | skipping: [testbed-node-1] 2026-02-08 05:55:59.347284 | orchestrator | 2026-02-08 05:55:59.347295 | orchestrator | TASK [Select a running monitor] ************************************************ 2026-02-08 05:55:59.347306 | orchestrator | Sunday 08 February 2026 05:55:58 +0000 (0:00:00.136) 0:04:56.943 ******* 2026-02-08 05:55:59.347317 | orchestrator | ok: [testbed-node-1] 2026-02-08 05:55:59.347328 | orchestrator | 2026-02-08 05:55:59.347341 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-02-08 05:55:59.347360 | orchestrator | Sunday 08 February 2026 05:55:59 +0000 (0:00:00.161) 0:04:57.105 ******* 2026-02-08 05:55:59.347377 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-1 2026-02-08 05:55:59.347404 | orchestrator | 2026-02-08 05:55:59.347434 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-02-08 05:56:07.715845 | orchestrator | Sunday 08 February 2026 05:55:59 +0000 (0:00:00.270) 0:04:57.375 ******* 2026-02-08 05:56:07.715956 | orchestrator | ok: [testbed-node-1] 2026-02-08 05:56:07.715975 | orchestrator | 2026-02-08 05:56:07.715989 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-02-08 05:56:07.716001 | orchestrator | Sunday 08 February 2026 05:56:00 +0000 (0:00:00.700) 0:04:58.075 ******* 2026-02-08 05:56:07.716012 | orchestrator | ok: [testbed-node-1] 2026-02-08 05:56:07.716023 | orchestrator | 2026-02-08 05:56:07.716035 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-02-08 05:56:07.716046 | orchestrator | Sunday 08 February 2026 05:56:00 +0000 (0:00:00.157) 0:04:58.233 ******* 2026-02-08 05:56:07.716057 | orchestrator | ok: [testbed-node-1] 2026-02-08 
05:56:07.716068 | orchestrator | 2026-02-08 05:56:07.716079 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-02-08 05:56:07.716090 | orchestrator | Sunday 08 February 2026 05:56:00 +0000 (0:00:00.472) 0:04:58.706 ******* 2026-02-08 05:56:07.716101 | orchestrator | ok: [testbed-node-1] 2026-02-08 05:56:07.716111 | orchestrator | 2026-02-08 05:56:07.716122 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-02-08 05:56:07.716133 | orchestrator | Sunday 08 February 2026 05:56:00 +0000 (0:00:00.170) 0:04:58.876 ******* 2026-02-08 05:56:07.716144 | orchestrator | ok: [testbed-node-1] 2026-02-08 05:56:07.716154 | orchestrator | 2026-02-08 05:56:07.716165 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-02-08 05:56:07.716176 | orchestrator | Sunday 08 February 2026 05:56:00 +0000 (0:00:00.158) 0:04:59.035 ******* 2026-02-08 05:56:07.716187 | orchestrator | ok: [testbed-node-1] 2026-02-08 05:56:07.716198 | orchestrator | 2026-02-08 05:56:07.716209 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-02-08 05:56:07.716220 | orchestrator | Sunday 08 February 2026 05:56:01 +0000 (0:00:00.183) 0:04:59.218 ******* 2026-02-08 05:56:07.716231 | orchestrator | skipping: [testbed-node-1] 2026-02-08 05:56:07.716243 | orchestrator | 2026-02-08 05:56:07.716254 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2026-02-08 05:56:07.716265 | orchestrator | Sunday 08 February 2026 05:56:01 +0000 (0:00:00.161) 0:04:59.379 ******* 2026-02-08 05:56:07.716276 | orchestrator | ok: [testbed-node-1] 2026-02-08 05:56:07.716287 | orchestrator | 2026-02-08 05:56:07.716298 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2026-02-08 05:56:07.716309 | orchestrator | Sunday 08 February 2026 05:56:01 
+0000 (0:00:00.144) 0:04:59.524 ******* 2026-02-08 05:56:07.716320 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-08 05:56:07.716331 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1) 2026-02-08 05:56:07.716342 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-08 05:56:07.716353 | orchestrator | 2026-02-08 05:56:07.716364 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2026-02-08 05:56:07.716376 | orchestrator | Sunday 08 February 2026 05:56:02 +0000 (0:00:00.957) 0:05:00.481 ******* 2026-02-08 05:56:07.716390 | orchestrator | ok: [testbed-node-1] 2026-02-08 05:56:07.716402 | orchestrator | 2026-02-08 05:56:07.716415 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-02-08 05:56:07.716445 | orchestrator | Sunday 08 February 2026 05:56:02 +0000 (0:00:00.256) 0:05:00.738 ******* 2026-02-08 05:56:07.716459 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-08 05:56:07.716472 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1) 2026-02-08 05:56:07.716486 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-08 05:56:07.716499 | orchestrator | 2026-02-08 05:56:07.716512 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-02-08 05:56:07.716568 | orchestrator | Sunday 08 February 2026 05:56:05 +0000 (0:00:02.332) 0:05:03.070 ******* 2026-02-08 05:56:07.716583 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2026-02-08 05:56:07.716596 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2026-02-08 05:56:07.716609 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2026-02-08 05:56:07.716620 | orchestrator | skipping: [testbed-node-1] 
2026-02-08 05:56:07.716631 | orchestrator | 2026-02-08 05:56:07.716641 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-02-08 05:56:07.716652 | orchestrator | Sunday 08 February 2026 05:56:05 +0000 (0:00:00.493) 0:05:03.564 ******* 2026-02-08 05:56:07.716664 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-02-08 05:56:07.716678 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-02-08 05:56:07.716689 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-02-08 05:56:07.716700 | orchestrator | skipping: [testbed-node-1] 2026-02-08 05:56:07.716711 | orchestrator | 2026-02-08 05:56:07.716722 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-02-08 05:56:07.716734 | orchestrator | Sunday 08 February 2026 05:56:06 +0000 (0:00:00.990) 0:05:04.554 ******* 2026-02-08 05:56:07.716764 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-08 05:56:07.716780 | orchestrator | skipping: [testbed-node-1] 
=> (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-08 05:56:07.716791 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-08 05:56:07.716803 | orchestrator | skipping: [testbed-node-1] 2026-02-08 05:56:07.716814 | orchestrator | 2026-02-08 05:56:07.716825 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-02-08 05:56:07.716836 | orchestrator | Sunday 08 February 2026 05:56:06 +0000 (0:00:00.168) 0:05:04.723 ******* 2026-02-08 05:56:07.716849 | orchestrator | ok: [testbed-node-1] => (item={'changed': False, 'stdout': 'd0204e5b0336', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-02-08 05:56:03.346391', 'end': '2026-02-08 05:56:03.389118', 'delta': '0:00:00.042727', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['d0204e5b0336'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2026-02-08 
05:56:07.716877 | orchestrator | ok: [testbed-node-1] => (item={'changed': False, 'stdout': 'd108d94fad94', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-02-08 05:56:04.198614', 'end': '2026-02-08 05:56:04.248280', 'delta': '0:00:00.049666', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['d108d94fad94'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2026-02-08 05:56:07.716890 | orchestrator | ok: [testbed-node-1] => (item={'changed': False, 'stdout': '83b6b87b68f7', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-02-08 05:56:04.807816', 'end': '2026-02-08 05:56:04.853551', 'delta': '0:00:00.045735', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['83b6b87b68f7'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2026-02-08 05:56:07.716901 | orchestrator | 2026-02-08 05:56:07.716913 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-02-08 05:56:07.716924 | orchestrator | Sunday 08 February 2026 05:56:07 +0000 (0:00:00.518) 0:05:05.242 ******* 2026-02-08 05:56:07.716935 | orchestrator | ok: [testbed-node-1] 2026-02-08 05:56:07.716946 | orchestrator | 2026-02-08 05:56:07.716957 | 
orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-02-08 05:56:07.716967 | orchestrator | Sunday 08 February 2026 05:56:07 +0000 (0:00:00.272) 0:05:05.514 ******* 2026-02-08 05:56:07.716978 | orchestrator | skipping: [testbed-node-1] 2026-02-08 05:56:07.716990 | orchestrator | 2026-02-08 05:56:07.717001 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2026-02-08 05:56:07.717018 | orchestrator | Sunday 08 February 2026 05:56:07 +0000 (0:00:00.243) 0:05:05.758 ******* 2026-02-08 05:56:11.093442 | orchestrator | ok: [testbed-node-1] 2026-02-08 05:56:11.093639 | orchestrator | 2026-02-08 05:56:11.093668 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2026-02-08 05:56:11.093690 | orchestrator | Sunday 08 February 2026 05:56:07 +0000 (0:00:00.158) 0:05:05.916 ******* 2026-02-08 05:56:11.093711 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] 2026-02-08 05:56:11.093730 | orchestrator | 2026-02-08 05:56:11.093750 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-02-08 05:56:11.093762 | orchestrator | Sunday 08 February 2026 05:56:08 +0000 (0:00:00.969) 0:05:06.886 ******* 2026-02-08 05:56:11.093774 | orchestrator | ok: [testbed-node-1] 2026-02-08 05:56:11.093785 | orchestrator | 2026-02-08 05:56:11.093796 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-02-08 05:56:11.093807 | orchestrator | Sunday 08 February 2026 05:56:08 +0000 (0:00:00.149) 0:05:07.036 ******* 2026-02-08 05:56:11.093818 | orchestrator | skipping: [testbed-node-1] 2026-02-08 05:56:11.093829 | orchestrator | 2026-02-08 05:56:11.093840 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-02-08 05:56:11.093851 | orchestrator | Sunday 08 February 2026 05:56:09 +0000 (0:00:00.242) 
0:05:07.278 ******* 2026-02-08 05:56:11.093862 | orchestrator | skipping: [testbed-node-1] 2026-02-08 05:56:11.093872 | orchestrator | 2026-02-08 05:56:11.093883 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-02-08 05:56:11.093921 | orchestrator | Sunday 08 February 2026 05:56:09 +0000 (0:00:00.227) 0:05:07.505 ******* 2026-02-08 05:56:11.093932 | orchestrator | skipping: [testbed-node-1] 2026-02-08 05:56:11.093944 | orchestrator | 2026-02-08 05:56:11.093957 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-02-08 05:56:11.093970 | orchestrator | Sunday 08 February 2026 05:56:09 +0000 (0:00:00.166) 0:05:07.671 ******* 2026-02-08 05:56:11.093983 | orchestrator | skipping: [testbed-node-1] 2026-02-08 05:56:11.093995 | orchestrator | 2026-02-08 05:56:11.094008 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-02-08 05:56:11.094105 | orchestrator | Sunday 08 February 2026 05:56:09 +0000 (0:00:00.135) 0:05:07.807 ******* 2026-02-08 05:56:11.094129 | orchestrator | skipping: [testbed-node-1] 2026-02-08 05:56:11.094150 | orchestrator | 2026-02-08 05:56:11.094181 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-02-08 05:56:11.094195 | orchestrator | Sunday 08 February 2026 05:56:09 +0000 (0:00:00.133) 0:05:07.940 ******* 2026-02-08 05:56:11.094208 | orchestrator | skipping: [testbed-node-1] 2026-02-08 05:56:11.094221 | orchestrator | 2026-02-08 05:56:11.094234 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-02-08 05:56:11.094247 | orchestrator | Sunday 08 February 2026 05:56:10 +0000 (0:00:00.123) 0:05:08.063 ******* 2026-02-08 05:56:11.094259 | orchestrator | skipping: [testbed-node-1] 2026-02-08 05:56:11.094273 | orchestrator | 2026-02-08 05:56:11.094287 | orchestrator | TASK [ceph-facts : Resolve 
bluestore_wal_device link(s)] *********************** 2026-02-08 05:56:11.094301 | orchestrator | Sunday 08 February 2026 05:56:10 +0000 (0:00:00.136) 0:05:08.200 ******* 2026-02-08 05:56:11.094312 | orchestrator | skipping: [testbed-node-1] 2026-02-08 05:56:11.094323 | orchestrator | 2026-02-08 05:56:11.094334 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-02-08 05:56:11.094361 | orchestrator | Sunday 08 February 2026 05:56:10 +0000 (0:00:00.505) 0:05:08.706 ******* 2026-02-08 05:56:11.094372 | orchestrator | skipping: [testbed-node-1] 2026-02-08 05:56:11.094383 | orchestrator | 2026-02-08 05:56:11.094394 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-02-08 05:56:11.094411 | orchestrator | Sunday 08 February 2026 05:56:10 +0000 (0:00:00.153) 0:05:08.859 ******* 2026-02-08 05:56:11.094433 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-08 05:56:11.094455 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-08 05:56:11.094473 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 
None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-08 05:56:11.094520 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-08-02-32-44-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-02-08 05:56:11.094586 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-08 05:56:11.094608 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-08 05:56:11.094628 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': 
None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-08 05:56:11.094660 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bd3944a6-94a2-4419-9995-37d6054a2669', 'scsi-SQEMU_QEMU_HARDDISK_bd3944a6-94a2-4419-9995-37d6054a2669'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'bd3944a6', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bd3944a6-94a2-4419-9995-37d6054a2669-part16', 'scsi-SQEMU_QEMU_HARDDISK_bd3944a6-94a2-4419-9995-37d6054a2669-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bd3944a6-94a2-4419-9995-37d6054a2669-part14', 'scsi-SQEMU_QEMU_HARDDISK_bd3944a6-94a2-4419-9995-37d6054a2669-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bd3944a6-94a2-4419-9995-37d6054a2669-part15', 'scsi-SQEMU_QEMU_HARDDISK_bd3944a6-94a2-4419-9995-37d6054a2669-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bd3944a6-94a2-4419-9995-37d6054a2669-part1', 'scsi-SQEMU_QEMU_HARDDISK_bd3944a6-94a2-4419-9995-37d6054a2669-part1'], 'uuids': 
['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-02-08 05:56:11.094677 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-08 05:56:11.094707 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-08 05:56:11.346307 | orchestrator | skipping: [testbed-node-1] 2026-02-08 05:56:11.346409 | orchestrator | 2026-02-08 05:56:11.346425 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-02-08 05:56:11.346438 | orchestrator | Sunday 08 February 2026 05:56:11 +0000 (0:00:00.270) 0:05:09.130 ******* 2026-02-08 05:56:11.346453 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'virtual': 1, 
'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-08 05:56:11.346469 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-08 05:56:11.346499 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-08 05:56:11.346513 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': 
['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-08-02-32-44-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-08 05:56:11.346624 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-08 05:56:11.346679 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-08 05:56:11.346726 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 
'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-08 05:56:11.346760 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bd3944a6-94a2-4419-9995-37d6054a2669', 'scsi-SQEMU_QEMU_HARDDISK_bd3944a6-94a2-4419-9995-37d6054a2669'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'bd3944a6', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bd3944a6-94a2-4419-9995-37d6054a2669-part16', 'scsi-SQEMU_QEMU_HARDDISK_bd3944a6-94a2-4419-9995-37d6054a2669-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bd3944a6-94a2-4419-9995-37d6054a2669-part14', 'scsi-SQEMU_QEMU_HARDDISK_bd3944a6-94a2-4419-9995-37d6054a2669-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bd3944a6-94a2-4419-9995-37d6054a2669-part15', 
'scsi-SQEMU_QEMU_HARDDISK_bd3944a6-94a2-4419-9995-37d6054a2669-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bd3944a6-94a2-4419-9995-37d6054a2669-part1', 'scsi-SQEMU_QEMU_HARDDISK_bd3944a6-94a2-4419-9995-37d6054a2669-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-08 05:56:11.346783 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-08 05:56:11.346825 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 
'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-08 05:56:26.456778 | orchestrator | skipping: [testbed-node-1] 2026-02-08 05:56:26.456895 | orchestrator | 2026-02-08 05:56:26.456912 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-02-08 05:56:26.456926 | orchestrator | Sunday 08 February 2026 05:56:11 +0000 (0:00:00.251) 0:05:09.382 ******* 2026-02-08 05:56:26.456937 | orchestrator | ok: [testbed-node-1] 2026-02-08 05:56:26.456949 | orchestrator | 2026-02-08 05:56:26.456961 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2026-02-08 05:56:26.456972 | orchestrator | Sunday 08 February 2026 05:56:11 +0000 (0:00:00.549) 0:05:09.931 ******* 2026-02-08 05:56:26.456983 | orchestrator | ok: [testbed-node-1] 2026-02-08 05:56:26.456994 | orchestrator | 2026-02-08 05:56:26.457005 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-02-08 05:56:26.457016 | orchestrator | Sunday 08 February 2026 05:56:12 +0000 (0:00:00.153) 0:05:10.085 ******* 2026-02-08 05:56:26.457028 | orchestrator | ok: [testbed-node-1] 2026-02-08 05:56:26.457047 | orchestrator | 2026-02-08 05:56:26.457065 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-02-08 05:56:26.457083 | orchestrator | Sunday 08 February 2026 05:56:12 +0000 (0:00:00.522) 0:05:10.608 ******* 2026-02-08 05:56:26.457101 | orchestrator | skipping: [testbed-node-1] 2026-02-08 05:56:26.457119 | orchestrator | 2026-02-08 05:56:26.457137 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-02-08 05:56:26.457155 | orchestrator | Sunday 08 February 2026 05:56:12 +0000 (0:00:00.135) 0:05:10.743 ******* 2026-02-08 05:56:26.457173 
| orchestrator | skipping: [testbed-node-1] 2026-02-08 05:56:26.457191 | orchestrator | 2026-02-08 05:56:26.457210 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-02-08 05:56:26.457229 | orchestrator | Sunday 08 February 2026 05:56:12 +0000 (0:00:00.246) 0:05:10.989 ******* 2026-02-08 05:56:26.457248 | orchestrator | skipping: [testbed-node-1] 2026-02-08 05:56:26.457268 | orchestrator | 2026-02-08 05:56:26.457287 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-02-08 05:56:26.457307 | orchestrator | Sunday 08 February 2026 05:56:13 +0000 (0:00:00.152) 0:05:11.142 ******* 2026-02-08 05:56:26.457324 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-0) 2026-02-08 05:56:26.457344 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1) 2026-02-08 05:56:26.457363 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-2) 2026-02-08 05:56:26.457380 | orchestrator | 2026-02-08 05:56:26.457400 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-02-08 05:56:26.457421 | orchestrator | Sunday 08 February 2026 05:56:14 +0000 (0:00:00.989) 0:05:12.132 ******* 2026-02-08 05:56:26.457460 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2026-02-08 05:56:26.457482 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2026-02-08 05:56:26.457501 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2026-02-08 05:56:26.457584 | orchestrator | skipping: [testbed-node-1] 2026-02-08 05:56:26.457599 | orchestrator | 2026-02-08 05:56:26.457612 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-02-08 05:56:26.457625 | orchestrator | Sunday 08 February 2026 05:56:14 +0000 (0:00:00.170) 0:05:12.302 ******* 2026-02-08 05:56:26.457637 | orchestrator | skipping: [testbed-node-1] 2026-02-08 05:56:26.457649 | 
orchestrator | 2026-02-08 05:56:26.457662 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-02-08 05:56:26.457676 | orchestrator | Sunday 08 February 2026 05:56:14 +0000 (0:00:00.189) 0:05:12.492 ******* 2026-02-08 05:56:26.457688 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-08 05:56:26.457701 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1) 2026-02-08 05:56:26.457712 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-08 05:56:26.457745 | orchestrator | ok: [testbed-node-1 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-02-08 05:56:26.457756 | orchestrator | ok: [testbed-node-1 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-02-08 05:56:26.457768 | orchestrator | ok: [testbed-node-1 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-02-08 05:56:26.457780 | orchestrator | ok: [testbed-node-1 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-02-08 05:56:26.457791 | orchestrator | 2026-02-08 05:56:26.457802 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-02-08 05:56:26.457812 | orchestrator | Sunday 08 February 2026 05:56:15 +0000 (0:00:01.473) 0:05:13.966 ******* 2026-02-08 05:56:26.457823 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-08 05:56:26.457834 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1) 2026-02-08 05:56:26.457845 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-08 05:56:26.457856 | orchestrator | ok: [testbed-node-1 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-02-08 05:56:26.457867 | orchestrator | ok: [testbed-node-1 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-02-08 05:56:26.457878 | orchestrator | 
ok: [testbed-node-1 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-02-08 05:56:26.457888 | orchestrator | ok: [testbed-node-1 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-02-08 05:56:26.457913 | orchestrator | 2026-02-08 05:56:26.457924 | orchestrator | TASK [Get ceph cluster status] ************************************************* 2026-02-08 05:56:26.457935 | orchestrator | Sunday 08 February 2026 05:56:17 +0000 (0:00:01.621) 0:05:15.588 ******* 2026-02-08 05:56:26.457946 | orchestrator | skipping: [testbed-node-1] 2026-02-08 05:56:26.457957 | orchestrator | 2026-02-08 05:56:26.457968 | orchestrator | TASK [Display ceph health detail] ********************************************** 2026-02-08 05:56:26.457979 | orchestrator | Sunday 08 February 2026 05:56:17 +0000 (0:00:00.231) 0:05:15.820 ******* 2026-02-08 05:56:26.457990 | orchestrator | skipping: [testbed-node-1] 2026-02-08 05:56:26.458001 | orchestrator | 2026-02-08 05:56:26.458098 | orchestrator | TASK [Fail if cluster isn't in an acceptable state] **************************** 2026-02-08 05:56:26.458114 | orchestrator | Sunday 08 February 2026 05:56:18 +0000 (0:00:00.242) 0:05:16.063 ******* 2026-02-08 05:56:26.458125 | orchestrator | skipping: [testbed-node-1] 2026-02-08 05:56:26.458137 | orchestrator | 2026-02-08 05:56:26.458148 | orchestrator | TASK [Get the ceph quorum status] ********************************************** 2026-02-08 05:56:26.458159 | orchestrator | Sunday 08 February 2026 05:56:18 +0000 (0:00:00.136) 0:05:16.199 ******* 2026-02-08 05:56:26.458170 | orchestrator | skipping: [testbed-node-1] 2026-02-08 05:56:26.458181 | orchestrator | 2026-02-08 05:56:26.458192 | orchestrator | TASK [Fail if the cluster quorum isn't in an acceptable state] ***************** 2026-02-08 05:56:26.458203 | orchestrator | Sunday 08 February 2026 05:56:18 +0000 (0:00:00.232) 0:05:16.432 ******* 2026-02-08 05:56:26.458214 | orchestrator | skipping: [testbed-node-1] 2026-02-08 
05:56:26.458225 | orchestrator | 2026-02-08 05:56:26.458256 | orchestrator | TASK [Ensure /var/lib/ceph/bootstrap-rbd-mirror is present] ******************** 2026-02-08 05:56:26.458267 | orchestrator | Sunday 08 February 2026 05:56:18 +0000 (0:00:00.149) 0:05:16.581 ******* 2026-02-08 05:56:26.458278 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2026-02-08 05:56:26.458289 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2026-02-08 05:56:26.458300 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2026-02-08 05:56:26.458311 | orchestrator | skipping: [testbed-node-1] 2026-02-08 05:56:26.458322 | orchestrator | 2026-02-08 05:56:26.458332 | orchestrator | TASK [Create potentially missing keys (rbd and rbd-mirror)] ******************** 2026-02-08 05:56:26.458343 | orchestrator | Sunday 08 February 2026 05:56:18 +0000 (0:00:00.448) 0:05:17.030 ******* 2026-02-08 05:56:26.458354 | orchestrator | skipping: [testbed-node-1] => (item=['bootstrap-rbd', 'testbed-node-0'])  2026-02-08 05:56:26.458365 | orchestrator | skipping: [testbed-node-1] => (item=['bootstrap-rbd', 'testbed-node-1'])  2026-02-08 05:56:26.458376 | orchestrator | skipping: [testbed-node-1] => (item=['bootstrap-rbd', 'testbed-node-2'])  2026-02-08 05:56:26.458387 | orchestrator | skipping: [testbed-node-1] => (item=['bootstrap-rbd-mirror', 'testbed-node-0'])  2026-02-08 05:56:26.458397 | orchestrator | skipping: [testbed-node-1] => (item=['bootstrap-rbd-mirror', 'testbed-node-1'])  2026-02-08 05:56:26.458408 | orchestrator | skipping: [testbed-node-1] => (item=['bootstrap-rbd-mirror', 'testbed-node-2'])  2026-02-08 05:56:26.458419 | orchestrator | skipping: [testbed-node-1] 2026-02-08 05:56:26.458430 | orchestrator | 2026-02-08 05:56:26.458441 | orchestrator | TASK [Stop ceph mon] *********************************************************** 2026-02-08 05:56:26.458452 | orchestrator | Sunday 08 February 2026 05:56:20 +0000 (0:00:01.050) 
0:05:18.081 ******* 2026-02-08 05:56:26.458463 | orchestrator | changed: [testbed-node-1] => (item=testbed-node-1) 2026-02-08 05:56:26.458474 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1) 2026-02-08 05:56:26.458485 | orchestrator | 2026-02-08 05:56:26.458496 | orchestrator | TASK [Mask the mgr service] **************************************************** 2026-02-08 05:56:26.458507 | orchestrator | Sunday 08 February 2026 05:56:22 +0000 (0:00:02.418) 0:05:20.499 ******* 2026-02-08 05:56:26.458518 | orchestrator | changed: [testbed-node-1] 2026-02-08 05:56:26.458528 | orchestrator | 2026-02-08 05:56:26.458558 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-02-08 05:56:26.458570 | orchestrator | Sunday 08 February 2026 05:56:23 +0000 (0:00:01.429) 0:05:21.929 ******* 2026-02-08 05:56:26.458580 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-1 2026-02-08 05:56:26.458592 | orchestrator | 2026-02-08 05:56:26.458603 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-02-08 05:56:26.458614 | orchestrator | Sunday 08 February 2026 05:56:24 +0000 (0:00:00.498) 0:05:22.427 ******* 2026-02-08 05:56:26.458625 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-1 2026-02-08 05:56:26.458635 | orchestrator | 2026-02-08 05:56:26.458646 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-02-08 05:56:26.458657 | orchestrator | Sunday 08 February 2026 05:56:24 +0000 (0:00:00.207) 0:05:22.635 ******* 2026-02-08 05:56:26.458667 | orchestrator | ok: [testbed-node-1] 2026-02-08 05:56:26.458678 | orchestrator | 2026-02-08 05:56:26.458689 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-02-08 05:56:26.458700 | orchestrator | Sunday 08 February 2026 05:56:25 
+0000 (0:00:00.526) 0:05:23.161 ******* 2026-02-08 05:56:26.458711 | orchestrator | skipping: [testbed-node-1] 2026-02-08 05:56:26.458721 | orchestrator | 2026-02-08 05:56:26.458732 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-02-08 05:56:26.458743 | orchestrator | Sunday 08 February 2026 05:56:25 +0000 (0:00:00.164) 0:05:23.325 ******* 2026-02-08 05:56:26.458753 | orchestrator | skipping: [testbed-node-1] 2026-02-08 05:56:26.458764 | orchestrator | 2026-02-08 05:56:26.458775 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-02-08 05:56:26.458793 | orchestrator | Sunday 08 February 2026 05:56:25 +0000 (0:00:00.140) 0:05:23.466 ******* 2026-02-08 05:56:26.458804 | orchestrator | skipping: [testbed-node-1] 2026-02-08 05:56:26.458815 | orchestrator | 2026-02-08 05:56:26.458826 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-02-08 05:56:26.458837 | orchestrator | Sunday 08 February 2026 05:56:25 +0000 (0:00:00.144) 0:05:23.610 ******* 2026-02-08 05:56:26.458847 | orchestrator | ok: [testbed-node-1] 2026-02-08 05:56:26.458858 | orchestrator | 2026-02-08 05:56:26.458869 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-02-08 05:56:26.458880 | orchestrator | Sunday 08 February 2026 05:56:26 +0000 (0:00:00.600) 0:05:24.210 ******* 2026-02-08 05:56:26.458891 | orchestrator | skipping: [testbed-node-1] 2026-02-08 05:56:26.458902 | orchestrator | 2026-02-08 05:56:26.458912 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-02-08 05:56:26.458923 | orchestrator | Sunday 08 February 2026 05:56:26 +0000 (0:00:00.159) 0:05:24.370 ******* 2026-02-08 05:56:26.458942 | orchestrator | skipping: [testbed-node-1] 2026-02-08 05:56:37.774243 | orchestrator | 2026-02-08 05:56:37.774362 | orchestrator | TASK [ceph-handler : 
Check for a ceph-crash container] ************************* 2026-02-08 05:56:37.774429 | orchestrator | Sunday 08 February 2026 05:56:26 +0000 (0:00:00.125) 0:05:24.496 ******* 2026-02-08 05:56:37.774443 | orchestrator | ok: [testbed-node-1] 2026-02-08 05:56:37.774456 | orchestrator | 2026-02-08 05:56:37.774468 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-02-08 05:56:37.774480 | orchestrator | Sunday 08 February 2026 05:56:27 +0000 (0:00:00.558) 0:05:25.054 ******* 2026-02-08 05:56:37.774491 | orchestrator | ok: [testbed-node-1] 2026-02-08 05:56:37.774502 | orchestrator | 2026-02-08 05:56:37.774513 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-02-08 05:56:37.774525 | orchestrator | Sunday 08 February 2026 05:56:27 +0000 (0:00:00.549) 0:05:25.604 ******* 2026-02-08 05:56:37.774536 | orchestrator | skipping: [testbed-node-1] 2026-02-08 05:56:37.774626 | orchestrator | 2026-02-08 05:56:37.774643 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-02-08 05:56:37.774654 | orchestrator | Sunday 08 February 2026 05:56:27 +0000 (0:00:00.130) 0:05:25.735 ******* 2026-02-08 05:56:37.774665 | orchestrator | ok: [testbed-node-1] 2026-02-08 05:56:37.774677 | orchestrator | 2026-02-08 05:56:37.774688 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-02-08 05:56:37.774699 | orchestrator | Sunday 08 February 2026 05:56:28 +0000 (0:00:00.458) 0:05:26.193 ******* 2026-02-08 05:56:37.774710 | orchestrator | skipping: [testbed-node-1] 2026-02-08 05:56:37.774721 | orchestrator | 2026-02-08 05:56:37.774732 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-02-08 05:56:37.774743 | orchestrator | Sunday 08 February 2026 05:56:28 +0000 (0:00:00.132) 0:05:26.326 ******* 2026-02-08 05:56:37.774753 | orchestrator | skipping: 
[testbed-node-1] 2026-02-08 05:56:37.774764 | orchestrator | 2026-02-08 05:56:37.774776 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-02-08 05:56:37.774789 | orchestrator | Sunday 08 February 2026 05:56:28 +0000 (0:00:00.138) 0:05:26.465 ******* 2026-02-08 05:56:37.774802 | orchestrator | skipping: [testbed-node-1] 2026-02-08 05:56:37.774814 | orchestrator | 2026-02-08 05:56:37.774827 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-02-08 05:56:37.774840 | orchestrator | Sunday 08 February 2026 05:56:28 +0000 (0:00:00.130) 0:05:26.595 ******* 2026-02-08 05:56:37.774852 | orchestrator | skipping: [testbed-node-1] 2026-02-08 05:56:37.774865 | orchestrator | 2026-02-08 05:56:37.774877 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-02-08 05:56:37.774896 | orchestrator | Sunday 08 February 2026 05:56:28 +0000 (0:00:00.147) 0:05:26.743 ******* 2026-02-08 05:56:37.774909 | orchestrator | skipping: [testbed-node-1] 2026-02-08 05:56:37.774923 | orchestrator | 2026-02-08 05:56:37.774935 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-02-08 05:56:37.774972 | orchestrator | Sunday 08 February 2026 05:56:28 +0000 (0:00:00.151) 0:05:26.894 ******* 2026-02-08 05:56:37.774984 | orchestrator | ok: [testbed-node-1] 2026-02-08 05:56:37.774997 | orchestrator | 2026-02-08 05:56:37.775009 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-02-08 05:56:37.775022 | orchestrator | Sunday 08 February 2026 05:56:29 +0000 (0:00:00.180) 0:05:27.075 ******* 2026-02-08 05:56:37.775035 | orchestrator | ok: [testbed-node-1] 2026-02-08 05:56:37.775047 | orchestrator | 2026-02-08 05:56:37.775059 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-02-08 05:56:37.775072 | 
orchestrator | Sunday 08 February 2026 05:56:29 +0000 (0:00:00.165) 0:05:27.241 *******
2026-02-08 05:56:37.775085 | orchestrator | ok: [testbed-node-1]
2026-02-08 05:56:37.775098 | orchestrator |
2026-02-08 05:56:37.775110 | orchestrator | TASK [ceph-common : Include configure_repository.yml] **************************
2026-02-08 05:56:37.775123 | orchestrator | Sunday 08 February 2026 05:56:29 +0000 (0:00:00.250) 0:05:27.491 *******
2026-02-08 05:56:37.775136 | orchestrator | skipping: [testbed-node-1]
2026-02-08 05:56:37.775148 | orchestrator |
2026-02-08 05:56:37.775159 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] **************
2026-02-08 05:56:37.775170 | orchestrator | Sunday 08 February 2026 05:56:29 +0000 (0:00:00.138) 0:05:27.630 *******
2026-02-08 05:56:37.775180 | orchestrator | skipping: [testbed-node-1]
2026-02-08 05:56:37.775191 | orchestrator |
2026-02-08 05:56:37.775202 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] ****************
2026-02-08 05:56:37.775213 | orchestrator | Sunday 08 February 2026 05:56:29 +0000 (0:00:00.178) 0:05:27.809 *******
2026-02-08 05:56:37.775223 | orchestrator | skipping: [testbed-node-1]
2026-02-08 05:56:37.775234 | orchestrator |
2026-02-08 05:56:37.775245 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ********************
2026-02-08 05:56:37.775256 | orchestrator | Sunday 08 February 2026 05:56:29 +0000 (0:00:00.122) 0:05:27.932 *******
2026-02-08 05:56:37.775266 | orchestrator | skipping: [testbed-node-1]
2026-02-08 05:56:37.775277 | orchestrator |
2026-02-08 05:56:37.775288 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] ***************
2026-02-08 05:56:37.775298 | orchestrator | Sunday 08 February 2026 05:56:30 +0000 (0:00:00.120) 0:05:28.053 *******
2026-02-08 05:56:37.775313 | orchestrator | skipping: [testbed-node-1]
2026-02-08 05:56:37.775332 | orchestrator |
2026-02-08 05:56:37.775348 | orchestrator | TASK [ceph-common : Get ceph version] ******************************************
2026-02-08 05:56:37.775366 | orchestrator | Sunday 08 February 2026 05:56:30 +0000 (0:00:00.482) 0:05:28.535 *******
2026-02-08 05:56:37.775384 | orchestrator | skipping: [testbed-node-1]
2026-02-08 05:56:37.775401 | orchestrator |
2026-02-08 05:56:37.775420 | orchestrator | TASK [ceph-common : Set_fact ceph_version] *************************************
2026-02-08 05:56:37.775440 | orchestrator | Sunday 08 February 2026 05:56:30 +0000 (0:00:00.140) 0:05:28.676 *******
2026-02-08 05:56:37.775459 | orchestrator | skipping: [testbed-node-1]
2026-02-08 05:56:37.775477 | orchestrator |
2026-02-08 05:56:37.775496 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] ***
2026-02-08 05:56:37.775516 | orchestrator | Sunday 08 February 2026 05:56:30 +0000 (0:00:00.133) 0:05:28.809 *******
2026-02-08 05:56:37.775533 | orchestrator | skipping: [testbed-node-1]
2026-02-08 05:56:37.775578 | orchestrator |
2026-02-08 05:56:37.775599 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] *************************
2026-02-08 05:56:37.775642 | orchestrator | Sunday 08 February 2026 05:56:30 +0000 (0:00:00.151) 0:05:28.961 *******
2026-02-08 05:56:37.775663 | orchestrator | skipping: [testbed-node-1]
2026-02-08 05:56:37.775682 | orchestrator |
2026-02-08 05:56:37.775699 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************
2026-02-08 05:56:37.775717 | orchestrator | Sunday 08 February 2026 05:56:31 +0000 (0:00:00.154) 0:05:29.116 *******
2026-02-08 05:56:37.775737 | orchestrator | skipping: [testbed-node-1]
2026-02-08 05:56:37.775755 | orchestrator |
2026-02-08 05:56:37.775789 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ********************
2026-02-08 05:56:37.775808 | orchestrator | Sunday 08 February 2026 05:56:31 +0000 (0:00:00.152) 0:05:29.268 *******
2026-02-08 05:56:37.775826 | orchestrator | skipping: [testbed-node-1]
2026-02-08 05:56:37.775837 | orchestrator |
2026-02-08 05:56:37.775848 | orchestrator | TASK [ceph-common : Include selinux.yml] ***************************************
2026-02-08 05:56:37.775859 | orchestrator | Sunday 08 February 2026 05:56:31 +0000 (0:00:00.143) 0:05:29.411 *******
2026-02-08 05:56:37.775870 | orchestrator | skipping: [testbed-node-1]
2026-02-08 05:56:37.775881 | orchestrator |
2026-02-08 05:56:37.775891 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] ***************
2026-02-08 05:56:37.775902 | orchestrator | Sunday 08 February 2026 05:56:31 +0000 (0:00:00.215) 0:05:29.626 *******
2026-02-08 05:56:37.775913 | orchestrator | ok: [testbed-node-1]
2026-02-08 05:56:37.775924 | orchestrator |
2026-02-08 05:56:37.775934 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ******************************
2026-02-08 05:56:37.775945 | orchestrator | Sunday 08 February 2026 05:56:32 +0000 (0:00:00.917) 0:05:30.544 *******
2026-02-08 05:56:37.775956 | orchestrator | ok: [testbed-node-1]
2026-02-08 05:56:37.775966 | orchestrator |
2026-02-08 05:56:37.775977 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] ***********************
2026-02-08 05:56:37.775988 | orchestrator | Sunday 08 February 2026 05:56:33 +0000 (0:00:01.351) 0:05:31.896 *******
2026-02-08 05:56:37.775999 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-1
2026-02-08 05:56:37.776011 | orchestrator |
2026-02-08 05:56:37.776021 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************
2026-02-08 05:56:37.776032 | orchestrator | Sunday 08 February 2026 05:56:34 +0000 (0:00:00.227) 0:05:32.123 *******
2026-02-08 05:56:37.776043 | orchestrator | skipping: [testbed-node-1]
2026-02-08 05:56:37.776079 | orchestrator |
2026-02-08 05:56:37.776090 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] ****************
2026-02-08 05:56:37.776107 | orchestrator | Sunday 08 February 2026 05:56:34 +0000 (0:00:00.436) 0:05:32.560 *******
2026-02-08 05:56:37.776119 | orchestrator | skipping: [testbed-node-1]
2026-02-08 05:56:37.776129 | orchestrator |
2026-02-08 05:56:37.776140 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] **************************
2026-02-08 05:56:37.776151 | orchestrator | Sunday 08 February 2026 05:56:34 +0000 (0:00:00.159) 0:05:32.719 *******
2026-02-08 05:56:37.776162 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-02-08 05:56:37.776173 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-02-08 05:56:37.776184 | orchestrator |
2026-02-08 05:56:37.776195 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ********************
2026-02-08 05:56:37.776206 | orchestrator | Sunday 08 February 2026 05:56:35 +0000 (0:00:00.852) 0:05:33.572 *******
2026-02-08 05:56:37.776217 | orchestrator | ok: [testbed-node-1]
2026-02-08 05:56:37.776227 | orchestrator |
2026-02-08 05:56:37.776238 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************
2026-02-08 05:56:37.776249 | orchestrator | Sunday 08 February 2026 05:56:36 +0000 (0:00:00.554) 0:05:34.126 *******
2026-02-08 05:56:37.776260 | orchestrator | skipping: [testbed-node-1]
2026-02-08 05:56:37.776271 | orchestrator |
2026-02-08 05:56:37.776282 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ********************
2026-02-08 05:56:37.776293 | orchestrator | Sunday 08 February 2026 05:56:36 +0000 (0:00:00.167) 0:05:34.294 *******
2026-02-08 05:56:37.776304 | orchestrator | skipping: [testbed-node-1]
2026-02-08 05:56:37.776314 | orchestrator |
2026-02-08 05:56:37.776326 | orchestrator | TASK [ceph-container-common : Include registry.yml] ****************************
2026-02-08 05:56:37.776336 | orchestrator | Sunday 08 February 2026 05:56:36 +0000 (0:00:00.145) 0:05:34.439 *******
2026-02-08 05:56:37.776347 | orchestrator | skipping: [testbed-node-1]
2026-02-08 05:56:37.776358 | orchestrator |
2026-02-08 05:56:37.776369 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] *************************
2026-02-08 05:56:37.776387 | orchestrator | Sunday 08 February 2026 05:56:36 +0000 (0:00:00.129) 0:05:34.569 *******
2026-02-08 05:56:37.776397 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-1
2026-02-08 05:56:37.776408 | orchestrator |
2026-02-08 05:56:37.776419 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ********************
2026-02-08 05:56:37.776430 | orchestrator | Sunday 08 February 2026 05:56:36 +0000 (0:00:00.201) 0:05:34.771 *******
2026-02-08 05:56:37.776440 | orchestrator | ok: [testbed-node-1]
2026-02-08 05:56:37.776451 | orchestrator |
2026-02-08 05:56:37.776462 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] ***
2026-02-08 05:56:37.776473 | orchestrator | Sunday 08 February 2026 05:56:37 +0000 (0:00:00.735) 0:05:35.507 *******
2026-02-08 05:56:37.776484 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-02-08 05:56:37.776494 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/prometheus:v2.7.2)
2026-02-08 05:56:37.776505 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/grafana/grafana:6.7.4)
2026-02-08 05:56:37.776516 | orchestrator | skipping: [testbed-node-1]
2026-02-08 05:56:37.776527 | orchestrator |
2026-02-08 05:56:37.776538 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] ***********
2026-02-08 05:56:37.776599 | orchestrator | Sunday 08 February 2026 05:56:37 +0000 (0:00:00.169) 0:05:35.676 *******
2026-02-08 05:56:37.776611 | orchestrator | skipping: [testbed-node-1]
2026-02-08 05:56:37.776622 | orchestrator |
2026-02-08 05:56:37.776643 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] *********************
2026-02-08 05:56:51.319586 | orchestrator | Sunday 08 February 2026 05:56:37 +0000 (0:00:00.136) 0:05:35.812 *******
2026-02-08 05:56:51.319689 | orchestrator | skipping: [testbed-node-1]
2026-02-08 05:56:51.319705 | orchestrator |
2026-02-08 05:56:51.319716 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************
2026-02-08 05:56:51.319726 | orchestrator | Sunday 08 February 2026 05:56:37 +0000 (0:00:00.174) 0:05:35.987 *******
2026-02-08 05:56:51.319735 | orchestrator | skipping: [testbed-node-1]
2026-02-08 05:56:51.319744 | orchestrator |
2026-02-08 05:56:51.319753 | orchestrator | TASK [ceph-container-common : Load ceph dev image] *****************************
2026-02-08 05:56:51.319762 | orchestrator | Sunday 08 February 2026 05:56:38 +0000 (0:00:00.429) 0:05:36.416 *******
2026-02-08 05:56:51.319771 | orchestrator | skipping: [testbed-node-1]
2026-02-08 05:56:51.319780 | orchestrator |
2026-02-08 05:56:51.319788 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ******************
2026-02-08 05:56:51.319797 | orchestrator | Sunday 08 February 2026 05:56:38 +0000 (0:00:00.155) 0:05:36.572 *******
2026-02-08 05:56:51.319806 | orchestrator | skipping: [testbed-node-1]
2026-02-08 05:56:51.319815 | orchestrator |
2026-02-08 05:56:51.319823 | orchestrator | TASK [ceph-container-common : Get ceph version] ********************************
2026-02-08 05:56:51.319832 | orchestrator | Sunday 08 February 2026 05:56:38 +0000 (0:00:00.154) 0:05:36.726 *******
2026-02-08 05:56:51.319841 | orchestrator | ok: [testbed-node-1]
2026-02-08 05:56:51.319850 | orchestrator |
2026-02-08 05:56:51.319859 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] ***
2026-02-08 05:56:51.319869 | orchestrator | Sunday 08 February 2026 05:56:40 +0000 (0:00:01.612) 0:05:38.339 *******
2026-02-08 05:56:51.319877 | orchestrator | ok: [testbed-node-1]
2026-02-08 05:56:51.319886 | orchestrator |
2026-02-08 05:56:51.319894 | orchestrator | TASK [ceph-container-common : Include release.yml] *****************************
2026-02-08 05:56:51.319903 | orchestrator | Sunday 08 February 2026 05:56:40 +0000 (0:00:00.162) 0:05:38.502 *******
2026-02-08 05:56:51.319912 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-1
2026-02-08 05:56:51.319920 | orchestrator |
2026-02-08 05:56:51.319929 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] *********************
2026-02-08 05:56:51.319938 | orchestrator | Sunday 08 February 2026 05:56:40 +0000 (0:00:00.227) 0:05:38.729 *******
2026-02-08 05:56:51.319946 | orchestrator | skipping: [testbed-node-1]
2026-02-08 05:56:51.319973 | orchestrator |
2026-02-08 05:56:51.319982 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ********************
2026-02-08 05:56:51.319998 | orchestrator | Sunday 08 February 2026 05:56:40 +0000 (0:00:00.156) 0:05:38.885 *******
2026-02-08 05:56:51.320007 | orchestrator | skipping: [testbed-node-1]
2026-02-08 05:56:51.320015 | orchestrator |
2026-02-08 05:56:51.320024 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ******************
2026-02-08 05:56:51.320032 | orchestrator | Sunday 08 February 2026 05:56:40 +0000 (0:00:00.141) 0:05:39.027 *******
2026-02-08 05:56:51.320041 | orchestrator | skipping: [testbed-node-1]
2026-02-08 05:56:51.320049 | orchestrator |
2026-02-08 05:56:51.320057 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] *********************
2026-02-08 05:56:51.320066 | orchestrator | Sunday 08 February 2026 05:56:41 +0000 (0:00:00.162) 0:05:39.190 *******
2026-02-08 05:56:51.320074 | orchestrator | skipping: [testbed-node-1]
2026-02-08 05:56:51.320083 | orchestrator |
2026-02-08 05:56:51.320091 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ******************
2026-02-08 05:56:51.320101 | orchestrator | Sunday 08 February 2026 05:56:41 +0000 (0:00:00.174) 0:05:39.364 *******
2026-02-08 05:56:51.320111 | orchestrator | skipping: [testbed-node-1]
2026-02-08 05:56:51.320121 | orchestrator |
2026-02-08 05:56:51.320131 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] *******************
2026-02-08 05:56:51.320142 | orchestrator | Sunday 08 February 2026 05:56:41 +0000 (0:00:00.163) 0:05:39.528 *******
2026-02-08 05:56:51.320152 | orchestrator | skipping: [testbed-node-1]
2026-02-08 05:56:51.320221 | orchestrator |
2026-02-08 05:56:51.320231 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] *******************
2026-02-08 05:56:51.320241 | orchestrator | Sunday 08 February 2026 05:56:41 +0000 (0:00:00.155) 0:05:39.683 *******
2026-02-08 05:56:51.320251 | orchestrator | skipping: [testbed-node-1]
2026-02-08 05:56:51.320261 | orchestrator |
2026-02-08 05:56:51.320271 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ********************
2026-02-08 05:56:51.320281 | orchestrator | Sunday 08 February 2026 05:56:42 +0000 (0:00:00.416) 0:05:40.099 *******
2026-02-08 05:56:51.320292 | orchestrator | skipping: [testbed-node-1]
2026-02-08 05:56:51.320300 | orchestrator |
2026-02-08 05:56:51.320309 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] **********************
2026-02-08 05:56:51.320318 | orchestrator | Sunday 08 February 2026 05:56:42 +0000 (0:00:00.140) 0:05:40.240 *******
2026-02-08 05:56:51.320326 | orchestrator | ok: [testbed-node-1]
2026-02-08 05:56:51.320335 | orchestrator |
2026-02-08 05:56:51.320343 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] **********************
2026-02-08 05:56:51.320352 | orchestrator | Sunday 08 February 2026 05:56:42 +0000 (0:00:00.241) 0:05:40.482 *******
2026-02-08 05:56:51.320360 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-1
2026-02-08 05:56:51.320370 | orchestrator |
2026-02-08 05:56:51.320379 | orchestrator | TASK [ceph-config : Create ceph initial directories] ***************************
2026-02-08 05:56:51.320387 | orchestrator | Sunday 08 February 2026 05:56:42 +0000 (0:00:00.206) 0:05:40.688 *******
2026-02-08 05:56:51.320396 | orchestrator | ok: [testbed-node-1] => (item=/etc/ceph)
2026-02-08 05:56:51.320405 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/)
2026-02-08 05:56:51.320414 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/mon)
2026-02-08 05:56:51.320422 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/osd)
2026-02-08 05:56:51.320431 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/mds)
2026-02-08 05:56:51.320464 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/tmp)
2026-02-08 05:56:51.320473 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/crash)
2026-02-08 05:56:51.320482 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/radosgw)
2026-02-08 05:56:51.320506 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rgw)
2026-02-08 05:56:51.320515 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mgr)
2026-02-08 05:56:51.320532 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mds)
2026-02-08 05:56:51.320540 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-osd)
2026-02-08 05:56:51.320628 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd)
2026-02-08 05:56:51.320647 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-02-08 05:56:51.320657 | orchestrator | ok: [testbed-node-1] => (item=/var/run/ceph)
2026-02-08 05:56:51.320666 | orchestrator | ok: [testbed-node-1] => (item=/var/log/ceph)
2026-02-08 05:56:51.320674 | orchestrator |
2026-02-08 05:56:51.320683 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************
2026-02-08 05:56:51.320691 | orchestrator | Sunday 08 February 2026 05:56:48 +0000 (0:00:05.748) 0:05:46.437 *******
2026-02-08 05:56:51.320700 | orchestrator | skipping: [testbed-node-1]
2026-02-08 05:56:51.320709 | orchestrator |
2026-02-08 05:56:51.320717 | orchestrator | TASK [ceph-config : Reset num_osds] ********************************************
2026-02-08 05:56:51.320726 | orchestrator | Sunday 08 February 2026 05:56:48 +0000 (0:00:00.149) 0:05:46.587 *******
2026-02-08 05:56:51.320734 | orchestrator | skipping: [testbed-node-1]
2026-02-08 05:56:51.320743 | orchestrator |
2026-02-08 05:56:51.320752 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] *********************
2026-02-08 05:56:51.320763 | orchestrator | Sunday 08 February 2026 05:56:48 +0000 (0:00:00.133) 0:05:46.720 *******
2026-02-08 05:56:51.320773 | orchestrator | skipping: [testbed-node-1]
2026-02-08 05:56:51.320784 | orchestrator |
2026-02-08 05:56:51.320796 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ******************
2026-02-08 05:56:51.320814 | orchestrator | Sunday 08 February 2026 05:56:48 +0000 (0:00:00.149) 0:05:46.869 *******
2026-02-08 05:56:51.320832 | orchestrator | skipping: [testbed-node-1]
2026-02-08 05:56:51.320849 | orchestrator |
2026-02-08 05:56:51.320866 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] *********************************
2026-02-08 05:56:51.320882 | orchestrator | Sunday 08 February 2026 05:56:48 +0000 (0:00:00.136) 0:05:47.005 *******
2026-02-08 05:56:51.320900 | orchestrator | skipping: [testbed-node-1]
2026-02-08 05:56:51.320917 | orchestrator |
2026-02-08 05:56:51.320934 | orchestrator | TASK [ceph-config : Set_fact _devices] *****************************************
2026-02-08 05:56:51.320951 | orchestrator | Sunday 08 February 2026 05:56:49 +0000 (0:00:00.165) 0:05:47.171 *******
2026-02-08 05:56:51.320978 | orchestrator | skipping: [testbed-node-1]
2026-02-08 05:56:51.320998 | orchestrator |
2026-02-08 05:56:51.321016 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] ***
2026-02-08 05:56:51.321035 | orchestrator | Sunday 08 February 2026 05:56:49 +0000 (0:00:00.126) 0:05:47.297 *******
2026-02-08 05:56:51.321054 | orchestrator | skipping: [testbed-node-1]
2026-02-08 05:56:51.321073 | orchestrator |
2026-02-08 05:56:51.321092 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] ***
2026-02-08 05:56:51.321111 | orchestrator | Sunday 08 February 2026 05:56:49 +0000 (0:00:00.133) 0:05:47.430 *******
2026-02-08 05:56:51.321129 | orchestrator | skipping: [testbed-node-1]
2026-02-08 05:56:51.321146 | orchestrator |
2026-02-08 05:56:51.321163 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] ***
2026-02-08 05:56:51.321182 | orchestrator | Sunday 08 February 2026 05:56:49 +0000 (0:00:00.429) 0:05:47.860 *******
2026-02-08 05:56:51.321202 | orchestrator | skipping: [testbed-node-1]
2026-02-08 05:56:51.321220 | orchestrator |
2026-02-08 05:56:51.321238 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] ***
2026-02-08 05:56:51.321256 | orchestrator | Sunday 08 February 2026 05:56:49 +0000 (0:00:00.135) 0:05:47.995 *******
2026-02-08 05:56:51.321275 | orchestrator | skipping: [testbed-node-1]
2026-02-08 05:56:51.321293 | orchestrator |
2026-02-08 05:56:51.321311 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] *********************
2026-02-08 05:56:51.321331 | orchestrator | Sunday 08 February 2026 05:56:50 +0000 (0:00:00.139) 0:05:48.135 *******
2026-02-08 05:56:51.321363 | orchestrator | skipping: [testbed-node-1]
2026-02-08 05:56:51.321381 | orchestrator |
2026-02-08 05:56:51.321400 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] *******************************
2026-02-08 05:56:51.321419 | orchestrator | Sunday 08 February 2026 05:56:50 +0000 (0:00:00.166) 0:05:48.301 *******
2026-02-08 05:56:51.321437 | orchestrator | skipping: [testbed-node-1]
2026-02-08 05:56:51.321456 | orchestrator |
2026-02-08 05:56:51.321474 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] **************
2026-02-08 05:56:51.321493 | orchestrator | Sunday 08 February 2026 05:56:50 +0000 (0:00:00.148) 0:05:48.450 *******
2026-02-08 05:56:51.321511 | orchestrator | skipping: [testbed-node-1]
2026-02-08 05:56:51.321529 | orchestrator |
2026-02-08 05:56:51.321540 | orchestrator | TASK [ceph-config : Render rgw configs] ****************************************
2026-02-08 05:56:51.321585 | orchestrator | Sunday 08 February 2026 05:56:50 +0000 (0:00:00.267) 0:05:48.718 *******
2026-02-08 05:56:51.321597 | orchestrator | skipping: [testbed-node-1]
2026-02-08 05:56:51.321609 | orchestrator |
2026-02-08 05:56:51.321619 | orchestrator | TASK [ceph-config : Set config to cluster] *************************************
2026-02-08 05:56:51.321630 | orchestrator | Sunday 08 February 2026 05:56:50 +0000 (0:00:00.137) 0:05:48.855 *******
2026-02-08 05:56:51.321641 | orchestrator | skipping: [testbed-node-1]
2026-02-08 05:56:51.321651 | orchestrator |
2026-02-08 05:56:51.321662 | orchestrator | TASK [ceph-config : Set rgw configs to file] ***********************************
2026-02-08 05:56:51.321673 | orchestrator | Sunday 08 February 2026 05:56:51 +0000 (0:00:00.229) 0:05:49.084 *******
2026-02-08 05:56:51.321683 | orchestrator | skipping: [testbed-node-1]
2026-02-08 05:56:51.321694 | orchestrator |
2026-02-08 05:56:51.321705 | orchestrator | TASK [ceph-config : Create ceph conf directory] ********************************
2026-02-08 05:56:51.321759 | orchestrator | Sunday 08 February 2026 05:56:51 +0000 (0:00:00.138) 0:05:49.222 *******
2026-02-08 05:56:51.321773 | orchestrator | skipping: [testbed-node-1]
2026-02-08 05:56:51.321784 | orchestrator |
2026-02-08 05:56:51.321809 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-02-08 05:57:13.428706 | orchestrator | Sunday 08 February 2026 05:56:51 +0000 (0:00:00.135) 0:05:49.358 *******
2026-02-08 05:57:13.428909 | orchestrator | skipping: [testbed-node-1]
2026-02-08 05:57:13.429774 | orchestrator |
2026-02-08 05:57:13.429830 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-02-08 05:57:13.429853 | orchestrator | Sunday 08 February 2026 05:56:51 +0000 (0:00:00.154) 0:05:49.512 *******
2026-02-08 05:57:13.429874 | orchestrator | skipping: [testbed-node-1]
2026-02-08 05:57:13.429894 | orchestrator |
2026-02-08 05:57:13.429913 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-02-08 05:57:13.429951 | orchestrator | Sunday 08 February 2026 05:56:51 +0000 (0:00:00.147) 0:05:49.660 *******
2026-02-08 05:57:13.429985 | orchestrator | skipping: [testbed-node-1]
2026-02-08 05:57:13.430005 | orchestrator |
2026-02-08 05:57:13.430090 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-02-08 05:57:13.430111 | orchestrator | Sunday 08 February 2026 05:56:51 +0000 (0:00:00.175) 0:05:49.835 *******
2026-02-08 05:57:13.430132 | orchestrator | skipping: [testbed-node-1]
2026-02-08 05:57:13.430151 | orchestrator |
2026-02-08 05:57:13.430170 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-02-08 05:57:13.430190 | orchestrator | Sunday 08 February 2026 05:56:52 +0000 (0:00:00.416) 0:05:50.252 *******
2026-02-08 05:57:13.430210 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)
2026-02-08 05:57:13.430229 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)
2026-02-08 05:57:13.430248 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)
2026-02-08 05:57:13.430268 | orchestrator | skipping: [testbed-node-1]
2026-02-08 05:57:13.430287 | orchestrator |
2026-02-08 05:57:13.430306 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-02-08 05:57:13.430326 | orchestrator | Sunday 08 February 2026 05:56:52 +0000 (0:00:00.438) 0:05:50.690 *******
2026-02-08 05:57:13.430413 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)
2026-02-08 05:57:13.430433 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)
2026-02-08 05:57:13.430453 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)
2026-02-08 05:57:13.430472 | orchestrator | skipping: [testbed-node-1]
2026-02-08 05:57:13.430491 | orchestrator |
2026-02-08 05:57:13.430511 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-02-08 05:57:13.430597 | orchestrator | Sunday 08 February 2026 05:56:53 +0000 (0:00:00.420) 0:05:51.111 *******
2026-02-08 05:57:13.430620 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)
2026-02-08 05:57:13.430639 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)
2026-02-08 05:57:13.430658 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)
2026-02-08 05:57:13.430678 | orchestrator | skipping: [testbed-node-1]
2026-02-08 05:57:13.430697 | orchestrator |
2026-02-08 05:57:13.430716 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-02-08 05:57:13.430735 | orchestrator | Sunday 08 February 2026 05:56:53 +0000 (0:00:00.443) 0:05:51.554 *******
2026-02-08 05:57:13.430755 | orchestrator | skipping: [testbed-node-1]
2026-02-08 05:57:13.430775 | orchestrator |
2026-02-08 05:57:13.430813 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-02-08 05:57:13.430833 | orchestrator | Sunday 08 February 2026 05:56:53 +0000 (0:00:00.152) 0:05:51.707 *******
2026-02-08 05:57:13.430853 | orchestrator | skipping: [testbed-node-1] => (item=0)
2026-02-08 05:57:13.430873 | orchestrator | skipping: [testbed-node-1]
2026-02-08 05:57:13.430892 | orchestrator |
2026-02-08 05:57:13.430912 | orchestrator | TASK [ceph-config : Generate Ceph file] ****************************************
2026-02-08 05:57:13.430932 | orchestrator | Sunday 08 February 2026 05:56:54 +0000 (0:00:00.377) 0:05:52.084 *******
2026-02-08 05:57:13.430951 | orchestrator | changed: [testbed-node-1]
2026-02-08 05:57:13.430970 | orchestrator |
2026-02-08 05:57:13.430987 | orchestrator | TASK [ceph-mon : Set_fact container_exec_cmd] **********************************
2026-02-08 05:57:13.431007 | orchestrator | Sunday 08 February 2026 05:56:54 +0000 (0:00:00.866) 0:05:52.951 *******
2026-02-08 05:57:13.431026 | orchestrator | ok: [testbed-node-1]
2026-02-08 05:57:13.431045 | orchestrator |
2026-02-08 05:57:13.431063 | orchestrator | TASK [ceph-mon : Include deploy_monitors.yml] **********************************
2026-02-08 05:57:13.431082 | orchestrator | Sunday 08 February 2026 05:56:55 +0000 (0:00:00.161) 0:05:53.113 *******
2026-02-08 05:57:13.431100 | orchestrator | included: /ansible/roles/ceph-mon/tasks/deploy_monitors.yml for testbed-node-1
2026-02-08 05:57:13.431118 | orchestrator |
2026-02-08 05:57:13.431136 | orchestrator | TASK [ceph-mon : Check if monitor initial keyring already exists] **************
2026-02-08 05:57:13.431152 | orchestrator | Sunday 08 February 2026 05:56:55 +0000 (0:00:00.263) 0:05:53.377 *******
2026-02-08 05:57:13.431169 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)]
2026-02-08 05:57:13.431188 | orchestrator |
2026-02-08 05:57:13.431207 | orchestrator | TASK [ceph-mon : Generate monitor initial keyring] *****************************
2026-02-08 05:57:13.431225 | orchestrator | Sunday 08 February 2026 05:56:57 +0000 (0:00:02.539) 0:05:55.916 *******
2026-02-08 05:57:13.431241 | orchestrator | skipping: [testbed-node-1]
2026-02-08 05:57:13.431259 | orchestrator |
2026-02-08 05:57:13.431278 | orchestrator | TASK [ceph-mon : Set_fact _initial_mon_key_success] ****************************
2026-02-08 05:57:13.431296 | orchestrator | Sunday 08 February 2026 05:56:58 +0000 (0:00:00.487) 0:05:56.404 *******
2026-02-08 05:57:13.431313 | orchestrator | ok: [testbed-node-1]
2026-02-08 05:57:13.431331 | orchestrator |
2026-02-08 05:57:13.431348 | orchestrator | TASK [ceph-mon : Get initial keyring when it already exists] *******************
2026-02-08 05:57:13.431383 | orchestrator | Sunday 08 February 2026 05:56:58 +0000 (0:00:00.165) 0:05:56.569 *******
2026-02-08 05:57:13.431402 | orchestrator | ok: [testbed-node-1]
2026-02-08 05:57:13.431418 | orchestrator |
2026-02-08 05:57:13.431434 | orchestrator | TASK [ceph-mon : Create monitor initial keyring] *******************************
2026-02-08 05:57:13.431467 | orchestrator | Sunday 08 February 2026 05:56:58 +0000 (0:00:00.176) 0:05:56.746 *******
2026-02-08 05:57:13.431485 | orchestrator | changed: [testbed-node-1]
2026-02-08 05:57:13.431503 | orchestrator |
2026-02-08 05:57:13.431551 | orchestrator | TASK [ceph-mon : Copy the initial key in /etc/ceph (for containers)] ***********
2026-02-08 05:57:13.431611 | orchestrator | Sunday 08 February 2026 05:56:59 +0000 (0:00:01.013) 0:05:57.759 *******
2026-02-08 05:57:13.431631 | orchestrator | ok: [testbed-node-1]
2026-02-08 05:57:13.431648 | orchestrator |
2026-02-08 05:57:13.431665 | orchestrator | TASK [ceph-mon : Create monitor directory] *************************************
2026-02-08 05:57:13.431682 | orchestrator | Sunday 08 February 2026 05:57:00 +0000 (0:00:00.632) 0:05:58.392 *******
2026-02-08 05:57:13.431699 | orchestrator | ok: [testbed-node-1]
2026-02-08 05:57:13.431717 | orchestrator |
2026-02-08 05:57:13.431736 | orchestrator | TASK [ceph-mon : Recursively fix ownership of monitor directory] ***************
2026-02-08 05:57:13.431755 | orchestrator | Sunday 08 February 2026 05:57:00 +0000 (0:00:00.486) 0:05:58.878 *******
2026-02-08 05:57:13.431774 | orchestrator | ok: [testbed-node-1]
2026-02-08 05:57:13.431792 | orchestrator |
2026-02-08 05:57:13.431811 | orchestrator | TASK [ceph-mon : Create admin keyring] *****************************************
2026-02-08 05:57:13.431830 | orchestrator | Sunday 08 February 2026 05:57:01 +0000 (0:00:00.545) 0:05:59.423 *******
2026-02-08 05:57:13.431848 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)]
2026-02-08 05:57:13.431886 | orchestrator |
2026-02-08 05:57:13.431906 | orchestrator | TASK [ceph-mon : Slurp admin keyring] ******************************************
2026-02-08 05:57:13.431925 | orchestrator | Sunday 08 February 2026 05:57:01 +0000 (0:00:00.572) 0:05:59.996 *******
2026-02-08 05:57:13.431942 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)]
2026-02-08 05:57:13.431961 | orchestrator |
2026-02-08 05:57:13.431978 | orchestrator | TASK [ceph-mon : Copy admin keyring over to mons] ******************************
2026-02-08 05:57:13.431997 | orchestrator | Sunday 08 February 2026 05:57:02 +0000 (0:00:00.584) 0:06:00.581 *******
2026-02-08 05:57:13.432016 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-02-08 05:57:13.432054 | orchestrator | ok: [testbed-node-1] => (item=None)
2026-02-08 05:57:13.432073 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=None)
2026-02-08 05:57:13.432091 | orchestrator | ok: [testbed-node-1 -> {{ item }}]
2026-02-08 05:57:13.432108 | orchestrator |
2026-02-08 05:57:13.432141 | orchestrator | TASK [ceph-mon : Import admin keyring into mon keyring] ************************
2026-02-08 05:57:13.432161 | orchestrator | Sunday 08 February 2026 05:57:05 +0000 (0:00:02.993) 0:06:03.574 *******
2026-02-08 05:57:13.432181 | orchestrator | changed: [testbed-node-1]
2026-02-08 05:57:13.432200 | orchestrator |
2026-02-08 05:57:13.432232 | orchestrator | TASK [ceph-mon : Set_fact ceph-mon container command] **************************
2026-02-08 05:57:13.432252 | orchestrator | Sunday 08 February 2026 05:57:06 +0000 (0:00:01.032) 0:06:04.607 *******
2026-02-08 05:57:13.432271 | orchestrator | ok: [testbed-node-1]
2026-02-08 05:57:13.432290 | orchestrator |
2026-02-08 05:57:13.432309 | orchestrator | TASK [ceph-mon : Set_fact monmaptool container command] ************************
2026-02-08 05:57:13.432346 | orchestrator | Sunday 08 February 2026 05:57:06 +0000 (0:00:00.147) 0:06:04.755 *******
2026-02-08 05:57:13.432362 | orchestrator | ok: [testbed-node-1]
2026-02-08 05:57:13.432380 | orchestrator |
2026-02-08 05:57:13.432397 | orchestrator | TASK [ceph-mon : Generate initial monmap] **************************************
2026-02-08 05:57:13.432417 | orchestrator | Sunday 08 February 2026 05:57:06 +0000 (0:00:00.132) 0:06:04.887 *******
2026-02-08 05:57:13.432436 | orchestrator | ok: [testbed-node-1]
2026-02-08 05:57:13.432455 | orchestrator |
2026-02-08 05:57:13.432474 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs with keyring] *******************************
2026-02-08 05:57:13.432492 | orchestrator | Sunday 08 February 2026 05:57:08 +0000 (0:00:01.467) 0:06:06.354 *******
2026-02-08 05:57:13.432510 | orchestrator | ok: [testbed-node-1]
2026-02-08 05:57:13.432529 | orchestrator |
2026-02-08 05:57:13.432545 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs without keyring] ****************************
2026-02-08 05:57:13.432597 | orchestrator | Sunday 08 February 2026 05:57:08 +0000 (0:00:00.475) 0:06:06.830 *******
2026-02-08 05:57:13.432608 | orchestrator | skipping: [testbed-node-1]
2026-02-08 05:57:13.432618 | orchestrator |
2026-02-08 05:57:13.432627 | orchestrator | TASK [ceph-mon : Include start_monitor.yml] ************************************
2026-02-08 05:57:13.432637 | orchestrator | Sunday 08 February 2026 05:57:08 +0000 (0:00:00.149) 0:06:06.979 *******
2026-02-08 05:57:13.432647 | orchestrator | included: /ansible/roles/ceph-mon/tasks/start_monitor.yml for testbed-node-1
2026-02-08 05:57:13.432657 | orchestrator |
2026-02-08 05:57:13.432667 | orchestrator | TASK [ceph-mon : Ensure systemd service override directory exists] *************
2026-02-08 05:57:13.432676 | orchestrator | Sunday 08 February 2026 05:57:09 +0000 (0:00:00.217) 0:06:07.196 *******
2026-02-08 05:57:13.432686 | orchestrator | skipping: [testbed-node-1]
2026-02-08 05:57:13.432695 | orchestrator |
2026-02-08 05:57:13.432705 | orchestrator | TASK [ceph-mon : Add ceph-mon systemd service overrides] ***********************
2026-02-08 05:57:13.432714 | orchestrator | Sunday 08 February 2026 05:57:09 +0000 (0:00:00.122) 0:06:07.319 *******
2026-02-08 05:57:13.432724 | orchestrator | skipping: [testbed-node-1]
2026-02-08 05:57:13.432737 | orchestrator |
2026-02-08 05:57:13.432753 | orchestrator | TASK [ceph-mon : Include_tasks systemd.yml] ************************************
2026-02-08 05:57:13.432770 | orchestrator | Sunday 08 February 2026 05:57:09 +0000 (0:00:00.144) 0:06:07.463 *******
2026-02-08 05:57:13.432786 | orchestrator | included: /ansible/roles/ceph-mon/tasks/systemd.yml for testbed-node-1
2026-02-08 05:57:13.432801 | orchestrator |
2026-02-08 05:57:13.432817 | orchestrator | TASK [ceph-mon : Generate systemd unit file for mon container] *****************
2026-02-08 05:57:13.432832 | orchestrator | Sunday 08 February 2026 05:57:09 +0000 (0:00:00.223) 0:06:07.687 *******
2026-02-08 05:57:13.432848 | orchestrator | changed: [testbed-node-1]
2026-02-08 05:57:13.432862 | orchestrator |
2026-02-08 05:57:13.432878 | orchestrator | TASK [ceph-mon : Generate systemd ceph-mon target file] ************************
2026-02-08 05:57:13.432893 | orchestrator | Sunday 08 February 2026 05:57:10 +0000 (0:00:01.335) 0:06:09.023 *******
2026-02-08 05:57:13.432907 | orchestrator | ok: [testbed-node-1]
2026-02-08 05:57:13.432923 | orchestrator |
2026-02-08 05:57:13.432936 | orchestrator | TASK [ceph-mon : Enable ceph-mon.target] ***************************************
2026-02-08 05:57:13.432952 | orchestrator | Sunday 08 February 2026 05:57:11 +0000 (0:00:00.934) 0:06:09.957 *******
2026-02-08 05:57:13.432969 | orchestrator | ok: [testbed-node-1]
2026-02-08 05:57:13.432985 | orchestrator |
2026-02-08 05:57:13.433018 | orchestrator | TASK [ceph-mon : Start the monitor service] ************************************
2026-02-08 05:57:57.878566 | orchestrator | Sunday 08 February 2026 05:57:13 +0000 (0:00:01.506) 0:06:11.464 *******
2026-02-08 05:57:57.878726 | orchestrator | changed: [testbed-node-1]
2026-02-08 05:57:57.879457 | orchestrator |
2026-02-08 05:57:57.879484 | orchestrator | TASK [ceph-mon : Include_tasks ceph_keys.yml] **********************************
2026-02-08 05:57:57.879496 | orchestrator | Sunday 08 February 2026 05:57:15 +0000 (0:00:02.230) 0:06:13.694 *******
2026-02-08 05:57:57.879503 | orchestrator | included: /ansible/roles/ceph-mon/tasks/ceph_keys.yml for testbed-node-1
2026-02-08 05:57:57.879512 | orchestrator |
2026-02-08 05:57:57.879518 | orchestrator | TASK [ceph-mon : Waiting for the monitor(s) to form the quorum...] *************
2026-02-08 05:57:57.879525 | orchestrator | Sunday 08 February 2026 05:57:16 +0000 (0:00:00.515) 0:06:14.210 *******
2026-02-08 05:57:57.879532 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Waiting for the monitor(s) to form the quorum... (10 retries left).
2026-02-08 05:57:57.879539 | orchestrator | ok: [testbed-node-1]
2026-02-08 05:57:57.879546 | orchestrator |
2026-02-08 05:57:57.879552 | orchestrator | TASK [ceph-mon : Fetch ceph initial keys] **************************************
2026-02-08 05:57:57.879560 | orchestrator | Sunday 08 February 2026 05:57:38 +0000 (0:00:21.957) 0:06:36.167 *******
2026-02-08 05:57:57.879566 | orchestrator | ok: [testbed-node-1]
2026-02-08 05:57:57.879573 | orchestrator |
2026-02-08 05:57:57.879600 | orchestrator | TASK [ceph-mon : Include secure_cluster.yml] ***********************************
2026-02-08 05:57:57.879631 | orchestrator | Sunday 08 February 2026 05:57:40 +0000 (0:00:01.976) 0:06:38.143 *******
2026-02-08 05:57:57.879638 | orchestrator | skipping: [testbed-node-1]
2026-02-08 05:57:57.879645 | orchestrator |
2026-02-08 05:57:57.879652 | orchestrator | TASK [ceph-mon : Set cluster configs] ******************************************
2026-02-08 05:57:57.879658 | orchestrator | Sunday 08 February 2026 05:57:40 +0000 (0:00:00.139) 0:06:38.283 *******
2026-02-08 05:57:57.879680 | orchestrator | ok: [testbed-node-1] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__54010be108205bd9450ab34ee8857f1f35bfaf4e'}}, {'key': 'public_network', 'value': '192.168.16.0/20'}])
2026-02-08 05:57:57.879693 | orchestrator | ok: [testbed-node-1] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__54010be108205bd9450ab34ee8857f1f35bfaf4e'}}, {'key': 'cluster_network', 'value': '192.168.16.0/20'}])
2026-02-08 05:57:57.879700 | orchestrator | ok: [testbed-node-1] => (item=[{'key': 'global', 'value': 
{'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__54010be108205bd9450ab34ee8857f1f35bfaf4e'}}, {'key': 'osd_pool_default_crush_rule', 'value': -1}]) 2026-02-08 05:57:57.879707 | orchestrator | ok: [testbed-node-1] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__54010be108205bd9450ab34ee8857f1f35bfaf4e'}}, {'key': 'ms_bind_ipv6', 'value': 'False'}]) 2026-02-08 05:57:57.879715 | orchestrator | ok: [testbed-node-1] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__54010be108205bd9450ab34ee8857f1f35bfaf4e'}}, {'key': 'ms_bind_ipv4', 'value': 'True'}]) 2026-02-08 05:57:57.879722 | orchestrator | skipping: [testbed-node-1] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__54010be108205bd9450ab34ee8857f1f35bfaf4e'}}, {'key': 'osd_crush_chooseleaf_type', 'value': '__omit_place_holder__54010be108205bd9450ab34ee8857f1f35bfaf4e'}])  2026-02-08 05:57:57.879730 | orchestrator | 2026-02-08 05:57:57.879737 | orchestrator | TASK [Start ceph mgr] ********************************************************** 2026-02-08 05:57:57.879744 | orchestrator | Sunday 08 February 2026 05:57:48 +0000 (0:00:08.675) 0:06:46.959 ******* 2026-02-08 05:57:57.879750 | orchestrator | changed: [testbed-node-1] 2026-02-08 05:57:57.879757 | orchestrator | 
2026-02-08 05:57:57.879763 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-02-08 05:57:57.879769 | orchestrator | Sunday 08 February 2026 05:57:50 +0000 (0:00:01.474) 0:06:48.433 ******* 2026-02-08 05:57:57.879775 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-08 05:57:57.879800 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-1) 2026-02-08 05:57:57.879807 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-2) 2026-02-08 05:57:57.879814 | orchestrator | 2026-02-08 05:57:57.879820 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-02-08 05:57:57.879827 | orchestrator | Sunday 08 February 2026 05:57:51 +0000 (0:00:01.241) 0:06:49.675 ******* 2026-02-08 05:57:57.879842 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2026-02-08 05:57:57.879848 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2026-02-08 05:57:57.879855 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2026-02-08 05:57:57.879861 | orchestrator | skipping: [testbed-node-1] 2026-02-08 05:57:57.879868 | orchestrator | 2026-02-08 05:57:57.879874 | orchestrator | TASK [Non container | waiting for the monitor to join the quorum...] *********** 2026-02-08 05:57:57.879880 | orchestrator | Sunday 08 February 2026 05:57:52 +0000 (0:00:00.480) 0:06:50.156 ******* 2026-02-08 05:57:57.879886 | orchestrator | skipping: [testbed-node-1] 2026-02-08 05:57:57.879891 | orchestrator | 2026-02-08 05:57:57.879897 | orchestrator | TASK [Container | waiting for the containerized monitor to join the quorum...] 
*** 2026-02-08 05:57:57.879903 | orchestrator | Sunday 08 February 2026 05:57:52 +0000 (0:00:00.132) 0:06:50.288 ******* 2026-02-08 05:57:57.879909 | orchestrator | ok: [testbed-node-1] 2026-02-08 05:57:57.879915 | orchestrator | 2026-02-08 05:57:57.879922 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-02-08 05:57:57.879928 | orchestrator | Sunday 08 February 2026 05:57:53 +0000 (0:00:01.325) 0:06:51.614 ******* 2026-02-08 05:57:57.879934 | orchestrator | skipping: [testbed-node-1] 2026-02-08 05:57:57.879940 | orchestrator | 2026-02-08 05:57:57.879946 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] ********************************** 2026-02-08 05:57:57.879952 | orchestrator | Sunday 08 February 2026 05:57:53 +0000 (0:00:00.428) 0:06:52.043 ******* 2026-02-08 05:57:57.879958 | orchestrator | skipping: [testbed-node-1] 2026-02-08 05:57:57.879964 | orchestrator | 2026-02-08 05:57:57.879970 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] ********************************** 2026-02-08 05:57:57.879976 | orchestrator | Sunday 08 February 2026 05:57:54 +0000 (0:00:00.142) 0:06:52.185 ******* 2026-02-08 05:57:57.879982 | orchestrator | skipping: [testbed-node-1] 2026-02-08 05:57:57.879988 | orchestrator | 2026-02-08 05:57:57.879993 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] ********************************** 2026-02-08 05:57:57.880006 | orchestrator | Sunday 08 February 2026 05:57:54 +0000 (0:00:00.131) 0:06:52.317 ******* 2026-02-08 05:57:57.880012 | orchestrator | skipping: [testbed-node-1] 2026-02-08 05:57:57.880018 | orchestrator | 2026-02-08 05:57:57.880024 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] ********************************** 2026-02-08 05:57:57.880031 | orchestrator | Sunday 08 February 2026 05:57:54 +0000 (0:00:00.150) 0:06:52.467 ******* 2026-02-08 05:57:57.880037 | orchestrator | skipping: [testbed-node-1] 2026-02-08 05:57:57.880043 | 
orchestrator | 2026-02-08 05:57:57.880050 | orchestrator | RUNNING HANDLER [ceph-handler : Rbdmirrors handler] **************************** 2026-02-08 05:57:57.880056 | orchestrator | Sunday 08 February 2026 05:57:54 +0000 (0:00:00.128) 0:06:52.595 ******* 2026-02-08 05:57:57.880063 | orchestrator | skipping: [testbed-node-1] 2026-02-08 05:57:57.880069 | orchestrator | 2026-02-08 05:57:57.880075 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] ********************************** 2026-02-08 05:57:57.880081 | orchestrator | Sunday 08 February 2026 05:57:54 +0000 (0:00:00.140) 0:06:52.736 ******* 2026-02-08 05:57:57.880087 | orchestrator | skipping: [testbed-node-1] 2026-02-08 05:57:57.880093 | orchestrator | 2026-02-08 05:57:57.880099 | orchestrator | PLAY [Upgrade ceph mon cluster] ************************************************ 2026-02-08 05:57:57.880105 | orchestrator | 2026-02-08 05:57:57.880112 | orchestrator | TASK [Remove ceph aliases] ***************************************************** 2026-02-08 05:57:57.880118 | orchestrator | Sunday 08 February 2026 05:57:55 +0000 (0:00:00.605) 0:06:53.341 ******* 2026-02-08 05:57:57.880125 | orchestrator | ok: [testbed-node-2] 2026-02-08 05:57:57.880131 | orchestrator | 2026-02-08 05:57:57.880137 | orchestrator | TASK [Set mon_host_count] ****************************************************** 2026-02-08 05:57:57.880143 | orchestrator | Sunday 08 February 2026 05:57:55 +0000 (0:00:00.475) 0:06:53.816 ******* 2026-02-08 05:57:57.880150 | orchestrator | ok: [testbed-node-2] 2026-02-08 05:57:57.880156 | orchestrator | 2026-02-08 05:57:57.880163 | orchestrator | TASK [Fail when less than three monitors] ************************************** 2026-02-08 05:57:57.880177 | orchestrator | Sunday 08 February 2026 05:57:55 +0000 (0:00:00.146) 0:06:53.963 ******* 2026-02-08 05:57:57.880184 | orchestrator | skipping: [testbed-node-2] 2026-02-08 05:57:57.880190 | orchestrator | 2026-02-08 05:57:57.880197 | orchestrator 
| TASK [Select a running monitor] ************************************************ 2026-02-08 05:57:57.880203 | orchestrator | Sunday 08 February 2026 05:57:56 +0000 (0:00:00.134) 0:06:54.098 ******* 2026-02-08 05:57:57.880210 | orchestrator | ok: [testbed-node-2] 2026-02-08 05:57:57.880216 | orchestrator | 2026-02-08 05:57:57.880223 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-02-08 05:57:57.880229 | orchestrator | Sunday 08 February 2026 05:57:56 +0000 (0:00:00.151) 0:06:54.250 ******* 2026-02-08 05:57:57.880236 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-2 2026-02-08 05:57:57.880242 | orchestrator | 2026-02-08 05:57:57.880248 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-02-08 05:57:57.880254 | orchestrator | Sunday 08 February 2026 05:57:56 +0000 (0:00:00.537) 0:06:54.788 ******* 2026-02-08 05:57:57.880261 | orchestrator | ok: [testbed-node-2] 2026-02-08 05:57:57.880267 | orchestrator | 2026-02-08 05:57:57.880273 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-02-08 05:57:57.880280 | orchestrator | Sunday 08 February 2026 05:57:57 +0000 (0:00:00.482) 0:06:55.270 ******* 2026-02-08 05:57:57.880286 | orchestrator | ok: [testbed-node-2] 2026-02-08 05:57:57.880292 | orchestrator | 2026-02-08 05:57:57.880298 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-02-08 05:57:57.880304 | orchestrator | Sunday 08 February 2026 05:57:57 +0000 (0:00:00.174) 0:06:55.445 ******* 2026-02-08 05:57:57.880310 | orchestrator | ok: [testbed-node-2] 2026-02-08 05:57:57.880316 | orchestrator | 2026-02-08 05:57:57.880322 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-02-08 05:57:57.880338 | orchestrator | Sunday 08 February 2026 05:57:57 +0000 (0:00:00.464) 0:06:55.909 
******* 2026-02-08 05:58:06.691490 | orchestrator | ok: [testbed-node-2] 2026-02-08 05:58:06.691666 | orchestrator | 2026-02-08 05:58:06.691690 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-02-08 05:58:06.691703 | orchestrator | Sunday 08 February 2026 05:57:58 +0000 (0:00:00.170) 0:06:56.079 ******* 2026-02-08 05:58:06.691715 | orchestrator | ok: [testbed-node-2] 2026-02-08 05:58:06.691726 | orchestrator | 2026-02-08 05:58:06.691738 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-02-08 05:58:06.691749 | orchestrator | Sunday 08 February 2026 05:57:58 +0000 (0:00:00.153) 0:06:56.233 ******* 2026-02-08 05:58:06.691761 | orchestrator | ok: [testbed-node-2] 2026-02-08 05:58:06.691772 | orchestrator | 2026-02-08 05:58:06.691783 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-02-08 05:58:06.691795 | orchestrator | Sunday 08 February 2026 05:57:58 +0000 (0:00:00.166) 0:06:56.400 ******* 2026-02-08 05:58:06.691806 | orchestrator | skipping: [testbed-node-2] 2026-02-08 05:58:06.691819 | orchestrator | 2026-02-08 05:58:06.691830 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2026-02-08 05:58:06.691841 | orchestrator | Sunday 08 February 2026 05:57:58 +0000 (0:00:00.152) 0:06:56.552 ******* 2026-02-08 05:58:06.691852 | orchestrator | ok: [testbed-node-2] 2026-02-08 05:58:06.691863 | orchestrator | 2026-02-08 05:58:06.691874 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2026-02-08 05:58:06.691885 | orchestrator | Sunday 08 February 2026 05:57:58 +0000 (0:00:00.145) 0:06:56.698 ******* 2026-02-08 05:58:06.691896 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-08 05:58:06.691920 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => 
(item=testbed-node-1) 2026-02-08 05:58:06.691931 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2) 2026-02-08 05:58:06.691943 | orchestrator | 2026-02-08 05:58:06.691954 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2026-02-08 05:58:06.691965 | orchestrator | Sunday 08 February 2026 05:57:59 +0000 (0:00:01.082) 0:06:57.780 ******* 2026-02-08 05:58:06.692004 | orchestrator | ok: [testbed-node-2] 2026-02-08 05:58:06.692016 | orchestrator | 2026-02-08 05:58:06.692030 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-02-08 05:58:06.692057 | orchestrator | Sunday 08 February 2026 05:57:59 +0000 (0:00:00.253) 0:06:58.033 ******* 2026-02-08 05:58:06.692071 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-08 05:58:06.692084 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-08 05:58:06.692097 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2) 2026-02-08 05:58:06.692108 | orchestrator | 2026-02-08 05:58:06.692119 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-02-08 05:58:06.692130 | orchestrator | Sunday 08 February 2026 05:58:02 +0000 (0:00:02.468) 0:07:00.502 ******* 2026-02-08 05:58:06.692141 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2026-02-08 05:58:06.692153 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2026-02-08 05:58:06.692164 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2026-02-08 05:58:06.692175 | orchestrator | skipping: [testbed-node-2] 2026-02-08 05:58:06.692186 | orchestrator | 2026-02-08 05:58:06.692197 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-02-08 05:58:06.692209 | orchestrator | Sunday 08 February 2026 05:58:03 +0000 (0:00:00.762) 
0:07:01.264 ******* 2026-02-08 05:58:06.692221 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-02-08 05:58:06.692236 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-02-08 05:58:06.692247 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-02-08 05:58:06.692259 | orchestrator | skipping: [testbed-node-2] 2026-02-08 05:58:06.692270 | orchestrator | 2026-02-08 05:58:06.692281 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-02-08 05:58:06.692292 | orchestrator | Sunday 08 February 2026 05:58:04 +0000 (0:00:01.269) 0:07:02.534 ******* 2026-02-08 05:58:06.692306 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-08 05:58:06.692339 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not 
containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-08 05:58:06.692351 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-08 05:58:06.692363 | orchestrator | skipping: [testbed-node-2] 2026-02-08 05:58:06.692430 | orchestrator | 2026-02-08 05:58:06.692443 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-02-08 05:58:06.692455 | orchestrator | Sunday 08 February 2026 05:58:04 +0000 (0:00:00.167) 0:07:02.701 ******* 2026-02-08 05:58:06.692467 | orchestrator | ok: [testbed-node-2] => (item={'changed': False, 'stdout': 'd0204e5b0336', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-02-08 05:58:00.533528', 'end': '2026-02-08 05:58:00.581506', 'delta': '0:00:00.047978', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['d0204e5b0336'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2026-02-08 05:58:06.692487 | orchestrator | ok: [testbed-node-2] => (item={'changed': False, 'stdout': 'c9ff2bec9773', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-02-08 05:58:01.684213', 'end': '2026-02-08 
05:58:01.732769', 'delta': '0:00:00.048556', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['c9ff2bec9773'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2026-02-08 05:58:06.692500 | orchestrator | ok: [testbed-node-2] => (item={'changed': False, 'stdout': '83b6b87b68f7', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-02-08 05:58:02.258730', 'end': '2026-02-08 05:58:02.308837', 'delta': '0:00:00.050107', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['83b6b87b68f7'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2026-02-08 05:58:06.692512 | orchestrator | 2026-02-08 05:58:06.692524 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-02-08 05:58:06.692535 | orchestrator | Sunday 08 February 2026 05:58:04 +0000 (0:00:00.207) 0:07:02.909 ******* 2026-02-08 05:58:06.692546 | orchestrator | ok: [testbed-node-2] 2026-02-08 05:58:06.692557 | orchestrator | 2026-02-08 05:58:06.692568 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-02-08 05:58:06.692606 | orchestrator | Sunday 08 February 2026 05:58:05 +0000 (0:00:00.284) 0:07:03.193 ******* 2026-02-08 05:58:06.692626 | orchestrator | skipping: 
[testbed-node-2] 2026-02-08 05:58:06.692644 | orchestrator | 2026-02-08 05:58:06.692663 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2026-02-08 05:58:06.692682 | orchestrator | Sunday 08 February 2026 05:58:05 +0000 (0:00:00.272) 0:07:03.466 ******* 2026-02-08 05:58:06.692699 | orchestrator | ok: [testbed-node-2] 2026-02-08 05:58:06.692715 | orchestrator | 2026-02-08 05:58:06.692726 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2026-02-08 05:58:06.692738 | orchestrator | Sunday 08 February 2026 05:58:05 +0000 (0:00:00.144) 0:07:03.611 ******* 2026-02-08 05:58:06.692749 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] 2026-02-08 05:58:06.692760 | orchestrator | 2026-02-08 05:58:06.692771 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-02-08 05:58:06.692815 | orchestrator | Sunday 08 February 2026 05:58:06 +0000 (0:00:00.955) 0:07:04.566 ******* 2026-02-08 05:58:06.692836 | orchestrator | ok: [testbed-node-2] 2026-02-08 05:58:06.692848 | orchestrator | 2026-02-08 05:58:06.692859 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-02-08 05:58:06.692879 | orchestrator | Sunday 08 February 2026 05:58:06 +0000 (0:00:00.163) 0:07:04.730 ******* 2026-02-08 05:58:08.847913 | orchestrator | skipping: [testbed-node-2] 2026-02-08 05:58:08.848028 | orchestrator | 2026-02-08 05:58:08.848045 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-02-08 05:58:08.848057 | orchestrator | Sunday 08 February 2026 05:58:06 +0000 (0:00:00.133) 0:07:04.863 ******* 2026-02-08 05:58:08.848068 | orchestrator | skipping: [testbed-node-2] 2026-02-08 05:58:08.848078 | orchestrator | 2026-02-08 05:58:08.848088 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-02-08 
05:58:08.848098 | orchestrator | Sunday 08 February 2026 05:58:07 +0000 (0:00:00.237) 0:07:05.101 ******* 2026-02-08 05:58:08.848108 | orchestrator | skipping: [testbed-node-2] 2026-02-08 05:58:08.848118 | orchestrator | 2026-02-08 05:58:08.848128 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-02-08 05:58:08.848138 | orchestrator | Sunday 08 February 2026 05:58:07 +0000 (0:00:00.152) 0:07:05.253 ******* 2026-02-08 05:58:08.848147 | orchestrator | skipping: [testbed-node-2] 2026-02-08 05:58:08.848157 | orchestrator | 2026-02-08 05:58:08.848167 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-02-08 05:58:08.848177 | orchestrator | Sunday 08 February 2026 05:58:07 +0000 (0:00:00.160) 0:07:05.413 ******* 2026-02-08 05:58:08.848186 | orchestrator | skipping: [testbed-node-2] 2026-02-08 05:58:08.848196 | orchestrator | 2026-02-08 05:58:08.848206 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-02-08 05:58:08.848216 | orchestrator | Sunday 08 February 2026 05:58:07 +0000 (0:00:00.128) 0:07:05.542 ******* 2026-02-08 05:58:08.848225 | orchestrator | skipping: [testbed-node-2] 2026-02-08 05:58:08.848235 | orchestrator | 2026-02-08 05:58:08.848245 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-02-08 05:58:08.848254 | orchestrator | Sunday 08 February 2026 05:58:07 +0000 (0:00:00.433) 0:07:05.975 ******* 2026-02-08 05:58:08.848264 | orchestrator | skipping: [testbed-node-2] 2026-02-08 05:58:08.848274 | orchestrator | 2026-02-08 05:58:08.848284 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-02-08 05:58:08.848293 | orchestrator | Sunday 08 February 2026 05:58:08 +0000 (0:00:00.152) 0:07:06.127 ******* 2026-02-08 05:58:08.848303 | orchestrator | skipping: [testbed-node-2] 2026-02-08 05:58:08.848313 | 
orchestrator | 2026-02-08 05:58:08.848338 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-02-08 05:58:08.848349 | orchestrator | Sunday 08 February 2026 05:58:08 +0000 (0:00:00.126) 0:07:06.254 ******* 2026-02-08 05:58:08.848359 | orchestrator | skipping: [testbed-node-2] 2026-02-08 05:58:08.848369 | orchestrator | 2026-02-08 05:58:08.848378 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-02-08 05:58:08.848388 | orchestrator | Sunday 08 February 2026 05:58:08 +0000 (0:00:00.142) 0:07:06.397 ******* 2026-02-08 05:58:08.848400 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-08 05:58:08.848414 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-08 05:58:08.848447 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  
2026-02-08 05:58:08.848460 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-08-02-32-47-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})
2026-02-08 05:58:08.848472 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-08 05:58:08.848499 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-08 05:58:08.848510 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-08 05:58:08.848529 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7f0c6f27-797f-46da-82fd-067f06c1f72b', 'scsi-SQEMU_QEMU_HARDDISK_7f0c6f27-797f-46da-82fd-067f06c1f72b'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '7f0c6f27', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7f0c6f27-797f-46da-82fd-067f06c1f72b-part16', 'scsi-SQEMU_QEMU_HARDDISK_7f0c6f27-797f-46da-82fd-067f06c1f72b-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7f0c6f27-797f-46da-82fd-067f06c1f72b-part14', 'scsi-SQEMU_QEMU_HARDDISK_7f0c6f27-797f-46da-82fd-067f06c1f72b-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7f0c6f27-797f-46da-82fd-067f06c1f72b-part15', 'scsi-SQEMU_QEMU_HARDDISK_7f0c6f27-797f-46da-82fd-067f06c1f72b-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7f0c6f27-797f-46da-82fd-067f06c1f72b-part1', 'scsi-SQEMU_QEMU_HARDDISK_7f0c6f27-797f-46da-82fd-067f06c1f72b-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})
2026-02-08 05:58:08.848550 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-08 05:58:08.848560 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-08 05:58:08.848571 | orchestrator | skipping: [testbed-node-2]
2026-02-08 05:58:08.848638 | orchestrator |
2026-02-08 05:58:08.848652 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] ***
2026-02-08 05:58:08.848662 | orchestrator | Sunday 08 February 2026 05:58:08 +0000 (0:00:00.252) 0:07:06.649 *******
2026-02-08 05:58:08.848680 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-08 05:58:10.005236 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-08 05:58:10.005365 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-08 05:58:10.005392 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-08-02-32-47-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-08 05:58:10.005424 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-08 05:58:10.005435 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-08 05:58:10.005445 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-08 05:58:10.005483 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7f0c6f27-797f-46da-82fd-067f06c1f72b', 'scsi-SQEMU_QEMU_HARDDISK_7f0c6f27-797f-46da-82fd-067f06c1f72b'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '7f0c6f27', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7f0c6f27-797f-46da-82fd-067f06c1f72b-part16', 'scsi-SQEMU_QEMU_HARDDISK_7f0c6f27-797f-46da-82fd-067f06c1f72b-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7f0c6f27-797f-46da-82fd-067f06c1f72b-part14', 'scsi-SQEMU_QEMU_HARDDISK_7f0c6f27-797f-46da-82fd-067f06c1f72b-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7f0c6f27-797f-46da-82fd-067f06c1f72b-part15', 'scsi-SQEMU_QEMU_HARDDISK_7f0c6f27-797f-46da-82fd-067f06c1f72b-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7f0c6f27-797f-46da-82fd-067f06c1f72b-part1', 'scsi-SQEMU_QEMU_HARDDISK_7f0c6f27-797f-46da-82fd-067f06c1f72b-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-08 05:58:10.005505 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-08 05:58:10.005516 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-08 05:58:10.005528 | orchestrator | skipping: [testbed-node-2]
2026-02-08
05:58:10.005540 | orchestrator |
2026-02-08 05:58:10.005551 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ******************************
2026-02-08 05:58:10.005562 | orchestrator | Sunday 08 February 2026 05:58:08 +0000 (0:00:00.540) 0:07:06.891 *******
2026-02-08 05:58:10.005572 | orchestrator | ok: [testbed-node-2]
2026-02-08 05:58:10.005635 | orchestrator |
2026-02-08 05:58:10.005647 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] ***************
2026-02-08 05:58:10.005657 | orchestrator | Sunday 08 February 2026 05:58:09 +0000 (0:00:00.145) 0:07:07.432 *******
2026-02-08 05:58:10.005667 | orchestrator | ok: [testbed-node-2]
2026-02-08 05:58:10.005676 | orchestrator |
2026-02-08 05:58:10.005686 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-02-08 05:58:10.005696 | orchestrator | Sunday 08 February 2026 05:58:09 +0000 (0:00:00.467) 0:07:07.577 *******
2026-02-08 05:58:10.005705 | orchestrator | ok: [testbed-node-2]
2026-02-08 05:58:10.005715 | orchestrator |
2026-02-08 05:58:10.005724 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-02-08 05:58:10.005742 | orchestrator | Sunday 08 February 2026 05:58:09 +0000 (0:00:00.149) 0:07:08.044 *******
2026-02-08 05:58:26.689574 | orchestrator | skipping: [testbed-node-2]
2026-02-08 05:58:26.689737 | orchestrator |
2026-02-08 05:58:26.689757 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-02-08 05:58:26.689770 | orchestrator | Sunday 08 February 2026 05:58:10 +0000 (0:00:00.267) 0:07:08.194 *******
2026-02-08 05:58:26.689781 | orchestrator | skipping: [testbed-node-2]
2026-02-08 05:58:26.689793 | orchestrator |
2026-02-08 05:58:26.689805 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-02-08 05:58:26.689816 | orchestrator | Sunday 08 February 2026 05:58:10 +0000 (0:00:00.169) 0:07:08.462 *******
2026-02-08 05:58:26.689827 | orchestrator | skipping: [testbed-node-2]
2026-02-08 05:58:26.689838 | orchestrator |
2026-02-08 05:58:26.689849 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] *************************
2026-02-08 05:58:26.689860 | orchestrator | Sunday 08 February 2026 05:58:10 +0000 (0:00:00.169) 0:07:08.632 *******
2026-02-08 05:58:26.689895 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-0)
2026-02-08 05:58:26.689908 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-1)
2026-02-08 05:58:26.689919 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2)
2026-02-08 05:58:26.689929 | orchestrator |
2026-02-08 05:58:26.689941 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] *************************
2026-02-08 05:58:26.689965 | orchestrator | Sunday 08 February 2026 05:58:11 +0000 (0:00:01.080) 0:07:09.713 *******
2026-02-08 05:58:26.689977 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)
2026-02-08 05:58:26.689988 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)
2026-02-08 05:58:26.689999 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)
2026-02-08 05:58:26.690010 | orchestrator | skipping: [testbed-node-2]
2026-02-08 05:58:26.690074 | orchestrator |
2026-02-08 05:58:26.690086 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] ***********************
2026-02-08 05:58:26.690097 | orchestrator | Sunday 08 February 2026 05:58:12 +0000 (0:00:00.465) 0:07:10.178 *******
2026-02-08 05:58:26.690108 | orchestrator | skipping: [testbed-node-2]
2026-02-08 05:58:26.690121 | orchestrator |
2026-02-08 05:58:26.690135 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] **************************************
2026-02-08 05:58:26.690148 | orchestrator | Sunday 08 February 2026 05:58:12 +0000 (0:00:00.144) 0:07:10.322 *******
2026-02-08 05:58:26.690161 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-02-08 05:58:26.690175 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-08 05:58:26.690189 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2)
2026-02-08 05:58:26.690202 | orchestrator | ok: [testbed-node-2 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
2026-02-08 05:58:26.690215 | orchestrator | ok: [testbed-node-2 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-02-08 05:58:26.690228 | orchestrator | ok: [testbed-node-2 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-02-08 05:58:26.690241 | orchestrator | ok: [testbed-node-2 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-02-08 05:58:26.690254 | orchestrator |
2026-02-08 05:58:26.690266 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ********************************
2026-02-08 05:58:26.690280 | orchestrator | Sunday 08 February 2026 05:58:13 +0000 (0:00:00.813) 0:07:11.135 *******
2026-02-08 05:58:26.690293 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-02-08 05:58:26.690306 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-08 05:58:26.690323 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2)
2026-02-08 05:58:26.690342 | orchestrator | ok: [testbed-node-2 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
2026-02-08 05:58:26.690360 | orchestrator | ok: [testbed-node-2 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-02-08 05:58:26.690379 | orchestrator | ok: [testbed-node-2 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-02-08 05:58:26.690397 | orchestrator | ok: [testbed-node-2 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-02-08 05:58:26.690410 | orchestrator |
2026-02-08 05:58:26.690421 | orchestrator | TASK [Get ceph cluster status] *************************************************
2026-02-08 05:58:26.690431 | orchestrator | Sunday 08 February 2026 05:58:14 +0000 (0:00:01.641) 0:07:12.777 *******
2026-02-08 05:58:26.690442 | orchestrator | skipping: [testbed-node-2]
2026-02-08 05:58:26.690453 | orchestrator |
2026-02-08 05:58:26.690465 | orchestrator | TASK [Display ceph health detail] **********************************************
2026-02-08 05:58:26.690475 | orchestrator | Sunday 08 February 2026 05:58:15 +0000 (0:00:00.286) 0:07:13.064 *******
2026-02-08 05:58:26.690486 | orchestrator | skipping: [testbed-node-2]
2026-02-08 05:58:26.690497 | orchestrator |
2026-02-08 05:58:26.690508 | orchestrator | TASK [Fail if cluster isn't in an acceptable state] ****************************
2026-02-08 05:58:26.690519 | orchestrator | Sunday 08 February 2026 05:58:15 +0000 (0:00:00.245) 0:07:13.310 *******
2026-02-08 05:58:26.690543 | orchestrator | skipping: [testbed-node-2]
2026-02-08 05:58:26.690555 | orchestrator |
2026-02-08 05:58:26.690566 | orchestrator | TASK [Get the ceph quorum status] **********************************************
2026-02-08 05:58:26.690576 | orchestrator | Sunday 08 February 2026 05:58:15 +0000 (0:00:00.158) 0:07:13.468 *******
2026-02-08 05:58:26.690611 | orchestrator | skipping: [testbed-node-2]
2026-02-08 05:58:26.690624 | orchestrator |
2026-02-08 05:58:26.690635 | orchestrator | TASK [Fail if the cluster quorum isn't in an acceptable state] *****************
2026-02-08 05:58:26.690646 | orchestrator | Sunday 08 February 2026 05:58:15 +0000 (0:00:00.227) 0:07:13.696 *******
2026-02-08 05:58:26.690656 | orchestrator | skipping: [testbed-node-2]
2026-02-08 05:58:26.690667 | orchestrator |
2026-02-08 05:58:26.690678 | orchestrator | TASK [Ensure /var/lib/ceph/bootstrap-rbd-mirror is present] ********************
2026-02-08 05:58:26.690689 | orchestrator | Sunday 08 February 2026 05:58:15 +0000 (0:00:00.151) 0:07:13.848 *******
2026-02-08 05:58:26.690719 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)
2026-02-08 05:58:26.690730 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)
2026-02-08 05:58:26.690741 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)
2026-02-08 05:58:26.690752 | orchestrator | skipping: [testbed-node-2]
2026-02-08 05:58:26.690763 | orchestrator |
2026-02-08 05:58:26.690774 | orchestrator | TASK [Create potentially missing keys (rbd and rbd-mirror)] ********************
2026-02-08 05:58:26.690785 | orchestrator | Sunday 08 February 2026 05:58:16 +0000 (0:00:00.414) 0:07:14.263 *******
2026-02-08 05:58:26.690796 | orchestrator | skipping: [testbed-node-2] => (item=['bootstrap-rbd', 'testbed-node-0'])
2026-02-08 05:58:26.690806 | orchestrator | skipping: [testbed-node-2] => (item=['bootstrap-rbd', 'testbed-node-1'])
2026-02-08 05:58:26.690817 | orchestrator | skipping: [testbed-node-2] => (item=['bootstrap-rbd', 'testbed-node-2'])
2026-02-08 05:58:26.690828 | orchestrator | skipping: [testbed-node-2] => (item=['bootstrap-rbd-mirror', 'testbed-node-0'])
2026-02-08 05:58:26.690839 | orchestrator | skipping: [testbed-node-2] => (item=['bootstrap-rbd-mirror', 'testbed-node-1'])
2026-02-08 05:58:26.690849 | orchestrator | skipping: [testbed-node-2] => (item=['bootstrap-rbd-mirror', 'testbed-node-2'])
2026-02-08 05:58:26.690860 | orchestrator | skipping: [testbed-node-2]
2026-02-08 05:58:26.690871 | orchestrator |
2026-02-08 05:58:26.690888 | orchestrator | TASK [Stop ceph mon] ***********************************************************
2026-02-08 05:58:26.690900 | orchestrator | Sunday 08 February 2026 05:58:17 +0000 (0:00:00.989) 0:07:15.253 *******
2026-02-08 05:58:26.690911 | orchestrator | changed: [testbed-node-2] => (item=testbed-node-2)
2026-02-08 05:58:26.690922 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2)
2026-02-08 05:58:26.690932 | orchestrator |
2026-02-08 05:58:26.690943 | orchestrator | TASK [Mask the mgr service] ****************************************************
2026-02-08 05:58:26.690954 | orchestrator | Sunday 08 February 2026 05:58:20 +0000 (0:00:03.534) 0:07:18.788 *******
2026-02-08 05:58:26.690964 | orchestrator | changed: [testbed-node-2]
2026-02-08 05:58:26.690975 | orchestrator |
2026-02-08 05:58:26.690986 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-02-08 05:58:26.691005 | orchestrator | Sunday 08 February 2026 05:58:22 +0000 (0:00:01.843) 0:07:20.631 *******
2026-02-08 05:58:26.691022 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-2
2026-02-08 05:58:26.691039 | orchestrator |
2026-02-08 05:58:26.691055 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-02-08 05:58:26.691071 | orchestrator | Sunday 08 February 2026 05:58:22 +0000 (0:00:00.202) 0:07:20.833 *******
2026-02-08 05:58:26.691087 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-2
2026-02-08 05:58:26.691103 | orchestrator |
2026-02-08 05:58:26.691119 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-02-08 05:58:26.691135 | orchestrator | Sunday 08 February 2026 05:58:23 +0000 (0:00:00.219) 0:07:21.053 *******
2026-02-08 05:58:26.691166 | orchestrator | ok: [testbed-node-2]
2026-02-08 05:58:26.691185 | orchestrator |
2026-02-08 05:58:26.691204 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-02-08 05:58:26.691222 | orchestrator | Sunday 08 February 2026 05:58:23 +0000 (0:00:00.533) 0:07:21.587 *******
2026-02-08 05:58:26.691240 | orchestrator | skipping: [testbed-node-2]
2026-02-08 05:58:26.691254 | orchestrator |
2026-02-08 05:58:26.691264 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-02-08 05:58:26.691275 | orchestrator | Sunday 08 February 2026 05:58:23 +0000 (0:00:00.165) 0:07:21.752 *******
2026-02-08 05:58:26.691286 | orchestrator | skipping: [testbed-node-2]
2026-02-08 05:58:26.691296 | orchestrator |
2026-02-08 05:58:26.691307 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-02-08 05:58:26.691318 | orchestrator | Sunday 08 February 2026 05:58:23 +0000 (0:00:00.160) 0:07:21.913 *******
2026-02-08 05:58:26.691328 | orchestrator | skipping: [testbed-node-2]
2026-02-08 05:58:26.691339 | orchestrator |
2026-02-08 05:58:26.691350 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-02-08 05:58:26.691361 | orchestrator | Sunday 08 February 2026 05:58:24 +0000 (0:00:00.162) 0:07:22.075 *******
2026-02-08 05:58:26.691372 | orchestrator | ok: [testbed-node-2]
2026-02-08 05:58:26.691382 | orchestrator |
2026-02-08 05:58:26.691393 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-02-08 05:58:26.691404 | orchestrator | Sunday 08 February 2026 05:58:24 +0000 (0:00:00.577) 0:07:22.653 *******
2026-02-08 05:58:26.691414 | orchestrator | skipping: [testbed-node-2]
2026-02-08 05:58:26.691425 | orchestrator |
2026-02-08 05:58:26.691436 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-02-08 05:58:26.691446 | orchestrator | Sunday 08 February 2026 05:58:24 +0000 (0:00:00.139) 0:07:22.792 *******
2026-02-08 05:58:26.691457 | orchestrator | skipping: [testbed-node-2]
2026-02-08 05:58:26.691468 | orchestrator |
2026-02-08 05:58:26.691479 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-02-08 05:58:26.691489 | orchestrator | Sunday 08 February 2026 05:58:24 +0000 (0:00:00.137) 0:07:22.930 *******
2026-02-08 05:58:26.691500 | orchestrator | ok: [testbed-node-2]
2026-02-08 05:58:26.691511 | orchestrator |
2026-02-08 05:58:26.691522 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-02-08 05:58:26.691532 | orchestrator | Sunday 08 February 2026 05:58:25 +0000 (0:00:00.567) 0:07:23.498 *******
2026-02-08 05:58:26.691543 | orchestrator | ok: [testbed-node-2]
2026-02-08 05:58:26.691554 | orchestrator |
2026-02-08 05:58:26.691565 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-02-08 05:58:26.691576 | orchestrator | Sunday 08 February 2026 05:58:26 +0000 (0:00:00.594) 0:07:24.092 *******
2026-02-08 05:58:26.691637 | orchestrator | skipping: [testbed-node-2]
2026-02-08 05:58:26.691659 | orchestrator |
2026-02-08 05:58:26.691676 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-02-08 05:58:26.691694 | orchestrator | Sunday 08 February 2026 05:58:26 +0000 (0:00:00.422) 0:07:24.515 *******
2026-02-08 05:58:26.691726 | orchestrator | ok: [testbed-node-2]
2026-02-08 05:58:37.338459 | orchestrator |
2026-02-08 05:58:37.338567 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-02-08 05:58:37.338584 | orchestrator | Sunday 08 February 2026 05:58:26 +0000 (0:00:00.210) 0:07:24.726 *******
2026-02-08 05:58:37.338620 | orchestrator | skipping: [testbed-node-2]
2026-02-08 05:58:37.338633 | orchestrator |
2026-02-08 05:58:37.338645 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-02-08 05:58:37.338656 | orchestrator | Sunday 08 February 2026 05:58:26 +0000 (0:00:00.155) 0:07:24.881 *******
2026-02-08 05:58:37.338667 | orchestrator | skipping: [testbed-node-2]
2026-02-08 05:58:37.338678 | orchestrator |
2026-02-08 05:58:37.338689 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-02-08 05:58:37.338700 | orchestrator | Sunday 08 February 2026 05:58:26 +0000 (0:00:00.141) 0:07:25.023 *******
2026-02-08 05:58:37.338745 | orchestrator | skipping: [testbed-node-2]
2026-02-08 05:58:37.338757 | orchestrator |
2026-02-08 05:58:37.338767 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-02-08 05:58:37.338778 | orchestrator | Sunday 08 February 2026 05:58:27 +0000 (0:00:00.147) 0:07:25.170 *******
2026-02-08 05:58:37.338789 | orchestrator | skipping: [testbed-node-2]
2026-02-08 05:58:37.338799 | orchestrator |
2026-02-08 05:58:37.338810 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-02-08 05:58:37.338833 | orchestrator | Sunday 08 February 2026 05:58:27 +0000 (0:00:00.139) 0:07:25.310 *******
2026-02-08 05:58:37.338844 | orchestrator | skipping: [testbed-node-2]
2026-02-08 05:58:37.338855 | orchestrator |
2026-02-08 05:58:37.338866 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-02-08 05:58:37.338877 | orchestrator | Sunday 08 February 2026 05:58:27 +0000 (0:00:00.167) 0:07:25.477 *******
2026-02-08 05:58:37.338889 | orchestrator | ok: [testbed-node-2]
2026-02-08 05:58:37.338901 | orchestrator |
2026-02-08 05:58:37.338911 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-02-08 05:58:37.338922 | orchestrator | Sunday 08 February 2026 05:58:27 +0000 (0:00:00.174) 0:07:25.651 *******
2026-02-08 05:58:37.338933 | orchestrator | ok: [testbed-node-2]
2026-02-08 05:58:37.338944 | orchestrator |
2026-02-08 05:58:37.338954 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-02-08 05:58:37.338965 | orchestrator | Sunday 08 February 2026 05:58:27 +0000 (0:00:00.178) 0:07:25.830 *******
2026-02-08 05:58:37.339000 | orchestrator | ok: [testbed-node-2]
2026-02-08 05:58:37.339014 | orchestrator |
2026-02-08 05:58:37.339027 | orchestrator | TASK [ceph-common : Include configure_repository.yml] **************************
2026-02-08 05:58:37.339040 | orchestrator | Sunday 08 February 2026 05:58:28 +0000 (0:00:00.271) 0:07:26.101 *******
2026-02-08 05:58:37.339052 | orchestrator | skipping: [testbed-node-2]
2026-02-08 05:58:37.339065 | orchestrator |
2026-02-08 05:58:37.339077 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] **************
2026-02-08 05:58:37.339091 | orchestrator | Sunday 08 February 2026 05:58:28 +0000 (0:00:00.131) 0:07:26.233 *******
2026-02-08 05:58:37.339103 | orchestrator | skipping: [testbed-node-2]
2026-02-08 05:58:37.339117 | orchestrator |
2026-02-08 05:58:37.339129 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] ****************
2026-02-08 05:58:37.339142 | orchestrator | Sunday 08 February 2026 05:58:28 +0000 (0:00:00.172) 0:07:26.405 *******
2026-02-08 05:58:37.339154 | orchestrator | skipping: [testbed-node-2]
2026-02-08 05:58:37.339167 | orchestrator |
2026-02-08 05:58:37.339180 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ********************
2026-02-08 05:58:37.339192 | orchestrator | Sunday 08 February 2026 05:58:28 +0000 (0:00:00.480) 0:07:26.886 *******
2026-02-08 05:58:37.339205 | orchestrator | skipping: [testbed-node-2]
2026-02-08 05:58:37.339217 | orchestrator |
2026-02-08 05:58:37.339230 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] ***************
2026-02-08 05:58:37.339242 | orchestrator | Sunday 08 February 2026 05:58:28 +0000 (0:00:00.146) 0:07:27.032 *******
2026-02-08 05:58:37.339255 | orchestrator | skipping: [testbed-node-2]
2026-02-08 05:58:37.339268 | orchestrator |
2026-02-08 05:58:37.339280 | orchestrator | TASK [ceph-common : Get ceph version] ******************************************
2026-02-08 05:58:37.339293 | orchestrator | Sunday 08 February 2026 05:58:29 +0000 (0:00:00.171) 0:07:27.204 *******
2026-02-08 05:58:37.339305 | orchestrator | skipping: [testbed-node-2]
2026-02-08 05:58:37.339317 | orchestrator |
2026-02-08 05:58:37.339329 | orchestrator | TASK [ceph-common : Set_fact ceph_version] *************************************
2026-02-08 05:58:37.339342 | orchestrator | Sunday 08 February 2026 05:58:29 +0000 (0:00:00.121) 0:07:27.326 *******
2026-02-08 05:58:37.339355 | orchestrator | skipping: [testbed-node-2]
2026-02-08 05:58:37.339367 | orchestrator |
2026-02-08 05:58:37.339378 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] ***
2026-02-08 05:58:37.339389 | orchestrator | Sunday 08 February 2026 05:58:29 +0000 (0:00:00.141) 0:07:27.467 *******
2026-02-08 05:58:37.339408 | orchestrator | skipping: [testbed-node-2]
2026-02-08 05:58:37.339419 | orchestrator |
2026-02-08 05:58:37.339430 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] *************************
2026-02-08 05:58:37.339440 | orchestrator | Sunday 08 February 2026 05:58:29 +0000 (0:00:00.148) 0:07:27.615 *******
2026-02-08 05:58:37.339451 | orchestrator | skipping: [testbed-node-2]
2026-02-08 05:58:37.339461 | orchestrator |
2026-02-08 05:58:37.339472 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************
2026-02-08 05:58:37.339483 | orchestrator | Sunday 08 February 2026 05:58:29 +0000 (0:00:00.143) 0:07:27.759 *******
2026-02-08 05:58:37.339494 | orchestrator | skipping: [testbed-node-2]
2026-02-08 05:58:37.339504 | orchestrator |
2026-02-08 05:58:37.339515 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ********************
2026-02-08 05:58:37.339526 | orchestrator | Sunday 08 February 2026 05:58:29 +0000 (0:00:00.121) 0:07:27.880 *******
2026-02-08 05:58:37.339537 | orchestrator | skipping: [testbed-node-2]
2026-02-08 05:58:37.339548 | orchestrator |
2026-02-08 05:58:37.339559 | orchestrator | TASK [ceph-common : Include selinux.yml] ***************************************
2026-02-08 05:58:37.339569 | orchestrator | Sunday 08 February 2026 05:58:29 +0000 (0:00:00.133) 0:07:28.014 *******
2026-02-08 05:58:37.339580 | orchestrator | skipping: [testbed-node-2]
2026-02-08 05:58:37.339651 | orchestrator |
2026-02-08 05:58:37.339682 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] ***************
2026-02-08 05:58:37.339694 | orchestrator | Sunday 08 February 2026 05:58:30 +0000 (0:00:00.192) 0:07:28.206 *******
2026-02-08 05:58:37.339704 | orchestrator | ok: [testbed-node-2]
2026-02-08 05:58:37.339713 | orchestrator |
2026-02-08 05:58:37.339723 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ******************************
2026-02-08 05:58:37.339732 | orchestrator | Sunday 08 February 2026 05:58:31 +0000 (0:00:00.964) 0:07:29.170 *******
2026-02-08 05:58:37.339742 | orchestrator | ok: [testbed-node-2]
2026-02-08 05:58:37.339751 | orchestrator |
2026-02-08 05:58:37.339761 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] ***********************
2026-02-08 05:58:37.339771 | orchestrator | Sunday 08 February 2026 05:58:32 +0000 (0:00:01.393) 0:07:30.564 *******
2026-02-08 05:58:37.339780 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-2
2026-02-08 05:58:37.339791 | orchestrator |
2026-02-08 05:58:37.339800 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************
2026-02-08 05:58:37.339809 | orchestrator | Sunday 08 February 2026 05:58:33 +0000 (0:00:00.514) 0:07:31.079 *******
2026-02-08 05:58:37.339819 | orchestrator | skipping: [testbed-node-2]
2026-02-08 05:58:37.339828 | orchestrator |
2026-02-08 05:58:37.339838 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] ****************
2026-02-08 05:58:37.339853 | orchestrator | Sunday 08 February 2026 05:58:33 +0000 (0:00:00.135) 0:07:31.214 *******
2026-02-08 05:58:37.339862 | orchestrator | skipping: [testbed-node-2]
2026-02-08 05:58:37.339872 | orchestrator |
2026-02-08 05:58:37.339881 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] **************************
2026-02-08 05:58:37.339891 | orchestrator | Sunday 08 February 2026 05:58:33 +0000 (0:00:00.138) 0:07:31.353 *******
2026-02-08 05:58:37.339900 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-02-08 05:58:37.339910 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-02-08 05:58:37.339920 | orchestrator |
2026-02-08 05:58:37.339929 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ********************
2026-02-08 05:58:37.339939 | orchestrator | Sunday 08 February 2026 05:58:34 +0000 (0:00:00.894) 0:07:32.247 *******
2026-02-08 05:58:37.339948 | orchestrator | ok: [testbed-node-2]
2026-02-08 05:58:37.339958 | orchestrator |
2026-02-08 05:58:37.339967 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************
2026-02-08 05:58:37.339977 | orchestrator | Sunday 08 February 2026 05:58:34 +0000 (0:00:00.461) 0:07:32.709 *******
2026-02-08 05:58:37.339994 | orchestrator | skipping: [testbed-node-2]
2026-02-08 05:58:37.340005 | orchestrator |
2026-02-08 05:58:37.340014 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ********************
2026-02-08 05:58:37.340023 | orchestrator | Sunday 08 February 2026 05:58:34 +0000 (0:00:00.160) 0:07:32.870 *******
2026-02-08 05:58:37.340033 | orchestrator | skipping: [testbed-node-2]
2026-02-08 05:58:37.340042 | orchestrator |
2026-02-08 05:58:37.340052 | orchestrator | TASK [ceph-container-common : Include registry.yml] ****************************
2026-02-08 05:58:37.340061 | orchestrator | Sunday 08 February 2026 05:58:34 +0000 (0:00:00.125) 0:07:32.996 *******
2026-02-08 05:58:37.340071 | orchestrator | skipping: [testbed-node-2]
2026-02-08 05:58:37.340080 | orchestrator |
2026-02-08 05:58:37.340089 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] *************************
2026-02-08 05:58:37.340099 | orchestrator | Sunday 08 February 2026 05:58:35 +0000 (0:00:00.122) 0:07:33.118 *******
2026-02-08 05:58:37.340108 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-2
2026-02-08 05:58:37.340118 | orchestrator |
2026-02-08 05:58:37.340127 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ********************
2026-02-08 05:58:37.340136 | orchestrator | Sunday 08 February 2026 05:58:35 +0000 (0:00:00.210) 0:07:33.328 *******
2026-02-08 05:58:37.340146 | orchestrator | ok: [testbed-node-2]
2026-02-08 05:58:37.340155 | orchestrator |
2026-02-08 05:58:37.340165 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] ***
2026-02-08 05:58:37.340174 | orchestrator | Sunday 08 February 2026 05:58:36 +0000 (0:00:00.763) 0:07:34.091 *******
2026-02-08 05:58:37.340184 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-02-08 05:58:37.340193 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/prometheus:v2.7.2)
2026-02-08 05:58:37.340203 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/grafana/grafana:6.7.4)
2026-02-08 05:58:37.340212 | orchestrator | skipping: [testbed-node-2]
2026-02-08 05:58:37.340222 | orchestrator |
2026-02-08 05:58:37.340231 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] ***********
2026-02-08 05:58:37.340241 | orchestrator | Sunday 08 February 2026 05:58:36 +0000 (0:00:00.186) 0:07:34.277 *******
2026-02-08 05:58:37.340250 | orchestrator | skipping: [testbed-node-2]
2026-02-08 05:58:37.340260 | orchestrator |
2026-02-08 05:58:37.340269 | orchestrator | TASK [ceph-container-common : Export local
ceph dev image] ********************* 2026-02-08 05:58:37.340279 | orchestrator | Sunday 08 February 2026 05:58:36 +0000 (0:00:00.414) 0:07:34.692 ******* 2026-02-08 05:58:37.340288 | orchestrator | skipping: [testbed-node-2] 2026-02-08 05:58:37.340298 | orchestrator | 2026-02-08 05:58:37.340307 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************ 2026-02-08 05:58:37.340316 | orchestrator | Sunday 08 February 2026 05:58:36 +0000 (0:00:00.184) 0:07:34.877 ******* 2026-02-08 05:58:37.340326 | orchestrator | skipping: [testbed-node-2] 2026-02-08 05:58:37.340335 | orchestrator | 2026-02-08 05:58:37.340345 | orchestrator | TASK [ceph-container-common : Load ceph dev image] ***************************** 2026-02-08 05:58:37.340354 | orchestrator | Sunday 08 February 2026 05:58:36 +0000 (0:00:00.158) 0:07:35.035 ******* 2026-02-08 05:58:37.340364 | orchestrator | skipping: [testbed-node-2] 2026-02-08 05:58:37.340373 | orchestrator | 2026-02-08 05:58:37.340383 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ****************** 2026-02-08 05:58:37.340392 | orchestrator | Sunday 08 February 2026 05:58:37 +0000 (0:00:00.161) 0:07:35.197 ******* 2026-02-08 05:58:37.340402 | orchestrator | skipping: [testbed-node-2] 2026-02-08 05:58:37.340411 | orchestrator | 2026-02-08 05:58:37.340427 | orchestrator | TASK [ceph-container-common : Get ceph version] ******************************** 2026-02-08 05:58:50.878476 | orchestrator | Sunday 08 February 2026 05:58:37 +0000 (0:00:00.177) 0:07:35.375 ******* 2026-02-08 05:58:50.878578 | orchestrator | ok: [testbed-node-2] 2026-02-08 05:58:50.878594 | orchestrator | 2026-02-08 05:58:50.878647 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] *** 2026-02-08 05:58:50.878687 | orchestrator | Sunday 08 February 2026 05:58:38 +0000 (0:00:01.547) 0:07:36.922 ******* 2026-02-08 05:58:50.878698 | orchestrator | ok: 
[testbed-node-2] 2026-02-08 05:58:50.878708 | orchestrator | 2026-02-08 05:58:50.878718 | orchestrator | TASK [ceph-container-common : Include release.yml] ***************************** 2026-02-08 05:58:50.878728 | orchestrator | Sunday 08 February 2026 05:58:39 +0000 (0:00:00.158) 0:07:37.081 ******* 2026-02-08 05:58:50.878738 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-2 2026-02-08 05:58:50.878774 | orchestrator | 2026-02-08 05:58:50.878790 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] ********************* 2026-02-08 05:58:50.878806 | orchestrator | Sunday 08 February 2026 05:58:39 +0000 (0:00:00.219) 0:07:37.300 ******* 2026-02-08 05:58:50.878823 | orchestrator | skipping: [testbed-node-2] 2026-02-08 05:58:50.878842 | orchestrator | 2026-02-08 05:58:50.878859 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ******************** 2026-02-08 05:58:50.878888 | orchestrator | Sunday 08 February 2026 05:58:39 +0000 (0:00:00.154) 0:07:37.455 ******* 2026-02-08 05:58:50.878898 | orchestrator | skipping: [testbed-node-2] 2026-02-08 05:58:50.878908 | orchestrator | 2026-02-08 05:58:50.878918 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ****************** 2026-02-08 05:58:50.878927 | orchestrator | Sunday 08 February 2026 05:58:39 +0000 (0:00:00.154) 0:07:37.609 ******* 2026-02-08 05:58:50.878937 | orchestrator | skipping: [testbed-node-2] 2026-02-08 05:58:50.878947 | orchestrator | 2026-02-08 05:58:50.878956 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] ********************* 2026-02-08 05:58:50.878966 | orchestrator | Sunday 08 February 2026 05:58:39 +0000 (0:00:00.162) 0:07:37.771 ******* 2026-02-08 05:58:50.878976 | orchestrator | skipping: [testbed-node-2] 2026-02-08 05:58:50.878985 | orchestrator | 2026-02-08 05:58:50.878997 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release 
nautilus] ****************** 2026-02-08 05:58:50.879013 | orchestrator | Sunday 08 February 2026 05:58:39 +0000 (0:00:00.167) 0:07:37.938 ******* 2026-02-08 05:58:50.879031 | orchestrator | skipping: [testbed-node-2] 2026-02-08 05:58:50.879045 | orchestrator | 2026-02-08 05:58:50.879056 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] ******************* 2026-02-08 05:58:50.879068 | orchestrator | Sunday 08 February 2026 05:58:40 +0000 (0:00:00.502) 0:07:38.440 ******* 2026-02-08 05:58:50.879080 | orchestrator | skipping: [testbed-node-2] 2026-02-08 05:58:50.879091 | orchestrator | 2026-02-08 05:58:50.879102 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] ******************* 2026-02-08 05:58:50.879113 | orchestrator | Sunday 08 February 2026 05:58:40 +0000 (0:00:00.167) 0:07:38.608 ******* 2026-02-08 05:58:50.879124 | orchestrator | skipping: [testbed-node-2] 2026-02-08 05:58:50.879136 | orchestrator | 2026-02-08 05:58:50.879147 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ******************** 2026-02-08 05:58:50.879159 | orchestrator | Sunday 08 February 2026 05:58:40 +0000 (0:00:00.173) 0:07:38.781 ******* 2026-02-08 05:58:50.879170 | orchestrator | skipping: [testbed-node-2] 2026-02-08 05:58:50.879182 | orchestrator | 2026-02-08 05:58:50.879193 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] ********************** 2026-02-08 05:58:50.879205 | orchestrator | Sunday 08 February 2026 05:58:40 +0000 (0:00:00.155) 0:07:38.937 ******* 2026-02-08 05:58:50.879216 | orchestrator | ok: [testbed-node-2] 2026-02-08 05:58:50.879229 | orchestrator | 2026-02-08 05:58:50.879240 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] ********************** 2026-02-08 05:58:50.879252 | orchestrator | Sunday 08 February 2026 05:58:41 +0000 (0:00:00.261) 0:07:39.199 ******* 2026-02-08 05:58:50.879263 | orchestrator | included: 
/ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-2 2026-02-08 05:58:50.879275 | orchestrator | 2026-02-08 05:58:50.879286 | orchestrator | TASK [ceph-config : Create ceph initial directories] *************************** 2026-02-08 05:58:50.879297 | orchestrator | Sunday 08 February 2026 05:58:41 +0000 (0:00:00.212) 0:07:39.411 ******* 2026-02-08 05:58:50.879309 | orchestrator | ok: [testbed-node-2] => (item=/etc/ceph) 2026-02-08 05:58:50.879329 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/) 2026-02-08 05:58:50.879340 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/mon) 2026-02-08 05:58:50.879352 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/osd) 2026-02-08 05:58:50.879363 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/mds) 2026-02-08 05:58:50.879374 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/tmp) 2026-02-08 05:58:50.879384 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/crash) 2026-02-08 05:58:50.879394 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/radosgw) 2026-02-08 05:58:50.879404 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rgw) 2026-02-08 05:58:50.879413 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mgr) 2026-02-08 05:58:50.879423 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mds) 2026-02-08 05:58:50.879433 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-osd) 2026-02-08 05:58:50.879442 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd) 2026-02-08 05:58:50.879452 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-02-08 05:58:50.879462 | orchestrator | ok: [testbed-node-2] => (item=/var/run/ceph) 2026-02-08 05:58:50.879472 | orchestrator | ok: [testbed-node-2] => (item=/var/log/ceph) 2026-02-08 05:58:50.879481 | orchestrator | 2026-02-08 05:58:50.879491 | orchestrator | 
TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************ 2026-02-08 05:58:50.879500 | orchestrator | Sunday 08 February 2026 05:58:47 +0000 (0:00:05.703) 0:07:45.115 ******* 2026-02-08 05:58:50.879510 | orchestrator | skipping: [testbed-node-2] 2026-02-08 05:58:50.879520 | orchestrator | 2026-02-08 05:58:50.879530 | orchestrator | TASK [ceph-config : Reset num_osds] ******************************************** 2026-02-08 05:58:50.879555 | orchestrator | Sunday 08 February 2026 05:58:47 +0000 (0:00:00.144) 0:07:45.259 ******* 2026-02-08 05:58:50.879566 | orchestrator | skipping: [testbed-node-2] 2026-02-08 05:58:50.879575 | orchestrator | 2026-02-08 05:58:50.879585 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] ********************* 2026-02-08 05:58:50.879595 | orchestrator | Sunday 08 February 2026 05:58:47 +0000 (0:00:00.119) 0:07:45.379 ******* 2026-02-08 05:58:50.879644 | orchestrator | skipping: [testbed-node-2] 2026-02-08 05:58:50.879654 | orchestrator | 2026-02-08 05:58:50.879666 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ****************** 2026-02-08 05:58:50.879683 | orchestrator | Sunday 08 February 2026 05:58:47 +0000 (0:00:00.137) 0:07:45.516 ******* 2026-02-08 05:58:50.879700 | orchestrator | skipping: [testbed-node-2] 2026-02-08 05:58:50.879718 | orchestrator | 2026-02-08 05:58:50.879733 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] ********************************* 2026-02-08 05:58:50.879743 | orchestrator | Sunday 08 February 2026 05:58:47 +0000 (0:00:00.139) 0:07:45.656 ******* 2026-02-08 05:58:50.879753 | orchestrator | skipping: [testbed-node-2] 2026-02-08 05:58:50.879763 | orchestrator | 2026-02-08 05:58:50.879772 | orchestrator | TASK [ceph-config : Set_fact _devices] ***************************************** 2026-02-08 05:58:50.879782 | orchestrator | Sunday 08 February 2026 05:58:47 +0000 (0:00:00.143) 0:07:45.800 ******* 2026-02-08 
05:58:50.879798 | orchestrator | skipping: [testbed-node-2] 2026-02-08 05:58:50.879808 | orchestrator | 2026-02-08 05:58:50.879818 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2026-02-08 05:58:50.879828 | orchestrator | Sunday 08 February 2026 05:58:48 +0000 (0:00:00.449) 0:07:46.250 ******* 2026-02-08 05:58:50.879837 | orchestrator | skipping: [testbed-node-2] 2026-02-08 05:58:50.879847 | orchestrator | 2026-02-08 05:58:50.879857 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2026-02-08 05:58:50.879866 | orchestrator | Sunday 08 February 2026 05:58:48 +0000 (0:00:00.133) 0:07:46.384 ******* 2026-02-08 05:58:50.879876 | orchestrator | skipping: [testbed-node-2] 2026-02-08 05:58:50.879885 | orchestrator | 2026-02-08 05:58:50.879895 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2026-02-08 05:58:50.879912 | orchestrator | Sunday 08 February 2026 05:58:48 +0000 (0:00:00.133) 0:07:46.517 ******* 2026-02-08 05:58:50.879921 | orchestrator | skipping: [testbed-node-2] 2026-02-08 05:58:50.879931 | orchestrator | 2026-02-08 05:58:50.879941 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] *** 2026-02-08 05:58:50.879950 | orchestrator | Sunday 08 February 2026 05:58:48 +0000 (0:00:00.161) 0:07:46.679 ******* 2026-02-08 05:58:50.879959 | orchestrator | skipping: [testbed-node-2] 2026-02-08 05:58:50.879969 | orchestrator | 2026-02-08 05:58:50.879978 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] ********************* 2026-02-08 05:58:50.879988 | orchestrator | Sunday 08 February 2026 05:58:48 +0000 (0:00:00.127) 0:07:46.806 ******* 2026-02-08 05:58:50.879997 | orchestrator | skipping: [testbed-node-2] 2026-02-08 05:58:50.880007 | orchestrator | 2026-02-08 
05:58:50.880016 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] ******************************* 2026-02-08 05:58:50.880028 | orchestrator | Sunday 08 February 2026 05:58:48 +0000 (0:00:00.135) 0:07:46.941 ******* 2026-02-08 05:58:50.880045 | orchestrator | skipping: [testbed-node-2] 2026-02-08 05:58:50.880061 | orchestrator | 2026-02-08 05:58:50.880077 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] ************** 2026-02-08 05:58:50.880087 | orchestrator | Sunday 08 February 2026 05:58:49 +0000 (0:00:00.156) 0:07:47.098 ******* 2026-02-08 05:58:50.880097 | orchestrator | skipping: [testbed-node-2] 2026-02-08 05:58:50.880106 | orchestrator | 2026-02-08 05:58:50.880116 | orchestrator | TASK [ceph-config : Render rgw configs] **************************************** 2026-02-08 05:58:50.880125 | orchestrator | Sunday 08 February 2026 05:58:49 +0000 (0:00:00.254) 0:07:47.353 ******* 2026-02-08 05:58:50.880135 | orchestrator | skipping: [testbed-node-2] 2026-02-08 05:58:50.880144 | orchestrator | 2026-02-08 05:58:50.880154 | orchestrator | TASK [ceph-config : Set config to cluster] ************************************* 2026-02-08 05:58:50.880163 | orchestrator | Sunday 08 February 2026 05:58:49 +0000 (0:00:00.153) 0:07:47.507 ******* 2026-02-08 05:58:50.880173 | orchestrator | skipping: [testbed-node-2] 2026-02-08 05:58:50.880183 | orchestrator | 2026-02-08 05:58:50.880192 | orchestrator | TASK [ceph-config : Set rgw configs to file] *********************************** 2026-02-08 05:58:50.880201 | orchestrator | Sunday 08 February 2026 05:58:49 +0000 (0:00:00.248) 0:07:47.755 ******* 2026-02-08 05:58:50.880211 | orchestrator | skipping: [testbed-node-2] 2026-02-08 05:58:50.880220 | orchestrator | 2026-02-08 05:58:50.880230 | orchestrator | TASK [ceph-config : Create ceph conf directory] ******************************** 2026-02-08 05:58:50.880240 | orchestrator | Sunday 08 February 2026 05:58:49 +0000 (0:00:00.135) 
0:07:47.891 ******* 2026-02-08 05:58:50.880249 | orchestrator | skipping: [testbed-node-2] 2026-02-08 05:58:50.880259 | orchestrator | 2026-02-08 05:58:50.880289 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-02-08 05:58:50.880300 | orchestrator | Sunday 08 February 2026 05:58:49 +0000 (0:00:00.132) 0:07:48.023 ******* 2026-02-08 05:58:50.880310 | orchestrator | skipping: [testbed-node-2] 2026-02-08 05:58:50.880320 | orchestrator | 2026-02-08 05:58:50.880330 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-02-08 05:58:50.880339 | orchestrator | Sunday 08 February 2026 05:58:50 +0000 (0:00:00.151) 0:07:48.174 ******* 2026-02-08 05:58:50.880349 | orchestrator | skipping: [testbed-node-2] 2026-02-08 05:58:50.880358 | orchestrator | 2026-02-08 05:58:50.880368 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-02-08 05:58:50.880377 | orchestrator | Sunday 08 February 2026 05:58:50 +0000 (0:00:00.438) 0:07:48.613 ******* 2026-02-08 05:58:50.880387 | orchestrator | skipping: [testbed-node-2] 2026-02-08 05:58:50.880396 | orchestrator | 2026-02-08 05:58:50.880406 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-02-08 05:58:50.880416 | orchestrator | Sunday 08 February 2026 05:58:50 +0000 (0:00:00.142) 0:07:48.756 ******* 2026-02-08 05:58:50.880426 | orchestrator | skipping: [testbed-node-2] 2026-02-08 05:58:50.880442 | orchestrator | 2026-02-08 05:58:50.880473 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-02-08 05:59:18.435234 | orchestrator | Sunday 08 February 2026 05:58:50 +0000 (0:00:00.158) 0:07:48.914 ******* 2026-02-08 05:59:18.435316 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)  2026-02-08 05:59:18.435323 | orchestrator 
| skipping: [testbed-node-2] => (item=testbed-node-4)  2026-02-08 05:59:18.435328 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)  2026-02-08 05:59:18.435334 | orchestrator | skipping: [testbed-node-2] 2026-02-08 05:59:18.435339 | orchestrator | 2026-02-08 05:59:18.435346 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-02-08 05:59:18.435351 | orchestrator | Sunday 08 February 2026 05:58:51 +0000 (0:00:00.432) 0:07:49.347 ******* 2026-02-08 05:59:18.435355 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)  2026-02-08 05:59:18.435360 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)  2026-02-08 05:59:18.435365 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)  2026-02-08 05:59:18.435370 | orchestrator | skipping: [testbed-node-2] 2026-02-08 05:59:18.435374 | orchestrator | 2026-02-08 05:59:18.435379 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-02-08 05:59:18.435396 | orchestrator | Sunday 08 February 2026 05:58:51 +0000 (0:00:00.441) 0:07:49.788 ******* 2026-02-08 05:59:18.435401 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)  2026-02-08 05:59:18.435405 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)  2026-02-08 05:59:18.435410 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)  2026-02-08 05:59:18.435414 | orchestrator | skipping: [testbed-node-2] 2026-02-08 05:59:18.435419 | orchestrator | 2026-02-08 05:59:18.435424 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-02-08 05:59:18.435428 | orchestrator | Sunday 08 February 2026 05:58:52 +0000 (0:00:00.440) 0:07:50.229 ******* 2026-02-08 05:59:18.435433 | orchestrator | skipping: [testbed-node-2] 2026-02-08 05:59:18.435437 | orchestrator | 2026-02-08 05:59:18.435442 | orchestrator | TASK [ceph-facts : Set_fact 
rgw_instances] ************************************* 2026-02-08 05:59:18.435447 | orchestrator | Sunday 08 February 2026 05:58:52 +0000 (0:00:00.165) 0:07:50.395 ******* 2026-02-08 05:59:18.435452 | orchestrator | skipping: [testbed-node-2] => (item=0)  2026-02-08 05:59:18.435457 | orchestrator | skipping: [testbed-node-2] 2026-02-08 05:59:18.435461 | orchestrator | 2026-02-08 05:59:18.435466 | orchestrator | TASK [ceph-config : Generate Ceph file] **************************************** 2026-02-08 05:59:18.435471 | orchestrator | Sunday 08 February 2026 05:58:52 +0000 (0:00:00.342) 0:07:50.738 ******* 2026-02-08 05:59:18.435476 | orchestrator | changed: [testbed-node-2] 2026-02-08 05:59:18.435480 | orchestrator | 2026-02-08 05:59:18.435485 | orchestrator | TASK [ceph-mon : Set_fact container_exec_cmd] ********************************** 2026-02-08 05:59:18.435490 | orchestrator | Sunday 08 February 2026 05:58:53 +0000 (0:00:00.797) 0:07:51.535 ******* 2026-02-08 05:59:18.435494 | orchestrator | ok: [testbed-node-2] 2026-02-08 05:59:18.435499 | orchestrator | 2026-02-08 05:59:18.435504 | orchestrator | TASK [ceph-mon : Include deploy_monitors.yml] ********************************** 2026-02-08 05:59:18.435509 | orchestrator | Sunday 08 February 2026 05:58:53 +0000 (0:00:00.161) 0:07:51.697 ******* 2026-02-08 05:59:18.435513 | orchestrator | included: /ansible/roles/ceph-mon/tasks/deploy_monitors.yml for testbed-node-2 2026-02-08 05:59:18.435519 | orchestrator | 2026-02-08 05:59:18.435523 | orchestrator | TASK [ceph-mon : Check if monitor initial keyring already exists] ************** 2026-02-08 05:59:18.435528 | orchestrator | Sunday 08 February 2026 05:58:54 +0000 (0:00:00.535) 0:07:52.232 ******* 2026-02-08 05:59:18.435533 | orchestrator | ok: [testbed-node-2] 2026-02-08 05:59:18.435537 | orchestrator | 2026-02-08 05:59:18.435542 | orchestrator | TASK [ceph-mon : Generate monitor initial keyring] ***************************** 2026-02-08 05:59:18.435547 | 
orchestrator | Sunday 08 February 2026 05:58:56 +0000 (0:00:02.250) 0:07:54.483 ******* 2026-02-08 05:59:18.435567 | orchestrator | skipping: [testbed-node-2] 2026-02-08 05:59:18.435572 | orchestrator | 2026-02-08 05:59:18.435577 | orchestrator | TASK [ceph-mon : Set_fact _initial_mon_key_success] **************************** 2026-02-08 05:59:18.435581 | orchestrator | Sunday 08 February 2026 05:58:56 +0000 (0:00:00.181) 0:07:54.664 ******* 2026-02-08 05:59:18.435586 | orchestrator | ok: [testbed-node-2] 2026-02-08 05:59:18.435591 | orchestrator | 2026-02-08 05:59:18.435595 | orchestrator | TASK [ceph-mon : Get initial keyring when it already exists] ******************* 2026-02-08 05:59:18.435600 | orchestrator | Sunday 08 February 2026 05:58:56 +0000 (0:00:00.163) 0:07:54.828 ******* 2026-02-08 05:59:18.435604 | orchestrator | ok: [testbed-node-2] 2026-02-08 05:59:18.435655 | orchestrator | 2026-02-08 05:59:18.435660 | orchestrator | TASK [ceph-mon : Create monitor initial keyring] ******************************* 2026-02-08 05:59:18.435665 | orchestrator | Sunday 08 February 2026 05:58:56 +0000 (0:00:00.195) 0:07:55.023 ******* 2026-02-08 05:59:18.435670 | orchestrator | changed: [testbed-node-2] 2026-02-08 05:59:18.435675 | orchestrator | 2026-02-08 05:59:18.435679 | orchestrator | TASK [ceph-mon : Copy the initial key in /etc/ceph (for containers)] *********** 2026-02-08 05:59:18.435684 | orchestrator | Sunday 08 February 2026 05:58:58 +0000 (0:00:01.034) 0:07:56.058 ******* 2026-02-08 05:59:18.435688 | orchestrator | ok: [testbed-node-2] 2026-02-08 05:59:18.435693 | orchestrator | 2026-02-08 05:59:18.435698 | orchestrator | TASK [ceph-mon : Create monitor directory] ************************************* 2026-02-08 05:59:18.435702 | orchestrator | Sunday 08 February 2026 05:58:58 +0000 (0:00:00.633) 0:07:56.691 ******* 2026-02-08 05:59:18.435707 | orchestrator | ok: [testbed-node-2] 2026-02-08 05:59:18.435712 | orchestrator | 2026-02-08 05:59:18.435716 | 
orchestrator | TASK [ceph-mon : Recursively fix ownership of monitor directory] *************** 2026-02-08 05:59:18.435721 | orchestrator | Sunday 08 February 2026 05:58:59 +0000 (0:00:00.505) 0:07:57.196 ******* 2026-02-08 05:59:18.435725 | orchestrator | ok: [testbed-node-2] 2026-02-08 05:59:18.435730 | orchestrator | 2026-02-08 05:59:18.435735 | orchestrator | TASK [ceph-mon : Create admin keyring] ***************************************** 2026-02-08 05:59:18.435739 | orchestrator | Sunday 08 February 2026 05:58:59 +0000 (0:00:00.444) 0:07:57.641 ******* 2026-02-08 05:59:18.435744 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] 2026-02-08 05:59:18.435749 | orchestrator | 2026-02-08 05:59:18.435753 | orchestrator | TASK [ceph-mon : Slurp admin keyring] ****************************************** 2026-02-08 05:59:18.435769 | orchestrator | Sunday 08 February 2026 05:59:00 +0000 (0:00:00.560) 0:07:58.201 ******* 2026-02-08 05:59:18.435774 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] 2026-02-08 05:59:18.435778 | orchestrator | 2026-02-08 05:59:18.435783 | orchestrator | TASK [ceph-mon : Copy admin keyring over to mons] ****************************** 2026-02-08 05:59:18.435788 | orchestrator | Sunday 08 February 2026 05:59:00 +0000 (0:00:00.587) 0:07:58.789 ******* 2026-02-08 05:59:18.435792 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-08 05:59:18.435797 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-02-08 05:59:18.435802 | orchestrator | ok: [testbed-node-2] => (item=None) 2026-02-08 05:59:18.435808 | orchestrator | ok: [testbed-node-2 -> {{ item }}] 2026-02-08 05:59:18.435813 | orchestrator | 2026-02-08 05:59:18.435818 | orchestrator | TASK [ceph-mon : Import admin keyring into mon keyring] ************************ 2026-02-08 05:59:18.435824 | orchestrator | Sunday 08 February 2026 05:59:04 +0000 (0:00:03.280) 0:08:02.069 
******* 2026-02-08 05:59:18.435829 | orchestrator | changed: [testbed-node-2] 2026-02-08 05:59:18.435834 | orchestrator | 2026-02-08 05:59:18.435843 | orchestrator | TASK [ceph-mon : Set_fact ceph-mon container command] ************************** 2026-02-08 05:59:18.435849 | orchestrator | Sunday 08 February 2026 05:59:05 +0000 (0:00:01.102) 0:08:03.172 ******* 2026-02-08 05:59:18.435854 | orchestrator | ok: [testbed-node-2] 2026-02-08 05:59:18.435860 | orchestrator | 2026-02-08 05:59:18.435865 | orchestrator | TASK [ceph-mon : Set_fact monmaptool container command] ************************ 2026-02-08 05:59:18.435870 | orchestrator | Sunday 08 February 2026 05:59:05 +0000 (0:00:00.461) 0:08:03.633 ******* 2026-02-08 05:59:18.435880 | orchestrator | ok: [testbed-node-2] 2026-02-08 05:59:18.435886 | orchestrator | 2026-02-08 05:59:18.435892 | orchestrator | TASK [ceph-mon : Generate initial monmap] ************************************** 2026-02-08 05:59:18.435897 | orchestrator | Sunday 08 February 2026 05:59:05 +0000 (0:00:00.169) 0:08:03.802 ******* 2026-02-08 05:59:18.435902 | orchestrator | ok: [testbed-node-2] 2026-02-08 05:59:18.435908 | orchestrator | 2026-02-08 05:59:18.435913 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs with keyring] ******************************* 2026-02-08 05:59:18.435919 | orchestrator | Sunday 08 February 2026 05:59:06 +0000 (0:00:00.884) 0:08:04.687 ******* 2026-02-08 05:59:18.435924 | orchestrator | ok: [testbed-node-2] 2026-02-08 05:59:18.435930 | orchestrator | 2026-02-08 05:59:18.435935 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs without keyring] **************************** 2026-02-08 05:59:18.435941 | orchestrator | Sunday 08 February 2026 05:59:07 +0000 (0:00:00.553) 0:08:05.240 ******* 2026-02-08 05:59:18.435945 | orchestrator | skipping: [testbed-node-2] 2026-02-08 05:59:18.435950 | orchestrator | 2026-02-08 05:59:18.435955 | orchestrator | TASK [ceph-mon : Include start_monitor.yml] 
************************************ 2026-02-08 05:59:18.435959 | orchestrator | Sunday 08 February 2026 05:59:07 +0000 (0:00:00.141) 0:08:05.382 ******* 2026-02-08 05:59:18.435964 | orchestrator | included: /ansible/roles/ceph-mon/tasks/start_monitor.yml for testbed-node-2 2026-02-08 05:59:18.435968 | orchestrator | 2026-02-08 05:59:18.435973 | orchestrator | TASK [ceph-mon : Ensure systemd service override directory exists] ************* 2026-02-08 05:59:18.435978 | orchestrator | Sunday 08 February 2026 05:59:07 +0000 (0:00:00.263) 0:08:05.645 ******* 2026-02-08 05:59:18.435982 | orchestrator | skipping: [testbed-node-2] 2026-02-08 05:59:18.435987 | orchestrator | 2026-02-08 05:59:18.435992 | orchestrator | TASK [ceph-mon : Add ceph-mon systemd service overrides] *********************** 2026-02-08 05:59:18.435996 | orchestrator | Sunday 08 February 2026 05:59:07 +0000 (0:00:00.120) 0:08:05.766 ******* 2026-02-08 05:59:18.436001 | orchestrator | skipping: [testbed-node-2] 2026-02-08 05:59:18.436005 | orchestrator | 2026-02-08 05:59:18.436010 | orchestrator | TASK [ceph-mon : Include_tasks systemd.yml] ************************************ 2026-02-08 05:59:18.436015 | orchestrator | Sunday 08 February 2026 05:59:07 +0000 (0:00:00.155) 0:08:05.921 ******* 2026-02-08 05:59:18.436019 | orchestrator | included: /ansible/roles/ceph-mon/tasks/systemd.yml for testbed-node-2 2026-02-08 05:59:18.436024 | orchestrator | 2026-02-08 05:59:18.436029 | orchestrator | TASK [ceph-mon : Generate systemd unit file for mon container] ***************** 2026-02-08 05:59:18.436033 | orchestrator | Sunday 08 February 2026 05:59:08 +0000 (0:00:00.204) 0:08:06.125 ******* 2026-02-08 05:59:18.436038 | orchestrator | ok: [testbed-node-2] 2026-02-08 05:59:18.436042 | orchestrator | 2026-02-08 05:59:18.436047 | orchestrator | TASK [ceph-mon : Generate systemd ceph-mon target file] ************************ 2026-02-08 05:59:18.436052 | orchestrator | Sunday 08 February 2026 05:59:09 +0000 
(0:00:01.578) 0:08:07.703 ******* 2026-02-08 05:59:18.436056 | orchestrator | ok: [testbed-node-2] 2026-02-08 05:59:18.436061 | orchestrator | 2026-02-08 05:59:18.436066 | orchestrator | TASK [ceph-mon : Enable ceph-mon.target] *************************************** 2026-02-08 05:59:18.436070 | orchestrator | Sunday 08 February 2026 05:59:10 +0000 (0:00:00.987) 0:08:08.691 ******* 2026-02-08 05:59:18.436075 | orchestrator | ok: [testbed-node-2] 2026-02-08 05:59:18.436079 | orchestrator | 2026-02-08 05:59:18.436084 | orchestrator | TASK [ceph-mon : Start the monitor service] ************************************ 2026-02-08 05:59:18.436089 | orchestrator | Sunday 08 February 2026 05:59:12 +0000 (0:00:01.427) 0:08:10.118 ******* 2026-02-08 05:59:18.436093 | orchestrator | changed: [testbed-node-2] 2026-02-08 05:59:18.436098 | orchestrator | 2026-02-08 05:59:18.436102 | orchestrator | TASK [ceph-mon : Include_tasks ceph_keys.yml] ********************************** 2026-02-08 05:59:18.436107 | orchestrator | Sunday 08 February 2026 05:59:15 +0000 (0:00:02.946) 0:08:13.065 ******* 2026-02-08 05:59:18.436112 | orchestrator | included: /ansible/roles/ceph-mon/tasks/ceph_keys.yml for testbed-node-2 2026-02-08 05:59:18.436116 | orchestrator | 2026-02-08 05:59:18.436121 | orchestrator | TASK [ceph-mon : Waiting for the monitor(s) to form the quorum...] 
************* 2026-02-08 05:59:18.436129 | orchestrator | Sunday 08 February 2026 05:59:15 +0000 (0:00:00.246) 0:08:13.311 ******* 2026-02-08 05:59:18.436133 | orchestrator | ok: [testbed-node-2] 2026-02-08 05:59:18.436138 | orchestrator | 2026-02-08 05:59:18.436143 | orchestrator | TASK [ceph-mon : Fetch ceph initial keys] ************************************** 2026-02-08 05:59:18.436147 | orchestrator | Sunday 08 February 2026 05:59:16 +0000 (0:00:01.240) 0:08:14.552 ******* 2026-02-08 05:59:18.436152 | orchestrator | ok: [testbed-node-2] 2026-02-08 05:59:18.436156 | orchestrator | 2026-02-08 05:59:18.436165 | orchestrator | TASK [ceph-mon : Include secure_cluster.yml] *********************************** 2026-02-08 05:59:37.578309 | orchestrator | Sunday 08 February 2026 05:59:18 +0000 (0:00:01.919) 0:08:16.471 ******* 2026-02-08 05:59:37.578419 | orchestrator | skipping: [testbed-node-2] 2026-02-08 05:59:37.578436 | orchestrator | 2026-02-08 05:59:37.578450 | orchestrator | TASK [ceph-mon : Set cluster configs] ****************************************** 2026-02-08 05:59:37.578462 | orchestrator | Sunday 08 February 2026 05:59:18 +0000 (0:00:00.141) 0:08:16.613 ******* 2026-02-08 05:59:37.578475 | orchestrator | ok: [testbed-node-2] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__54010be108205bd9450ab34ee8857f1f35bfaf4e'}}, {'key': 'public_network', 'value': '192.168.16.0/20'}]) 2026-02-08 05:59:37.578507 | orchestrator | ok: [testbed-node-2] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__54010be108205bd9450ab34ee8857f1f35bfaf4e'}}, {'key': 'cluster_network', 'value': 
'192.168.16.0/20'}]) 2026-02-08 05:59:37.578519 | orchestrator | ok: [testbed-node-2] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__54010be108205bd9450ab34ee8857f1f35bfaf4e'}}, {'key': 'osd_pool_default_crush_rule', 'value': -1}]) 2026-02-08 05:59:37.578531 | orchestrator | ok: [testbed-node-2] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__54010be108205bd9450ab34ee8857f1f35bfaf4e'}}, {'key': 'ms_bind_ipv6', 'value': 'False'}]) 2026-02-08 05:59:37.578544 | orchestrator | ok: [testbed-node-2] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__54010be108205bd9450ab34ee8857f1f35bfaf4e'}}, {'key': 'ms_bind_ipv4', 'value': 'True'}]) 2026-02-08 05:59:37.578556 | orchestrator | skipping: [testbed-node-2] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__54010be108205bd9450ab34ee8857f1f35bfaf4e'}}, {'key': 'osd_crush_chooseleaf_type', 'value': '__omit_place_holder__54010be108205bd9450ab34ee8857f1f35bfaf4e'}])  2026-02-08 05:59:37.578569 | orchestrator | 2026-02-08 05:59:37.578580 | orchestrator | TASK [Start ceph mgr] ********************************************************** 2026-02-08 05:59:37.578591 | orchestrator | Sunday 08 February 2026 05:59:27 +0000 (0:00:08.807) 0:08:25.420 ******* 
2026-02-08 05:59:37.578602 | orchestrator | changed: [testbed-node-2] 2026-02-08 05:59:37.578647 | orchestrator | 2026-02-08 05:59:37.578661 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-02-08 05:59:37.578672 | orchestrator | Sunday 08 February 2026 05:59:28 +0000 (0:00:01.528) 0:08:26.948 ******* 2026-02-08 05:59:37.578708 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-08 05:59:37.578720 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-1) 2026-02-08 05:59:37.578731 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-2) 2026-02-08 05:59:37.578742 | orchestrator | 2026-02-08 05:59:37.578753 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-02-08 05:59:37.578764 | orchestrator | Sunday 08 February 2026 05:59:30 +0000 (0:00:01.262) 0:08:28.211 ******* 2026-02-08 05:59:37.578775 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2026-02-08 05:59:37.578787 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2026-02-08 05:59:37.578798 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2026-02-08 05:59:37.578809 | orchestrator | skipping: [testbed-node-2] 2026-02-08 05:59:37.578820 | orchestrator | 2026-02-08 05:59:37.578831 | orchestrator | TASK [Non container | waiting for the monitor to join the quorum...] *********** 2026-02-08 05:59:37.578842 | orchestrator | Sunday 08 February 2026 05:59:30 +0000 (0:00:00.503) 0:08:28.715 ******* 2026-02-08 05:59:37.578854 | orchestrator | skipping: [testbed-node-2] 2026-02-08 05:59:37.578868 | orchestrator | 2026-02-08 05:59:37.578880 | orchestrator | TASK [Container | waiting for the containerized monitor to join the quorum...] 
*** 2026-02-08 05:59:37.578893 | orchestrator | Sunday 08 February 2026 05:59:30 +0000 (0:00:00.144) 0:08:28.860 ******* 2026-02-08 05:59:37.578906 | orchestrator | ok: [testbed-node-2] 2026-02-08 05:59:37.578920 | orchestrator | 2026-02-08 05:59:37.578948 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-02-08 05:59:37.578961 | orchestrator | Sunday 08 February 2026 05:59:32 +0000 (0:00:01.981) 0:08:30.841 ******* 2026-02-08 05:59:37.578974 | orchestrator | skipping: [testbed-node-2] 2026-02-08 05:59:37.578986 | orchestrator | 2026-02-08 05:59:37.578999 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] ********************************** 2026-02-08 05:59:37.579012 | orchestrator | Sunday 08 February 2026 05:59:32 +0000 (0:00:00.151) 0:08:30.993 ******* 2026-02-08 05:59:37.579024 | orchestrator | skipping: [testbed-node-2] 2026-02-08 05:59:37.579037 | orchestrator | 2026-02-08 05:59:37.579050 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] ********************************** 2026-02-08 05:59:37.579062 | orchestrator | Sunday 08 February 2026 05:59:33 +0000 (0:00:00.135) 0:08:31.128 ******* 2026-02-08 05:59:37.579074 | orchestrator | skipping: [testbed-node-2] 2026-02-08 05:59:37.579088 | orchestrator | 2026-02-08 05:59:37.579100 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] ********************************** 2026-02-08 05:59:37.579113 | orchestrator | Sunday 08 February 2026 05:59:33 +0000 (0:00:00.129) 0:08:31.258 ******* 2026-02-08 05:59:37.579126 | orchestrator | skipping: [testbed-node-2] 2026-02-08 05:59:37.579139 | orchestrator | 2026-02-08 05:59:37.579159 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] ********************************** 2026-02-08 05:59:37.579172 | orchestrator | Sunday 08 February 2026 05:59:33 +0000 (0:00:00.135) 0:08:31.393 ******* 2026-02-08 05:59:37.579185 | orchestrator | skipping: [testbed-node-2] 2026-02-08 05:59:37.579198 | 
orchestrator | 2026-02-08 05:59:37.579211 | orchestrator | RUNNING HANDLER [ceph-handler : Rbdmirrors handler] **************************** 2026-02-08 05:59:37.579222 | orchestrator | Sunday 08 February 2026 05:59:33 +0000 (0:00:00.156) 0:08:31.549 ******* 2026-02-08 05:59:37.579233 | orchestrator | skipping: [testbed-node-2] 2026-02-08 05:59:37.579244 | orchestrator | 2026-02-08 05:59:37.579255 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] ********************************** 2026-02-08 05:59:37.579266 | orchestrator | Sunday 08 February 2026 05:59:33 +0000 (0:00:00.126) 0:08:31.675 ******* 2026-02-08 05:59:37.579277 | orchestrator | skipping: [testbed-node-2] 2026-02-08 05:59:37.579288 | orchestrator | 2026-02-08 05:59:37.579299 | orchestrator | PLAY [Reset mon_host] ********************************************************** 2026-02-08 05:59:37.579309 | orchestrator | 2026-02-08 05:59:37.579320 | orchestrator | TASK [Reset mon_host fact] ***************************************************** 2026-02-08 05:59:37.579341 | orchestrator | Sunday 08 February 2026 05:59:34 +0000 (0:00:00.755) 0:08:32.431 ******* 2026-02-08 05:59:37.579352 | orchestrator | ok: [testbed-node-0] 2026-02-08 05:59:37.579363 | orchestrator | ok: [testbed-node-1] 2026-02-08 05:59:37.579374 | orchestrator | ok: [testbed-node-2] 2026-02-08 05:59:37.579385 | orchestrator | 2026-02-08 05:59:37.579396 | orchestrator | PLAY [Upgrade ceph mgr nodes when implicitly collocated on monitors] *********** 2026-02-08 05:59:37.579407 | orchestrator | 2026-02-08 05:59:37.579418 | orchestrator | TASK [Stop ceph mgr] *********************************************************** 2026-02-08 05:59:37.579430 | orchestrator | Sunday 08 February 2026 05:59:35 +0000 (0:00:01.067) 0:08:33.498 ******* 2026-02-08 05:59:37.579441 | orchestrator | skipping: [testbed-node-0] 2026-02-08 05:59:37.579452 | orchestrator | 2026-02-08 05:59:37.579463 | orchestrator | TASK [ceph-facts : Include facts.yml] 
****************************************** 2026-02-08 05:59:37.579474 | orchestrator | Sunday 08 February 2026 05:59:35 +0000 (0:00:00.265) 0:08:33.764 ******* 2026-02-08 05:59:37.579484 | orchestrator | skipping: [testbed-node-0] 2026-02-08 05:59:37.579495 | orchestrator | 2026-02-08 05:59:37.579506 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-02-08 05:59:37.579517 | orchestrator | Sunday 08 February 2026 05:59:35 +0000 (0:00:00.215) 0:08:33.979 ******* 2026-02-08 05:59:37.579528 | orchestrator | skipping: [testbed-node-0] 2026-02-08 05:59:37.579539 | orchestrator | 2026-02-08 05:59:37.579550 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-02-08 05:59:37.579561 | orchestrator | Sunday 08 February 2026 05:59:36 +0000 (0:00:00.135) 0:08:34.115 ******* 2026-02-08 05:59:37.579572 | orchestrator | skipping: [testbed-node-0] 2026-02-08 05:59:37.579583 | orchestrator | 2026-02-08 05:59:37.579594 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-02-08 05:59:37.579605 | orchestrator | Sunday 08 February 2026 05:59:36 +0000 (0:00:00.135) 0:08:34.250 ******* 2026-02-08 05:59:37.579664 | orchestrator | skipping: [testbed-node-0] 2026-02-08 05:59:37.579677 | orchestrator | 2026-02-08 05:59:37.579688 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-02-08 05:59:37.579699 | orchestrator | Sunday 08 February 2026 05:59:36 +0000 (0:00:00.196) 0:08:34.447 ******* 2026-02-08 05:59:37.579709 | orchestrator | skipping: [testbed-node-0] 2026-02-08 05:59:37.579721 | orchestrator | 2026-02-08 05:59:37.579731 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-02-08 05:59:37.579742 | orchestrator | Sunday 08 February 2026 05:59:36 +0000 (0:00:00.143) 0:08:34.591 ******* 2026-02-08 05:59:37.579753 | orchestrator | skipping: 
[testbed-node-0] 2026-02-08 05:59:37.579764 | orchestrator | 2026-02-08 05:59:37.579775 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-02-08 05:59:37.579786 | orchestrator | Sunday 08 February 2026 05:59:36 +0000 (0:00:00.126) 0:08:34.718 ******* 2026-02-08 05:59:37.579797 | orchestrator | skipping: [testbed-node-0] 2026-02-08 05:59:37.579807 | orchestrator | 2026-02-08 05:59:37.579818 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-02-08 05:59:37.579829 | orchestrator | Sunday 08 February 2026 05:59:36 +0000 (0:00:00.154) 0:08:34.872 ******* 2026-02-08 05:59:37.579840 | orchestrator | skipping: [testbed-node-0] 2026-02-08 05:59:37.579851 | orchestrator | 2026-02-08 05:59:37.579862 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-02-08 05:59:37.579873 | orchestrator | Sunday 08 February 2026 05:59:36 +0000 (0:00:00.135) 0:08:35.008 ******* 2026-02-08 05:59:37.579884 | orchestrator | skipping: [testbed-node-0] 2026-02-08 05:59:37.579895 | orchestrator | 2026-02-08 05:59:37.579906 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-02-08 05:59:37.579917 | orchestrator | Sunday 08 February 2026 05:59:37 +0000 (0:00:00.446) 0:08:35.454 ******* 2026-02-08 05:59:37.579928 | orchestrator | skipping: [testbed-node-0] 2026-02-08 05:59:37.579938 | orchestrator | 2026-02-08 05:59:37.579950 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-02-08 05:59:37.579976 | orchestrator | Sunday 08 February 2026 05:59:37 +0000 (0:00:00.158) 0:08:35.612 ******* 2026-02-08 05:59:44.791454 | orchestrator | skipping: [testbed-node-0] 2026-02-08 05:59:44.791562 | orchestrator | 2026-02-08 05:59:44.791581 | orchestrator | TASK [ceph-common : Include configure_repository.yml] ************************** 2026-02-08 05:59:44.791594 | 
orchestrator | Sunday 08 February 2026 05:59:37 +0000 (0:00:00.208) 0:08:35.821 ******* 2026-02-08 05:59:44.791605 | orchestrator | skipping: [testbed-node-0] 2026-02-08 05:59:44.791664 | orchestrator | 2026-02-08 05:59:44.791679 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] ************** 2026-02-08 05:59:44.791690 | orchestrator | Sunday 08 February 2026 05:59:37 +0000 (0:00:00.148) 0:08:35.969 ******* 2026-02-08 05:59:44.791701 | orchestrator | skipping: [testbed-node-0] 2026-02-08 05:59:44.791712 | orchestrator | 2026-02-08 05:59:44.791723 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] **************** 2026-02-08 05:59:44.791734 | orchestrator | Sunday 08 February 2026 05:59:38 +0000 (0:00:00.158) 0:08:36.128 ******* 2026-02-08 05:59:44.791745 | orchestrator | skipping: [testbed-node-0] 2026-02-08 05:59:44.791756 | orchestrator | 2026-02-08 05:59:44.791767 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ******************** 2026-02-08 05:59:44.791795 | orchestrator | Sunday 08 February 2026 05:59:38 +0000 (0:00:00.138) 0:08:36.266 ******* 2026-02-08 05:59:44.791806 | orchestrator | skipping: [testbed-node-0] 2026-02-08 05:59:44.791817 | orchestrator | 2026-02-08 05:59:44.791828 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] *************** 2026-02-08 05:59:44.791839 | orchestrator | Sunday 08 February 2026 05:59:38 +0000 (0:00:00.134) 0:08:36.401 ******* 2026-02-08 05:59:44.791850 | orchestrator | skipping: [testbed-node-0] 2026-02-08 05:59:44.791861 | orchestrator | 2026-02-08 05:59:44.791872 | orchestrator | TASK [ceph-common : Get ceph version] ****************************************** 2026-02-08 05:59:44.791883 | orchestrator | Sunday 08 February 2026 05:59:38 +0000 (0:00:00.145) 0:08:36.546 ******* 2026-02-08 05:59:44.791894 | orchestrator | skipping: [testbed-node-0] 2026-02-08 05:59:44.791905 | orchestrator | 2026-02-08 
05:59:44.791916 | orchestrator | TASK [ceph-common : Set_fact ceph_version] ************************************* 2026-02-08 05:59:44.791926 | orchestrator | Sunday 08 February 2026 05:59:38 +0000 (0:00:00.146) 0:08:36.692 ******* 2026-02-08 05:59:44.791937 | orchestrator | skipping: [testbed-node-0] 2026-02-08 05:59:44.791948 | orchestrator | 2026-02-08 05:59:44.791959 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] *** 2026-02-08 05:59:44.791971 | orchestrator | Sunday 08 February 2026 05:59:38 +0000 (0:00:00.142) 0:08:36.835 ******* 2026-02-08 05:59:44.791982 | orchestrator | skipping: [testbed-node-0] 2026-02-08 05:59:44.791995 | orchestrator | 2026-02-08 05:59:44.792007 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] ************************* 2026-02-08 05:59:44.792019 | orchestrator | Sunday 08 February 2026 05:59:38 +0000 (0:00:00.132) 0:08:36.968 ******* 2026-02-08 05:59:44.792032 | orchestrator | skipping: [testbed-node-0] 2026-02-08 05:59:44.792045 | orchestrator | 2026-02-08 05:59:44.792057 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************ 2026-02-08 05:59:44.792070 | orchestrator | Sunday 08 February 2026 05:59:39 +0000 (0:00:00.188) 0:08:37.156 ******* 2026-02-08 05:59:44.792082 | orchestrator | skipping: [testbed-node-0] 2026-02-08 05:59:44.792095 | orchestrator | 2026-02-08 05:59:44.792107 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ******************** 2026-02-08 05:59:44.792119 | orchestrator | Sunday 08 February 2026 05:59:39 +0000 (0:00:00.457) 0:08:37.613 ******* 2026-02-08 05:59:44.792132 | orchestrator | skipping: [testbed-node-0] 2026-02-08 05:59:44.792144 | orchestrator | 2026-02-08 05:59:44.792157 | orchestrator | TASK [ceph-common : Include selinux.yml] *************************************** 2026-02-08 05:59:44.792167 | orchestrator | Sunday 08 February 2026 05:59:39 +0000 
(0:00:00.133) 0:08:37.747 ******* 2026-02-08 05:59:44.792178 | orchestrator | skipping: [testbed-node-0] 2026-02-08 05:59:44.792189 | orchestrator | 2026-02-08 05:59:44.792200 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] *************** 2026-02-08 05:59:44.792235 | orchestrator | Sunday 08 February 2026 05:59:39 +0000 (0:00:00.245) 0:08:37.992 ******* 2026-02-08 05:59:44.792247 | orchestrator | skipping: [testbed-node-0] 2026-02-08 05:59:44.792258 | orchestrator | 2026-02-08 05:59:44.792268 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ****************************** 2026-02-08 05:59:44.792279 | orchestrator | Sunday 08 February 2026 05:59:40 +0000 (0:00:00.157) 0:08:38.150 ******* 2026-02-08 05:59:44.792290 | orchestrator | skipping: [testbed-node-0] 2026-02-08 05:59:44.792300 | orchestrator | 2026-02-08 05:59:44.792311 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] *********************** 2026-02-08 05:59:44.792322 | orchestrator | Sunday 08 February 2026 05:59:40 +0000 (0:00:00.156) 0:08:38.306 ******* 2026-02-08 05:59:44.792333 | orchestrator | skipping: [testbed-node-0] 2026-02-08 05:59:44.792344 | orchestrator | 2026-02-08 05:59:44.792354 | orchestrator | TASK [ceph-container-common : Include registry.yml] **************************** 2026-02-08 05:59:44.792365 | orchestrator | Sunday 08 February 2026 05:59:40 +0000 (0:00:00.146) 0:08:38.453 ******* 2026-02-08 05:59:44.792376 | orchestrator | skipping: [testbed-node-0] 2026-02-08 05:59:44.792387 | orchestrator | 2026-02-08 05:59:44.792397 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] ************************* 2026-02-08 05:59:44.792408 | orchestrator | Sunday 08 February 2026 05:59:40 +0000 (0:00:00.140) 0:08:38.594 ******* 2026-02-08 05:59:44.792419 | orchestrator | skipping: [testbed-node-0] 2026-02-08 05:59:44.792430 | orchestrator | 2026-02-08 05:59:44.792440 | orchestrator | TASK 
[ceph-container-common : Get ceph version] ******************************** 2026-02-08 05:59:44.792451 | orchestrator | Sunday 08 February 2026 05:59:40 +0000 (0:00:00.137) 0:08:38.731 ******* 2026-02-08 05:59:44.792462 | orchestrator | skipping: [testbed-node-0] 2026-02-08 05:59:44.792473 | orchestrator | 2026-02-08 05:59:44.792484 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] *** 2026-02-08 05:59:44.792494 | orchestrator | Sunday 08 February 2026 05:59:40 +0000 (0:00:00.139) 0:08:38.871 ******* 2026-02-08 05:59:44.792505 | orchestrator | skipping: [testbed-node-0] 2026-02-08 05:59:44.792516 | orchestrator | 2026-02-08 05:59:44.792527 | orchestrator | TASK [ceph-container-common : Include release.yml] ***************************** 2026-02-08 05:59:44.792537 | orchestrator | Sunday 08 February 2026 05:59:40 +0000 (0:00:00.135) 0:08:39.006 ******* 2026-02-08 05:59:44.792548 | orchestrator | skipping: [testbed-node-0] 2026-02-08 05:59:44.792559 | orchestrator | 2026-02-08 05:59:44.792588 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] ********************** 2026-02-08 05:59:44.792600 | orchestrator | Sunday 08 February 2026 05:59:41 +0000 (0:00:00.221) 0:08:39.228 ******* 2026-02-08 05:59:44.792611 | orchestrator | skipping: [testbed-node-0] 2026-02-08 05:59:44.792654 | orchestrator | 2026-02-08 05:59:44.792673 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************ 2026-02-08 05:59:44.792692 | orchestrator | Sunday 08 February 2026 05:59:41 +0000 (0:00:00.144) 0:08:39.372 ******* 2026-02-08 05:59:44.792710 | orchestrator | skipping: [testbed-node-0] 2026-02-08 05:59:44.792726 | orchestrator | 2026-02-08 05:59:44.792737 | orchestrator | TASK [ceph-config : Reset num_osds] ******************************************** 2026-02-08 05:59:44.792748 | orchestrator | Sunday 08 February 2026 05:59:41 +0000 (0:00:00.467) 0:08:39.840 ******* 2026-02-08 
05:59:44.792759 | orchestrator | skipping: [testbed-node-0] 2026-02-08 05:59:44.792770 | orchestrator | 2026-02-08 05:59:44.792781 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] ********************* 2026-02-08 05:59:44.792798 | orchestrator | Sunday 08 February 2026 05:59:41 +0000 (0:00:00.135) 0:08:39.976 ******* 2026-02-08 05:59:44.792809 | orchestrator | skipping: [testbed-node-0] 2026-02-08 05:59:44.792820 | orchestrator | 2026-02-08 05:59:44.792831 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ****************** 2026-02-08 05:59:44.792842 | orchestrator | Sunday 08 February 2026 05:59:42 +0000 (0:00:00.155) 0:08:40.131 ******* 2026-02-08 05:59:44.792853 | orchestrator | skipping: [testbed-node-0] 2026-02-08 05:59:44.792869 | orchestrator | 2026-02-08 05:59:44.792886 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] ********************************* 2026-02-08 05:59:44.792916 | orchestrator | Sunday 08 February 2026 05:59:42 +0000 (0:00:00.149) 0:08:40.281 ******* 2026-02-08 05:59:44.792934 | orchestrator | skipping: [testbed-node-0] 2026-02-08 05:59:44.792952 | orchestrator | 2026-02-08 05:59:44.792971 | orchestrator | TASK [ceph-config : Set_fact _devices] ***************************************** 2026-02-08 05:59:44.792988 | orchestrator | Sunday 08 February 2026 05:59:42 +0000 (0:00:00.163) 0:08:40.444 ******* 2026-02-08 05:59:44.792999 | orchestrator | skipping: [testbed-node-0] 2026-02-08 05:59:44.793010 | orchestrator | 2026-02-08 05:59:44.793021 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2026-02-08 05:59:44.793033 | orchestrator | Sunday 08 February 2026 05:59:42 +0000 (0:00:00.142) 0:08:40.587 ******* 2026-02-08 05:59:44.793044 | orchestrator | skipping: [testbed-node-0] 2026-02-08 05:59:44.793054 | orchestrator | 2026-02-08 05:59:44.793065 | orchestrator | TASK [ceph-config : Set_fact num_osds from the 
output of 'ceph-volume lvm batch --report' (legacy report)] *** 2026-02-08 05:59:44.793076 | orchestrator | Sunday 08 February 2026 05:59:42 +0000 (0:00:00.150) 0:08:40.738 ******* 2026-02-08 05:59:44.793087 | orchestrator | skipping: [testbed-node-0] 2026-02-08 05:59:44.793098 | orchestrator | 2026-02-08 05:59:44.793108 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2026-02-08 05:59:44.793119 | orchestrator | Sunday 08 February 2026 05:59:42 +0000 (0:00:00.144) 0:08:40.882 ******* 2026-02-08 05:59:44.793130 | orchestrator | skipping: [testbed-node-0] 2026-02-08 05:59:44.793141 | orchestrator | 2026-02-08 05:59:44.793152 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] *** 2026-02-08 05:59:44.793163 | orchestrator | Sunday 08 February 2026 05:59:42 +0000 (0:00:00.140) 0:08:41.023 ******* 2026-02-08 05:59:44.793173 | orchestrator | skipping: [testbed-node-0] 2026-02-08 05:59:44.793184 | orchestrator | 2026-02-08 05:59:44.793195 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] ********************* 2026-02-08 05:59:44.793205 | orchestrator | Sunday 08 February 2026 05:59:43 +0000 (0:00:00.145) 0:08:41.169 ******* 2026-02-08 05:59:44.793216 | orchestrator | skipping: [testbed-node-0] 2026-02-08 05:59:44.793227 | orchestrator | 2026-02-08 05:59:44.793238 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] ******************************* 2026-02-08 05:59:44.793249 | orchestrator | Sunday 08 February 2026 05:59:43 +0000 (0:00:00.145) 0:08:41.314 ******* 2026-02-08 05:59:44.793259 | orchestrator | skipping: [testbed-node-0] 2026-02-08 05:59:44.793270 | orchestrator | 2026-02-08 05:59:44.793281 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] ************** 2026-02-08 05:59:44.793292 | orchestrator | Sunday 08 February 2026 05:59:43 +0000 (0:00:00.145) 
0:08:41.459 ******* 2026-02-08 05:59:44.793302 | orchestrator | skipping: [testbed-node-0] 2026-02-08 05:59:44.793313 | orchestrator | 2026-02-08 05:59:44.793324 | orchestrator | TASK [ceph-config : Render rgw configs] **************************************** 2026-02-08 05:59:44.793335 | orchestrator | Sunday 08 February 2026 05:59:43 +0000 (0:00:00.242) 0:08:41.702 ******* 2026-02-08 05:59:44.793345 | orchestrator | skipping: [testbed-node-0] 2026-02-08 05:59:44.793356 | orchestrator | 2026-02-08 05:59:44.793367 | orchestrator | TASK [ceph-config : Set config to cluster] ************************************* 2026-02-08 05:59:44.793378 | orchestrator | Sunday 08 February 2026 05:59:44 +0000 (0:00:00.458) 0:08:42.160 ******* 2026-02-08 05:59:44.793389 | orchestrator | skipping: [testbed-node-0] 2026-02-08 05:59:44.793399 | orchestrator | 2026-02-08 05:59:44.793410 | orchestrator | TASK [ceph-config : Set rgw configs to file] *********************************** 2026-02-08 05:59:44.793421 | orchestrator | Sunday 08 February 2026 05:59:44 +0000 (0:00:00.256) 0:08:42.417 ******* 2026-02-08 05:59:44.793431 | orchestrator | skipping: [testbed-node-0] 2026-02-08 05:59:44.793442 | orchestrator | 2026-02-08 05:59:44.793453 | orchestrator | TASK [ceph-config : Create ceph conf directory] ******************************** 2026-02-08 05:59:44.793463 | orchestrator | Sunday 08 February 2026 05:59:44 +0000 (0:00:00.135) 0:08:42.553 ******* 2026-02-08 05:59:44.793474 | orchestrator | skipping: [testbed-node-0] 2026-02-08 05:59:44.793495 | orchestrator | 2026-02-08 05:59:44.793515 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-02-08 05:59:44.793535 | orchestrator | Sunday 08 February 2026 05:59:44 +0000 (0:00:00.136) 0:08:42.689 ******* 2026-02-08 05:59:44.793552 | orchestrator | skipping: [testbed-node-0] 2026-02-08 05:59:44.793570 | orchestrator | 2026-02-08 
05:59:44.793589 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-02-08 05:59:44.793639 | orchestrator | Sunday 08 February 2026 05:59:44 +0000 (0:00:00.145) 0:08:42.834 ******* 2026-02-08 05:59:53.067046 | orchestrator | skipping: [testbed-node-0] 2026-02-08 05:59:53.067160 | orchestrator | 2026-02-08 05:59:53.067178 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-02-08 05:59:53.067191 | orchestrator | Sunday 08 February 2026 05:59:44 +0000 (0:00:00.149) 0:08:42.984 ******* 2026-02-08 05:59:53.067203 | orchestrator | skipping: [testbed-node-0] 2026-02-08 05:59:53.067214 | orchestrator | 2026-02-08 05:59:53.067226 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-02-08 05:59:53.067237 | orchestrator | Sunday 08 February 2026 05:59:45 +0000 (0:00:00.160) 0:08:43.145 ******* 2026-02-08 05:59:53.067248 | orchestrator | skipping: [testbed-node-0] 2026-02-08 05:59:53.067259 | orchestrator | 2026-02-08 05:59:53.067270 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-02-08 05:59:53.067355 | orchestrator | Sunday 08 February 2026 05:59:45 +0000 (0:00:00.150) 0:08:43.296 ******* 2026-02-08 05:59:53.067404 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2026-02-08 05:59:53.067434 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2026-02-08 05:59:53.067446 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2026-02-08 05:59:53.067457 | orchestrator | skipping: [testbed-node-0] 2026-02-08 05:59:53.067468 | orchestrator | 2026-02-08 05:59:53.067479 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-02-08 05:59:53.067490 | orchestrator | Sunday 08 February 2026 05:59:45 +0000 (0:00:00.403) 0:08:43.699 ******* 2026-02-08 05:59:53.067501 | orchestrator | 
skipping: [testbed-node-0] => (item=testbed-node-3)
2026-02-08 05:59:53.067512 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2026-02-08 05:59:53.067523 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2026-02-08 05:59:53.067535 | orchestrator | skipping: [testbed-node-0]
2026-02-08 05:59:53.067548 | orchestrator |
2026-02-08 05:59:53.067562 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-02-08 05:59:53.067575 | orchestrator | Sunday 08 February 2026 05:59:46 +0000 (0:00:00.408) 0:08:44.107 *******
2026-02-08 05:59:53.067588 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2026-02-08 05:59:53.067601 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2026-02-08 05:59:53.067615 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2026-02-08 05:59:53.067699 | orchestrator | skipping: [testbed-node-0]
2026-02-08 05:59:53.067713 | orchestrator |
2026-02-08 05:59:53.067726 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-02-08 05:59:53.067741 | orchestrator | Sunday 08 February 2026 05:59:46 +0000 (0:00:00.413) 0:08:44.521 *******
2026-02-08 05:59:53.067758 | orchestrator | skipping: [testbed-node-0]
2026-02-08 05:59:53.067777 | orchestrator |
2026-02-08 05:59:53.067807 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-02-08 05:59:53.067827 | orchestrator | Sunday 08 February 2026 05:59:46 +0000 (0:00:00.136) 0:08:44.657 *******
2026-02-08 05:59:53.067846 | orchestrator | skipping: [testbed-node-0] => (item=0)
2026-02-08 05:59:53.067864 | orchestrator | skipping: [testbed-node-0]
2026-02-08 05:59:53.067883 | orchestrator |
2026-02-08 05:59:53.067900 | orchestrator | TASK [ceph-config : Generate Ceph file] ****************************************
2026-02-08 05:59:53.067918 | orchestrator | Sunday 08 February 2026 05:59:47 +0000 (0:00:00.637) 0:08:45.295 *******
2026-02-08 05:59:53.067966 | orchestrator | skipping: [testbed-node-0]
2026-02-08 05:59:53.067987 | orchestrator |
2026-02-08 05:59:53.068005 | orchestrator | TASK [ceph-mgr : Set_fact container_exec_cmd] **********************************
2026-02-08 05:59:53.068026 | orchestrator | Sunday 08 February 2026 05:59:47 +0000 (0:00:00.205) 0:08:45.501 *******
2026-02-08 05:59:53.068045 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-02-08 05:59:53.068064 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-02-08 05:59:53.068083 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-02-08 05:59:53.068101 | orchestrator | skipping: [testbed-node-0]
2026-02-08 05:59:53.068119 | orchestrator |
2026-02-08 05:59:53.068137 | orchestrator | TASK [ceph-mgr : Include common.yml] *******************************************
2026-02-08 05:59:53.068157 | orchestrator | Sunday 08 February 2026 05:59:47 +0000 (0:00:00.416) 0:08:45.917 *******
2026-02-08 05:59:53.068175 | orchestrator | skipping: [testbed-node-0]
2026-02-08 05:59:53.068194 | orchestrator |
2026-02-08 05:59:53.068211 | orchestrator | TASK [ceph-mgr : Include pre_requisite.yml] ************************************
2026-02-08 05:59:53.068222 | orchestrator | Sunday 08 February 2026 05:59:48 +0000 (0:00:00.139) 0:08:46.057 *******
2026-02-08 05:59:53.068233 | orchestrator | skipping: [testbed-node-0]
2026-02-08 05:59:53.068244 | orchestrator |
2026-02-08 05:59:53.068255 | orchestrator | TASK [ceph-mgr : Include start_mgr.yml] ****************************************
2026-02-08 05:59:53.068266 | orchestrator | Sunday 08 February 2026 05:59:48 +0000 (0:00:00.146) 0:08:46.204 *******
2026-02-08 05:59:53.068276 | orchestrator | skipping: [testbed-node-0]
2026-02-08 05:59:53.068287 | orchestrator |
2026-02-08 05:59:53.068298 | orchestrator | TASK [ceph-mgr : Include mgr_modules.yml] **************************************
2026-02-08 05:59:53.068309 | orchestrator | Sunday 08 February 2026 05:59:48 +0000 (0:00:00.146) 0:08:46.350 *******
2026-02-08 05:59:53.068320 | orchestrator | skipping: [testbed-node-0]
2026-02-08 05:59:53.068330 | orchestrator |
2026-02-08 05:59:53.068341 | orchestrator | PLAY [Upgrade ceph mgr nodes when implicitly collocated on monitors] ***********
2026-02-08 05:59:53.068352 | orchestrator |
2026-02-08 05:59:53.068363 | orchestrator | TASK [Stop ceph mgr] ***********************************************************
2026-02-08 05:59:53.068374 | orchestrator | Sunday 08 February 2026 05:59:48 +0000 (0:00:00.610) 0:08:46.961 *******
2026-02-08 05:59:53.068385 | orchestrator | skipping: [testbed-node-1]
2026-02-08 05:59:53.068396 | orchestrator |
2026-02-08 05:59:53.068406 | orchestrator | TASK [ceph-facts : Include facts.yml] ******************************************
2026-02-08 05:59:53.068417 | orchestrator | Sunday 08 February 2026 05:59:49 +0000 (0:00:00.227) 0:08:47.188 *******
2026-02-08 05:59:53.068428 | orchestrator | skipping: [testbed-node-1]
2026-02-08 05:59:53.068438 | orchestrator |
2026-02-08 05:59:53.068449 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-02-08 05:59:53.068483 | orchestrator | Sunday 08 February 2026 05:59:49 +0000 (0:00:00.457) 0:08:47.403 *******
2026-02-08 05:59:53.068494 | orchestrator | skipping: [testbed-node-1]
2026-02-08 05:59:53.068505 | orchestrator |
2026-02-08 05:59:53.068516 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-02-08 05:59:53.068527 | orchestrator | Sunday 08 February 2026 05:59:49 +0000 (0:00:00.143) 0:08:47.861 *******
2026-02-08 05:59:53.068538 | orchestrator | skipping: [testbed-node-1]
2026-02-08 05:59:53.068549 | orchestrator |
2026-02-08 05:59:53.068560 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-02-08 05:59:53.068570 | orchestrator | Sunday 08 February 2026 05:59:49 +0000 (0:00:00.143) 0:08:48.005 *******
2026-02-08 05:59:53.068581 | orchestrator | skipping: [testbed-node-1]
2026-02-08 05:59:53.068592 | orchestrator |
2026-02-08 05:59:53.068603 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-02-08 05:59:53.068613 | orchestrator | Sunday 08 February 2026 05:59:50 +0000 (0:00:00.152) 0:08:48.157 *******
2026-02-08 05:59:53.068674 | orchestrator | skipping: [testbed-node-1]
2026-02-08 05:59:53.068687 | orchestrator |
2026-02-08 05:59:53.068707 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-02-08 05:59:53.068730 | orchestrator | Sunday 08 February 2026 05:59:50 +0000 (0:00:00.150) 0:08:48.307 *******
2026-02-08 05:59:53.068741 | orchestrator | skipping: [testbed-node-1]
2026-02-08 05:59:53.068751 | orchestrator |
2026-02-08 05:59:53.068762 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-02-08 05:59:53.068773 | orchestrator | Sunday 08 February 2026 05:59:50 +0000 (0:00:00.139) 0:08:48.447 *******
2026-02-08 05:59:53.068784 | orchestrator | skipping: [testbed-node-1]
2026-02-08 05:59:53.068795 | orchestrator |
2026-02-08 05:59:53.068806 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-02-08 05:59:53.068817 | orchestrator | Sunday 08 February 2026 05:59:50 +0000 (0:00:00.133) 0:08:48.580 *******
2026-02-08 05:59:53.068827 | orchestrator | skipping: [testbed-node-1]
2026-02-08 05:59:53.068838 | orchestrator |
2026-02-08 05:59:53.068849 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-02-08 05:59:53.068860 | orchestrator | Sunday 08 February 2026 05:59:50 +0000 (0:00:00.131) 0:08:48.711 *******
2026-02-08 05:59:53.068871 | orchestrator | skipping: [testbed-node-1]
2026-02-08 05:59:53.068882 | orchestrator |
2026-02-08 05:59:53.068893 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-02-08 05:59:53.068903 | orchestrator | Sunday 08 February 2026 05:59:50 +0000 (0:00:00.142) 0:08:48.854 *******
2026-02-08 05:59:53.068914 | orchestrator | skipping: [testbed-node-1]
2026-02-08 05:59:53.068925 | orchestrator |
2026-02-08 05:59:53.068936 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-02-08 05:59:53.068947 | orchestrator | Sunday 08 February 2026 05:59:50 +0000 (0:00:00.153) 0:08:49.007 *******
2026-02-08 05:59:53.068958 | orchestrator | skipping: [testbed-node-1]
2026-02-08 05:59:53.068969 | orchestrator |
2026-02-08 05:59:53.068980 | orchestrator | TASK [ceph-common : Include configure_repository.yml] **************************
2026-02-08 05:59:53.068990 | orchestrator | Sunday 08 February 2026 05:59:51 +0000 (0:00:00.223) 0:08:49.230 *******
2026-02-08 05:59:53.069001 | orchestrator | skipping: [testbed-node-1]
2026-02-08 05:59:53.069012 | orchestrator |
2026-02-08 05:59:53.069023 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] **************
2026-02-08 05:59:53.069034 | orchestrator | Sunday 08 February 2026 05:59:51 +0000 (0:00:00.142) 0:08:49.373 *******
2026-02-08 05:59:53.069044 | orchestrator | skipping: [testbed-node-1]
2026-02-08 05:59:53.069055 | orchestrator |
2026-02-08 05:59:53.069066 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] ****************
2026-02-08 05:59:53.069077 | orchestrator | Sunday 08 February 2026 05:59:51 +0000 (0:00:00.140) 0:08:49.513 *******
2026-02-08 05:59:53.069088 | orchestrator | skipping: [testbed-node-1]
2026-02-08 05:59:53.069098 | orchestrator |
2026-02-08 05:59:53.069109 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ********************
2026-02-08 05:59:53.069120 | orchestrator | Sunday 08 February 2026 05:59:51 +0000 (0:00:00.124) 0:08:49.638 *******
2026-02-08 05:59:53.069131 | orchestrator | skipping: [testbed-node-1]
2026-02-08 05:59:53.069141 | orchestrator |
2026-02-08 05:59:53.069152 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] ***************
2026-02-08 05:59:53.069163 | orchestrator | Sunday 08 February 2026 05:59:52 +0000 (0:00:00.449) 0:08:50.088 *******
2026-02-08 05:59:53.069174 | orchestrator | skipping: [testbed-node-1]
2026-02-08 05:59:53.069185 | orchestrator |
2026-02-08 05:59:53.069196 | orchestrator | TASK [ceph-common : Get ceph version] ******************************************
2026-02-08 05:59:53.069207 | orchestrator | Sunday 08 February 2026 05:59:52 +0000 (0:00:00.141) 0:08:50.229 *******
2026-02-08 05:59:53.069217 | orchestrator | skipping: [testbed-node-1]
2026-02-08 05:59:53.069228 | orchestrator |
2026-02-08 05:59:53.069239 | orchestrator | TASK [ceph-common : Set_fact ceph_version] *************************************
2026-02-08 05:59:53.069250 | orchestrator | Sunday 08 February 2026 05:59:52 +0000 (0:00:00.138) 0:08:50.368 *******
2026-02-08 05:59:53.069261 | orchestrator | skipping: [testbed-node-1]
2026-02-08 05:59:53.069272 | orchestrator |
2026-02-08 05:59:53.069283 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] ***
2026-02-08 05:59:53.069304 | orchestrator | Sunday 08 February 2026 05:59:52 +0000 (0:00:00.153) 0:08:50.521 *******
2026-02-08 05:59:53.069315 | orchestrator | skipping: [testbed-node-1]
2026-02-08 05:59:53.069326 | orchestrator |
2026-02-08 05:59:53.069337 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] *************************
2026-02-08 05:59:53.069348 | orchestrator | Sunday 08 February 2026 05:59:52 +0000 (0:00:00.146) 0:08:50.667 *******
2026-02-08 05:59:53.069358 | orchestrator | skipping: [testbed-node-1]
2026-02-08 05:59:53.069369 | orchestrator |
2026-02-08 05:59:53.069380 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************
2026-02-08 05:59:53.069391 | orchestrator | Sunday 08 February 2026 05:59:52 +0000 (0:00:00.133) 0:08:50.801 *******
2026-02-08 05:59:53.069401 | orchestrator | skipping: [testbed-node-1]
2026-02-08 05:59:53.069412 | orchestrator |
2026-02-08 05:59:53.069423 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ********************
2026-02-08 05:59:53.069434 | orchestrator | Sunday 08 February 2026 05:59:52 +0000 (0:00:00.141) 0:08:50.943 *******
2026-02-08 05:59:53.069445 | orchestrator | skipping: [testbed-node-1]
2026-02-08 05:59:53.069456 | orchestrator |
2026-02-08 05:59:53.069475 | orchestrator | TASK [ceph-common : Include selinux.yml] ***************************************
2026-02-08 06:00:00.975117 | orchestrator | Sunday 08 February 2026 05:59:53 +0000 (0:00:00.164) 0:08:51.107 *******
2026-02-08 06:00:00.975202 | orchestrator | skipping: [testbed-node-1]
2026-02-08 06:00:00.975217 | orchestrator |
2026-02-08 06:00:00.975231 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] ***************
2026-02-08 06:00:00.975243 | orchestrator | Sunday 08 February 2026 05:59:53 +0000 (0:00:00.213) 0:08:51.320 *******
2026-02-08 06:00:00.975254 | orchestrator | skipping: [testbed-node-1]
2026-02-08 06:00:00.975265 | orchestrator |
2026-02-08 06:00:00.975276 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ******************************
2026-02-08 06:00:00.975287 | orchestrator | Sunday 08 February 2026 05:59:53 +0000 (0:00:00.132) 0:08:51.453 *******
2026-02-08 06:00:00.975298 | orchestrator | skipping: [testbed-node-1]
2026-02-08 06:00:00.975305 | orchestrator |
2026-02-08 06:00:00.975312 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] ***********************
2026-02-08 06:00:00.975336 | orchestrator | Sunday 08 February 2026 05:59:53 +0000 (0:00:00.139) 0:08:51.593 *******
2026-02-08 06:00:00.975347 | orchestrator | skipping: [testbed-node-1]
2026-02-08 06:00:00.975360 | orchestrator |
2026-02-08 06:00:00.975371 | orchestrator | TASK [ceph-container-common : Include registry.yml] ****************************
2026-02-08 06:00:00.975382 | orchestrator | Sunday 08 February 2026 05:59:53 +0000 (0:00:00.161) 0:08:51.755 *******
2026-02-08 06:00:00.975393 | orchestrator | skipping: [testbed-node-1]
2026-02-08 06:00:00.975404 | orchestrator |
2026-02-08 06:00:00.975415 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] *************************
2026-02-08 06:00:00.975425 | orchestrator | Sunday 08 February 2026 05:59:54 +0000 (0:00:00.464) 0:08:52.219 *******
2026-02-08 06:00:00.975435 | orchestrator | skipping: [testbed-node-1]
2026-02-08 06:00:00.975446 | orchestrator |
2026-02-08 06:00:00.975456 | orchestrator | TASK [ceph-container-common : Get ceph version] ********************************
2026-02-08 06:00:00.975466 | orchestrator | Sunday 08 February 2026 05:59:54 +0000 (0:00:00.152) 0:08:52.372 *******
2026-02-08 06:00:00.975477 | orchestrator | skipping: [testbed-node-1]
2026-02-08 06:00:00.975487 | orchestrator |
2026-02-08 06:00:00.975498 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] ***
2026-02-08 06:00:00.975511 | orchestrator | Sunday 08 February 2026 05:59:54 +0000 (0:00:00.131) 0:08:52.504 *******
2026-02-08 06:00:00.975521 | orchestrator | skipping: [testbed-node-1]
2026-02-08 06:00:00.975532 | orchestrator |
2026-02-08 06:00:00.975543 | orchestrator | TASK [ceph-container-common : Include release.yml] *****************************
2026-02-08 06:00:00.975555 | orchestrator | Sunday 08 February 2026 05:59:54 +0000 (0:00:00.135) 0:08:52.639 *******
2026-02-08 06:00:00.975567 | orchestrator | skipping: [testbed-node-1]
2026-02-08 06:00:00.975580 | orchestrator |
2026-02-08 06:00:00.975610 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] **********************
2026-02-08 06:00:00.975617 | orchestrator | Sunday 08 February 2026 05:59:54 +0000 (0:00:00.216) 0:08:52.856 *******
2026-02-08 06:00:00.975689 | orchestrator | skipping: [testbed-node-1]
2026-02-08 06:00:00.975701 | orchestrator |
2026-02-08 06:00:00.975711 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************
2026-02-08 06:00:00.975722 | orchestrator | Sunday 08 February 2026 05:59:54 +0000 (0:00:00.138) 0:08:52.994 *******
2026-02-08 06:00:00.975734 | orchestrator | skipping: [testbed-node-1]
2026-02-08 06:00:00.975746 | orchestrator |
2026-02-08 06:00:00.975758 | orchestrator | TASK [ceph-config : Reset num_osds] ********************************************
2026-02-08 06:00:00.975766 | orchestrator | Sunday 08 February 2026 05:59:55 +0000 (0:00:00.151) 0:08:53.146 *******
2026-02-08 06:00:00.975775 | orchestrator | skipping: [testbed-node-1]
2026-02-08 06:00:00.975784 | orchestrator |
2026-02-08 06:00:00.975792 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] *********************
2026-02-08 06:00:00.975801 | orchestrator | Sunday 08 February 2026 05:59:55 +0000 (0:00:00.164) 0:08:53.310 *******
2026-02-08 06:00:00.975809 | orchestrator | skipping: [testbed-node-1]
2026-02-08 06:00:00.975818 | orchestrator |
2026-02-08 06:00:00.975826 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ******************
2026-02-08 06:00:00.975836 | orchestrator | Sunday 08 February 2026 05:59:55 +0000 (0:00:00.142) 0:08:53.453 *******
2026-02-08 06:00:00.975845 | orchestrator | skipping: [testbed-node-1]
2026-02-08 06:00:00.975853 | orchestrator |
2026-02-08 06:00:00.975861 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] *********************************
2026-02-08 06:00:00.975871 | orchestrator | Sunday 08 February 2026 05:59:55 +0000 (0:00:00.148) 0:08:53.602 *******
2026-02-08 06:00:00.975880 | orchestrator | skipping: [testbed-node-1]
2026-02-08 06:00:00.975889 | orchestrator |
2026-02-08 06:00:00.975897 | orchestrator | TASK [ceph-config : Set_fact _devices] *****************************************
2026-02-08 06:00:00.975907 | orchestrator | Sunday 08 February 2026 05:59:55 +0000 (0:00:00.126) 0:08:53.728 *******
2026-02-08 06:00:00.975915 | orchestrator | skipping: [testbed-node-1]
2026-02-08 06:00:00.975924 | orchestrator |
2026-02-08 06:00:00.975933 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] ***
2026-02-08 06:00:00.975943 | orchestrator | Sunday 08 February 2026 05:59:55 +0000 (0:00:00.129) 0:08:53.858 *******
2026-02-08 06:00:00.975952 | orchestrator | skipping: [testbed-node-1]
2026-02-08 06:00:00.975960 | orchestrator |
2026-02-08 06:00:00.975969 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] ***
2026-02-08 06:00:00.975979 | orchestrator | Sunday 08 February 2026 05:59:56 +0000 (0:00:00.441) 0:08:54.299 *******
2026-02-08 06:00:00.975988 | orchestrator | skipping: [testbed-node-1]
2026-02-08 06:00:00.975997 | orchestrator |
2026-02-08 06:00:00.976006 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] ***
2026-02-08 06:00:00.976016 | orchestrator | Sunday 08 February 2026 05:59:56 +0000 (0:00:00.133) 0:08:54.432 *******
2026-02-08 06:00:00.976025 | orchestrator | skipping: [testbed-node-1]
2026-02-08 06:00:00.976033 | orchestrator |
2026-02-08 06:00:00.976040 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] ***
2026-02-08 06:00:00.976047 | orchestrator | Sunday 08 February 2026 05:59:56 +0000 (0:00:00.153) 0:08:54.586 *******
2026-02-08 06:00:00.976055 | orchestrator | skipping: [testbed-node-1]
2026-02-08 06:00:00.976062 | orchestrator |
2026-02-08 06:00:00.976087 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] *********************
2026-02-08 06:00:00.976095 | orchestrator | Sunday 08 February 2026 05:59:56 +0000 (0:00:00.162) 0:08:54.749 *******
2026-02-08 06:00:00.976102 | orchestrator | skipping: [testbed-node-1]
2026-02-08 06:00:00.976109 | orchestrator |
2026-02-08 06:00:00.976117 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] *******************************
2026-02-08 06:00:00.976124 | orchestrator | Sunday 08 February 2026 05:59:56 +0000 (0:00:00.155) 0:08:54.904 *******
2026-02-08 06:00:00.976139 | orchestrator | skipping: [testbed-node-1]
2026-02-08 06:00:00.976147 | orchestrator |
2026-02-08 06:00:00.976154 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] **************
2026-02-08 06:00:00.976161 | orchestrator | Sunday 08 February 2026 05:59:57 +0000 (0:00:00.176) 0:08:55.080 *******
2026-02-08 06:00:00.976169 | orchestrator | skipping: [testbed-node-1]
2026-02-08 06:00:00.976176 | orchestrator |
2026-02-08 06:00:00.976183 | orchestrator | TASK [ceph-config : Render rgw configs] ****************************************
2026-02-08 06:00:00.976197 | orchestrator | Sunday 08 February 2026 05:59:57 +0000 (0:00:00.242) 0:08:55.323 *******
2026-02-08 06:00:00.976205 | orchestrator | skipping: [testbed-node-1]
2026-02-08 06:00:00.976212 | orchestrator |
2026-02-08 06:00:00.976219 | orchestrator | TASK [ceph-config : Set config to cluster] *************************************
2026-02-08 06:00:00.976226 | orchestrator | Sunday 08 February 2026 05:59:57 +0000 (0:00:00.152) 0:08:55.476 *******
2026-02-08 06:00:00.976234 | orchestrator | skipping: [testbed-node-1]
2026-02-08 06:00:00.976241 | orchestrator |
2026-02-08 06:00:00.976248 | orchestrator | TASK [ceph-config : Set rgw configs to file] ***********************************
2026-02-08 06:00:00.976255 | orchestrator | Sunday 08 February 2026 05:59:57 +0000 (0:00:00.256) 0:08:55.732 *******
2026-02-08 06:00:00.976262 | orchestrator | skipping: [testbed-node-1]
2026-02-08 06:00:00.976270 | orchestrator |
2026-02-08 06:00:00.976277 | orchestrator | TASK [ceph-config : Create ceph conf directory] ********************************
2026-02-08 06:00:00.976284 | orchestrator | Sunday 08 February 2026 05:59:57 +0000 (0:00:00.156) 0:08:55.888 *******
2026-02-08 06:00:00.976291 | orchestrator | skipping: [testbed-node-1]
2026-02-08 06:00:00.976299 | orchestrator |
2026-02-08 06:00:00.976306 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-02-08 06:00:00.976314 | orchestrator | Sunday 08 February 2026 05:59:57 +0000 (0:00:00.139) 0:08:56.027 *******
2026-02-08 06:00:00.976322 | orchestrator | skipping: [testbed-node-1]
2026-02-08 06:00:00.976329 | orchestrator |
2026-02-08 06:00:00.976336 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-02-08 06:00:00.976343 | orchestrator | Sunday 08 February 2026 05:59:58 +0000 (0:00:00.142) 0:08:56.170 *******
2026-02-08 06:00:00.976351 | orchestrator | skipping: [testbed-node-1]
2026-02-08 06:00:00.976358 | orchestrator |
2026-02-08 06:00:00.976365 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-02-08 06:00:00.976372 | orchestrator | Sunday 08 February 2026 05:59:58 +0000 (0:00:00.135) 0:08:56.305 *******
2026-02-08 06:00:00.976380 | orchestrator | skipping: [testbed-node-1]
2026-02-08 06:00:00.976387 | orchestrator |
2026-02-08 06:00:00.976394 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-02-08 06:00:00.976401 | orchestrator | Sunday 08 February 2026 05:59:58 +0000 (0:00:00.134) 0:08:56.440 *******
2026-02-08 06:00:00.976409 | orchestrator | skipping: [testbed-node-1]
2026-02-08 06:00:00.976416 | orchestrator |
2026-02-08 06:00:00.976423 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-02-08 06:00:00.976430 | orchestrator | Sunday 08 February 2026 05:59:58 +0000 (0:00:00.445) 0:08:56.886 *******
2026-02-08 06:00:00.976438 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)
2026-02-08 06:00:00.976445 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)
2026-02-08 06:00:00.976453 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)
2026-02-08 06:00:00.976460 | orchestrator | skipping: [testbed-node-1]
2026-02-08 06:00:00.976467 | orchestrator |
2026-02-08 06:00:00.976475 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-02-08 06:00:00.976482 | orchestrator | Sunday 08 February 2026 05:59:59 +0000 (0:00:00.440) 0:08:57.326 *******
2026-02-08 06:00:00.976489 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)
2026-02-08 06:00:00.976496 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)
2026-02-08 06:00:00.976504 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)
2026-02-08 06:00:00.976515 | orchestrator | skipping: [testbed-node-1]
2026-02-08 06:00:00.976523 | orchestrator |
2026-02-08 06:00:00.976530 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-02-08 06:00:00.976537 | orchestrator | Sunday 08 February 2026 05:59:59 +0000 (0:00:00.506) 0:08:57.833 *******
2026-02-08 06:00:00.976555 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)
2026-02-08 06:00:00.976562 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)
2026-02-08 06:00:00.976570 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)
2026-02-08 06:00:00.976577 | orchestrator | skipping: [testbed-node-1]
2026-02-08 06:00:00.976584 | orchestrator |
2026-02-08 06:00:00.976591 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-02-08 06:00:00.976599 | orchestrator | Sunday 08 February 2026 06:00:00 +0000 (0:00:00.425) 0:08:58.258 *******
2026-02-08 06:00:00.976606 | orchestrator | skipping: [testbed-node-1]
2026-02-08 06:00:00.976613 | orchestrator |
2026-02-08 06:00:00.976620 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-02-08 06:00:00.976652 | orchestrator | Sunday 08 February 2026 06:00:00 +0000 (0:00:00.178) 0:08:58.437 *******
2026-02-08 06:00:00.976660 | orchestrator | skipping: [testbed-node-1] => (item=0)
2026-02-08 06:00:00.976667 | orchestrator | skipping: [testbed-node-1]
2026-02-08 06:00:00.976674 | orchestrator |
2026-02-08 06:00:00.976682 | orchestrator | TASK [ceph-config : Generate Ceph file] ****************************************
2026-02-08 06:00:00.976689 | orchestrator | Sunday 08 February 2026 06:00:00 +0000 (0:00:00.344) 0:08:58.781 *******
2026-02-08 06:00:00.976696 | orchestrator | skipping: [testbed-node-1]
2026-02-08 06:00:00.976704 | orchestrator |
2026-02-08 06:00:00.976716 | orchestrator | TASK [ceph-mgr : Set_fact container_exec_cmd] **********************************
2026-02-08 06:00:09.316213 | orchestrator | Sunday 08 February 2026 06:00:00 +0000 (0:00:00.234) 0:08:59.016 *******
2026-02-08 06:00:09.316322 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)
2026-02-08 06:00:09.316338 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)
2026-02-08 06:00:09.316350 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)
2026-02-08 06:00:09.316362 | orchestrator | skipping: [testbed-node-1]
2026-02-08 06:00:09.316374 | orchestrator |
2026-02-08 06:00:09.316387 | orchestrator | TASK [ceph-mgr : Include common.yml] *******************************************
2026-02-08 06:00:09.316399 | orchestrator | Sunday 08 February 2026 06:00:01 +0000 (0:00:00.458) 0:08:59.474 *******
2026-02-08 06:00:09.316410 | orchestrator | skipping: [testbed-node-1]
2026-02-08 06:00:09.316422 | orchestrator |
2026-02-08 06:00:09.316433 | orchestrator | TASK [ceph-mgr : Include pre_requisite.yml] ************************************
2026-02-08 06:00:09.316461 | orchestrator | Sunday 08 February 2026 06:00:01 +0000 (0:00:00.141) 0:08:59.616 *******
2026-02-08 06:00:09.316473 | orchestrator | skipping: [testbed-node-1]
2026-02-08 06:00:09.316484 | orchestrator |
2026-02-08 06:00:09.316495 | orchestrator | TASK [ceph-mgr : Include start_mgr.yml] ****************************************
2026-02-08 06:00:09.316506 | orchestrator | Sunday 08 February 2026 06:00:01 +0000 (0:00:00.125) 0:08:59.741 *******
2026-02-08 06:00:09.316517 | orchestrator | skipping: [testbed-node-1]
2026-02-08 06:00:09.316528 | orchestrator |
2026-02-08 06:00:09.316539 | orchestrator | TASK [ceph-mgr : Include mgr_modules.yml] **************************************
2026-02-08 06:00:09.316550 | orchestrator | Sunday 08 February 2026 06:00:01 +0000 (0:00:00.146) 0:08:59.888 *******
2026-02-08 06:00:09.316561 | orchestrator | skipping: [testbed-node-1]
2026-02-08 06:00:09.316572 | orchestrator |
2026-02-08 06:00:09.316583 | orchestrator | PLAY [Upgrade ceph mgr nodes when implicitly collocated on monitors] ***********
2026-02-08 06:00:09.316595 | orchestrator |
2026-02-08 06:00:09.316606 | orchestrator | TASK [Stop ceph mgr] ***********************************************************
2026-02-08 06:00:09.316617 | orchestrator | Sunday 08 February 2026 06:00:02 +0000 (0:00:00.909) 0:09:00.797 *******
2026-02-08 06:00:09.316693 | orchestrator | skipping: [testbed-node-2]
2026-02-08 06:00:09.316706 | orchestrator |
2026-02-08 06:00:09.316717 | orchestrator | TASK [ceph-facts : Include facts.yml] ******************************************
2026-02-08 06:00:09.316753 | orchestrator | Sunday 08 February 2026 06:00:02 +0000 (0:00:00.198) 0:09:00.996 *******
2026-02-08 06:00:09.316768 | orchestrator | skipping: [testbed-node-2]
2026-02-08 06:00:09.316782 | orchestrator |
2026-02-08 06:00:09.316795 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-02-08 06:00:09.316808 | orchestrator | Sunday 08 February 2026 06:00:03 +0000 (0:00:00.218) 0:09:01.214 *******
2026-02-08 06:00:09.316821 | orchestrator | skipping: [testbed-node-2]
2026-02-08 06:00:09.316834 | orchestrator |
2026-02-08 06:00:09.316848 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-02-08 06:00:09.316862 | orchestrator | Sunday 08 February 2026 06:00:03 +0000 (0:00:00.150) 0:09:01.365 *******
2026-02-08 06:00:09.316875 | orchestrator | skipping: [testbed-node-2]
2026-02-08 06:00:09.316889 | orchestrator |
2026-02-08 06:00:09.316903 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-02-08 06:00:09.316918 | orchestrator | Sunday 08 February 2026 06:00:03 +0000 (0:00:00.131) 0:09:01.497 *******
2026-02-08 06:00:09.316931 | orchestrator | skipping: [testbed-node-2]
2026-02-08 06:00:09.316945 | orchestrator |
2026-02-08 06:00:09.316958 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-02-08 06:00:09.316971 | orchestrator | Sunday 08 February 2026 06:00:03 +0000 (0:00:00.179) 0:09:01.676 *******
2026-02-08 06:00:09.316986 | orchestrator | skipping: [testbed-node-2]
2026-02-08 06:00:09.316999 | orchestrator |
2026-02-08 06:00:09.317013 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-02-08 06:00:09.317024 | orchestrator | Sunday 08 February 2026 06:00:03 +0000 (0:00:00.142) 0:09:01.818 *******
2026-02-08 06:00:09.317035 | orchestrator | skipping: [testbed-node-2]
2026-02-08 06:00:09.317046 | orchestrator |
2026-02-08 06:00:09.317057 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-02-08 06:00:09.317068 | orchestrator | Sunday 08 February 2026 06:00:03 +0000 (0:00:00.127) 0:09:01.945 *******
2026-02-08 06:00:09.317079 | orchestrator | skipping: [testbed-node-2]
2026-02-08 06:00:09.317090 | orchestrator |
2026-02-08 06:00:09.317101 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-02-08 06:00:09.317112 | orchestrator | Sunday 08 February 2026 06:00:04 +0000 (0:00:00.141) 0:09:02.087 *******
2026-02-08 06:00:09.317123 | orchestrator | skipping: [testbed-node-2]
2026-02-08 06:00:09.317134 | orchestrator |
2026-02-08 06:00:09.317145 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-02-08 06:00:09.317155 | orchestrator | Sunday 08 February 2026 06:00:04 +0000 (0:00:00.138) 0:09:02.226 *******
2026-02-08 06:00:09.317166 | orchestrator | skipping: [testbed-node-2]
2026-02-08 06:00:09.317177 | orchestrator |
2026-02-08 06:00:09.317188 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-02-08 06:00:09.317199 | orchestrator | Sunday 08 February 2026 06:00:04 +0000 (0:00:00.439) 0:09:02.666 *******
2026-02-08 06:00:09.317210 | orchestrator | skipping: [testbed-node-2]
2026-02-08 06:00:09.317221 | orchestrator |
2026-02-08 06:00:09.317232 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-02-08 06:00:09.317243 | orchestrator | Sunday 08 February 2026 06:00:04 +0000 (0:00:00.143) 0:09:02.809 *******
2026-02-08 06:00:09.317254 | orchestrator | skipping: [testbed-node-2]
2026-02-08 06:00:09.317265 | orchestrator |
2026-02-08 06:00:09.317276 | orchestrator | TASK [ceph-common : Include configure_repository.yml] **************************
2026-02-08 06:00:09.317287 | orchestrator | Sunday 08 February 2026 06:00:05 +0000 (0:00:00.243) 0:09:03.053 *******
2026-02-08 06:00:09.317298 | orchestrator | skipping: [testbed-node-2]
2026-02-08 06:00:09.317309 | orchestrator |
2026-02-08 06:00:09.317320 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] **************
2026-02-08 06:00:09.317331 | orchestrator | Sunday 08 February 2026 06:00:05 +0000 (0:00:00.162) 0:09:03.215 *******
2026-02-08 06:00:09.317342 | orchestrator | skipping: [testbed-node-2]
2026-02-08 06:00:09.317353 | orchestrator |
2026-02-08 06:00:09.317376 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] ****************
2026-02-08 06:00:09.317407 | orchestrator | Sunday 08 February 2026 06:00:05 +0000 (0:00:00.160) 0:09:03.375 *******
2026-02-08 06:00:09.317419 | orchestrator | skipping: [testbed-node-2]
2026-02-08 06:00:09.317430 | orchestrator |
2026-02-08 06:00:09.317441 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ********************
2026-02-08 06:00:09.317452 | orchestrator | Sunday 08 February 2026 06:00:05 +0000 (0:00:00.199) 0:09:03.575 *******
2026-02-08 06:00:09.317463 | orchestrator | skipping: [testbed-node-2]
2026-02-08 06:00:09.317475 | orchestrator |
2026-02-08 06:00:09.317486 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] ***************
2026-02-08 06:00:09.317496 | orchestrator | Sunday 08 February 2026 06:00:05 +0000 (0:00:00.154) 0:09:03.729 *******
2026-02-08 06:00:09.317507 | orchestrator | skipping: [testbed-node-2]
2026-02-08 06:00:09.317518 | orchestrator |
2026-02-08 06:00:09.317529 | orchestrator | TASK [ceph-common : Get ceph version] ******************************************
2026-02-08 06:00:09.317546 | orchestrator | Sunday 08 February 2026 06:00:05 +0000 (0:00:00.131) 0:09:03.861 *******
2026-02-08 06:00:09.317557 | orchestrator | skipping: [testbed-node-2]
2026-02-08 06:00:09.317569 | orchestrator |
2026-02-08 06:00:09.317580 | orchestrator | TASK [ceph-common : Set_fact ceph_version] *************************************
2026-02-08 06:00:09.317591 | orchestrator | Sunday 08 February 2026 06:00:05 +0000 (0:00:00.136) 0:09:03.998 *******
2026-02-08 06:00:09.317602 | orchestrator | skipping: [testbed-node-2]
2026-02-08 06:00:09.317613 | orchestrator |
2026-02-08 06:00:09.317645 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] ***
2026-02-08 06:00:09.317660 | orchestrator | Sunday 08 February 2026 06:00:06 +0000 (0:00:00.144) 0:09:04.142 *******
2026-02-08 06:00:09.317672 | orchestrator | skipping: [testbed-node-2]
2026-02-08 06:00:09.317691 | orchestrator |
2026-02-08 06:00:09.317704 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] *************************
2026-02-08 06:00:09.317716 | orchestrator | Sunday 08 February 2026 06:00:06 +0000 (0:00:00.144) 0:09:04.287 *******
2026-02-08 06:00:09.317726 | orchestrator | skipping: [testbed-node-2]
2026-02-08 06:00:09.317737 | orchestrator |
2026-02-08 06:00:09.317748 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************
2026-02-08 06:00:09.317759 | orchestrator | Sunday 08 February 2026 06:00:06 +0000 (0:00:00.229) 0:09:04.517 *******
2026-02-08 06:00:09.317770 | orchestrator | skipping: [testbed-node-2]
2026-02-08 06:00:09.317780 | orchestrator |
2026-02-08 06:00:09.317791 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ********************
2026-02-08 06:00:09.317802 | orchestrator | Sunday 08 February 2026 06:00:06 +0000 (0:00:00.473) 0:09:04.991 *******
2026-02-08 06:00:09.317813 | orchestrator | skipping: [testbed-node-2]
2026-02-08 06:00:09.317824 | orchestrator |
2026-02-08 06:00:09.317835 | orchestrator | TASK [ceph-common : Include selinux.yml] ***************************************
2026-02-08 06:00:09.317846 | orchestrator | Sunday 08 February 2026 06:00:07 +0000 (0:00:00.149) 0:09:05.140 *******
2026-02-08 06:00:09.317856 | orchestrator | skipping: [testbed-node-2]
2026-02-08 06:00:09.317867 | orchestrator |
2026-02-08 06:00:09.317878 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] ***************
2026-02-08 06:00:09.317889 | orchestrator | Sunday 08 February 2026 06:00:07 +0000 (0:00:00.236) 0:09:05.376 *******
2026-02-08 06:00:09.317900 | orchestrator | skipping: [testbed-node-2]
2026-02-08 06:00:09.317911 | orchestrator |
2026-02-08 06:00:09.317921 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ******************************
2026-02-08 06:00:09.317932 | orchestrator | Sunday 08 February 2026 06:00:07 +0000 (0:00:00.152) 0:09:05.529 *******
2026-02-08 06:00:09.317943 | orchestrator | skipping: [testbed-node-2]
2026-02-08 06:00:09.317954 | orchestrator |
2026-02-08 06:00:09.317965 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] ***********************
2026-02-08 06:00:09.317976 | orchestrator | Sunday 08 February 2026 06:00:07 +0000 (0:00:00.131) 0:09:05.660 *******
2026-02-08 06:00:09.317987 | orchestrator | skipping: [testbed-node-2]
2026-02-08 06:00:09.318006 | orchestrator |
2026-02-08 06:00:09.318077 | orchestrator | TASK [ceph-container-common : Include registry.yml] ****************************
2026-02-08 06:00:09.318089 | orchestrator | Sunday 08 February 2026 06:00:07 +0000 (0:00:00.191) 0:09:05.852 *******
2026-02-08 06:00:09.318100 | orchestrator | skipping: [testbed-node-2]
2026-02-08 06:00:09.318111 | orchestrator |
2026-02-08 06:00:09.318122 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] *************************
2026-02-08 06:00:09.318133 | orchestrator | Sunday 08 February 2026 06:00:07 +0000 (0:00:00.132) 0:09:05.985 *******
2026-02-08 06:00:09.318144 | orchestrator | skipping: [testbed-node-2]
2026-02-08 06:00:09.318155 | orchestrator |
2026-02-08 06:00:09.318172 | orchestrator | TASK [ceph-container-common : Get ceph version] ********************************
2026-02-08 06:00:09.318189 | orchestrator | Sunday 08 February 2026 06:00:08 +0000 (0:00:00.151) 0:09:06.136 *******
2026-02-08 06:00:09.318208 | orchestrator | skipping: [testbed-node-2]
2026-02-08 06:00:09.318226 | orchestrator |
2026-02-08 06:00:09.318243 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] ***
2026-02-08 06:00:09.318255 | orchestrator | Sunday 08 February 2026 06:00:08 +0000 (0:00:00.154) 0:09:06.291 *******
2026-02-08 06:00:09.318266 | orchestrator | skipping: [testbed-node-2]
2026-02-08 06:00:09.318276 | orchestrator |
2026-02-08 06:00:09.318287 | orchestrator | TASK [ceph-container-common : Include release.yml] *****************************
2026-02-08 06:00:09.318298 | orchestrator | Sunday 08 February 2026 06:00:08 +0000 (0:00:00.143) 0:09:06.434 *******
2026-02-08 06:00:09.318312 | orchestrator | skipping: [testbed-node-2]
2026-02-08 06:00:09.318332 | orchestrator |
2026-02-08 06:00:09.318350 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] **********************
2026-02-08 06:00:09.318362 | orchestrator | Sunday 08 February 2026 06:00:08 +0000 (0:00:00.210) 0:09:06.645 *******
2026-02-08 06:00:09.318373 | orchestrator | skipping: [testbed-node-2]
2026-02-08 06:00:09.318384 | orchestrator |
2026-02-08 06:00:09.318395 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************
2026-02-08 06:00:09.318406 | orchestrator | Sunday 08 February 2026 06:00:08 +0000 (0:00:00.143) 0:09:06.788 *******
2026-02-08 06:00:09.318417 | orchestrator | skipping: [testbed-node-2]
2026-02-08 06:00:09.318428 | orchestrator |
2026-02-08 06:00:09.318439 | orchestrator | TASK [ceph-config : Reset num_osds] ********************************************
2026-02-08 06:00:09.318461 | orchestrator | Sunday 08 February 2026 06:00:09 +0000 (0:00:00.568) 0:09:07.356 *******
2026-02-08 06:00:32.360594 | orchestrator | skipping: [testbed-node-2]
2026-02-08 06:00:32.360766 | orchestrator |
2026-02-08 06:00:32.360786 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] *********************
2026-02-08 06:00:32.360799
| orchestrator | Sunday 08 February 2026 06:00:09 +0000 (0:00:00.148) 0:09:07.505 ******* 2026-02-08 06:00:32.360811 | orchestrator | skipping: [testbed-node-2] 2026-02-08 06:00:32.360823 | orchestrator | 2026-02-08 06:00:32.360835 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ****************** 2026-02-08 06:00:32.360847 | orchestrator | Sunday 08 February 2026 06:00:09 +0000 (0:00:00.157) 0:09:07.663 ******* 2026-02-08 06:00:32.360858 | orchestrator | skipping: [testbed-node-2] 2026-02-08 06:00:32.360869 | orchestrator | 2026-02-08 06:00:32.360880 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] ********************************* 2026-02-08 06:00:32.360908 | orchestrator | Sunday 08 February 2026 06:00:09 +0000 (0:00:00.145) 0:09:07.809 ******* 2026-02-08 06:00:32.360920 | orchestrator | skipping: [testbed-node-2] 2026-02-08 06:00:32.360931 | orchestrator | 2026-02-08 06:00:32.360942 | orchestrator | TASK [ceph-config : Set_fact _devices] ***************************************** 2026-02-08 06:00:32.360953 | orchestrator | Sunday 08 February 2026 06:00:09 +0000 (0:00:00.138) 0:09:07.947 ******* 2026-02-08 06:00:32.360964 | orchestrator | skipping: [testbed-node-2] 2026-02-08 06:00:32.360976 | orchestrator | 2026-02-08 06:00:32.360987 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2026-02-08 06:00:32.361000 | orchestrator | Sunday 08 February 2026 06:00:10 +0000 (0:00:00.160) 0:09:08.108 ******* 2026-02-08 06:00:32.361033 | orchestrator | skipping: [testbed-node-2] 2026-02-08 06:00:32.361045 | orchestrator | 2026-02-08 06:00:32.361057 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2026-02-08 06:00:32.361068 | orchestrator | Sunday 08 February 2026 06:00:10 +0000 (0:00:00.139) 0:09:08.248 ******* 2026-02-08 06:00:32.361079 | orchestrator | skipping: [testbed-node-2] 
2026-02-08 06:00:32.361090 | orchestrator | 2026-02-08 06:00:32.361101 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2026-02-08 06:00:32.361115 | orchestrator | Sunday 08 February 2026 06:00:10 +0000 (0:00:00.147) 0:09:08.396 ******* 2026-02-08 06:00:32.361129 | orchestrator | skipping: [testbed-node-2] 2026-02-08 06:00:32.361143 | orchestrator | 2026-02-08 06:00:32.361157 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] *** 2026-02-08 06:00:32.361169 | orchestrator | Sunday 08 February 2026 06:00:10 +0000 (0:00:00.158) 0:09:08.554 ******* 2026-02-08 06:00:32.361182 | orchestrator | skipping: [testbed-node-2] 2026-02-08 06:00:32.361195 | orchestrator | 2026-02-08 06:00:32.361208 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] ********************* 2026-02-08 06:00:32.361223 | orchestrator | Sunday 08 February 2026 06:00:10 +0000 (0:00:00.138) 0:09:08.692 ******* 2026-02-08 06:00:32.361236 | orchestrator | skipping: [testbed-node-2] 2026-02-08 06:00:32.361251 | orchestrator | 2026-02-08 06:00:32.361263 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] ******************************* 2026-02-08 06:00:32.361276 | orchestrator | Sunday 08 February 2026 06:00:10 +0000 (0:00:00.133) 0:09:08.826 ******* 2026-02-08 06:00:32.361290 | orchestrator | skipping: [testbed-node-2] 2026-02-08 06:00:32.361303 | orchestrator | 2026-02-08 06:00:32.361317 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] ************** 2026-02-08 06:00:32.361330 | orchestrator | Sunday 08 February 2026 06:00:10 +0000 (0:00:00.179) 0:09:09.006 ******* 2026-02-08 06:00:32.361343 | orchestrator | skipping: [testbed-node-2] 2026-02-08 06:00:32.361356 | orchestrator | 2026-02-08 06:00:32.361369 | orchestrator | TASK [ceph-config : Render rgw configs] **************************************** 
2026-02-08 06:00:32.361382 | orchestrator | Sunday 08 February 2026 06:00:11 +0000 (0:00:00.241) 0:09:09.247 ******* 2026-02-08 06:00:32.361396 | orchestrator | skipping: [testbed-node-2] 2026-02-08 06:00:32.361409 | orchestrator | 2026-02-08 06:00:32.361422 | orchestrator | TASK [ceph-config : Set config to cluster] ************************************* 2026-02-08 06:00:32.361436 | orchestrator | Sunday 08 February 2026 06:00:11 +0000 (0:00:00.483) 0:09:09.730 ******* 2026-02-08 06:00:32.361449 | orchestrator | skipping: [testbed-node-2] 2026-02-08 06:00:32.361462 | orchestrator | 2026-02-08 06:00:32.361475 | orchestrator | TASK [ceph-config : Set rgw configs to file] *********************************** 2026-02-08 06:00:32.361487 | orchestrator | Sunday 08 February 2026 06:00:11 +0000 (0:00:00.250) 0:09:09.981 ******* 2026-02-08 06:00:32.361498 | orchestrator | skipping: [testbed-node-2] 2026-02-08 06:00:32.361510 | orchestrator | 2026-02-08 06:00:32.361520 | orchestrator | TASK [ceph-config : Create ceph conf directory] ******************************** 2026-02-08 06:00:32.361531 | orchestrator | Sunday 08 February 2026 06:00:12 +0000 (0:00:00.141) 0:09:10.123 ******* 2026-02-08 06:00:32.361542 | orchestrator | skipping: [testbed-node-2] 2026-02-08 06:00:32.361553 | orchestrator | 2026-02-08 06:00:32.361565 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-02-08 06:00:32.361577 | orchestrator | Sunday 08 February 2026 06:00:12 +0000 (0:00:00.144) 0:09:10.267 ******* 2026-02-08 06:00:32.361589 | orchestrator | skipping: [testbed-node-2] 2026-02-08 06:00:32.361600 | orchestrator | 2026-02-08 06:00:32.361611 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-02-08 06:00:32.361622 | orchestrator | Sunday 08 February 2026 06:00:12 +0000 (0:00:00.132) 0:09:10.400 ******* 2026-02-08 06:00:32.361704 | orchestrator 
| skipping: [testbed-node-2] 2026-02-08 06:00:32.361720 | orchestrator | 2026-02-08 06:00:32.361732 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-02-08 06:00:32.361753 | orchestrator | Sunday 08 February 2026 06:00:12 +0000 (0:00:00.147) 0:09:10.547 ******* 2026-02-08 06:00:32.361764 | orchestrator | skipping: [testbed-node-2] 2026-02-08 06:00:32.361776 | orchestrator | 2026-02-08 06:00:32.361787 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-02-08 06:00:32.361798 | orchestrator | Sunday 08 February 2026 06:00:12 +0000 (0:00:00.153) 0:09:10.701 ******* 2026-02-08 06:00:32.361809 | orchestrator | skipping: [testbed-node-2] 2026-02-08 06:00:32.361820 | orchestrator | 2026-02-08 06:00:32.361832 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-02-08 06:00:32.361862 | orchestrator | Sunday 08 February 2026 06:00:12 +0000 (0:00:00.130) 0:09:10.831 ******* 2026-02-08 06:00:32.361874 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)  2026-02-08 06:00:32.361886 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)  2026-02-08 06:00:32.361897 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)  2026-02-08 06:00:32.361908 | orchestrator | skipping: [testbed-node-2] 2026-02-08 06:00:32.361919 | orchestrator | 2026-02-08 06:00:32.361930 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-02-08 06:00:32.361941 | orchestrator | Sunday 08 February 2026 06:00:13 +0000 (0:00:00.398) 0:09:11.229 ******* 2026-02-08 06:00:32.361952 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)  2026-02-08 06:00:32.361963 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)  2026-02-08 06:00:32.361981 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)  2026-02-08 06:00:32.361992 | 
orchestrator | skipping: [testbed-node-2] 2026-02-08 06:00:32.362003 | orchestrator | 2026-02-08 06:00:32.362014 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-02-08 06:00:32.362095 | orchestrator | Sunday 08 February 2026 06:00:13 +0000 (0:00:00.406) 0:09:11.636 ******* 2026-02-08 06:00:32.362111 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)  2026-02-08 06:00:32.362128 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)  2026-02-08 06:00:32.362139 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)  2026-02-08 06:00:32.362148 | orchestrator | skipping: [testbed-node-2] 2026-02-08 06:00:32.362158 | orchestrator | 2026-02-08 06:00:32.362168 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-02-08 06:00:32.362178 | orchestrator | Sunday 08 February 2026 06:00:14 +0000 (0:00:00.419) 0:09:12.056 ******* 2026-02-08 06:00:32.362187 | orchestrator | skipping: [testbed-node-2] 2026-02-08 06:00:32.362197 | orchestrator | 2026-02-08 06:00:32.362206 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-02-08 06:00:32.362216 | orchestrator | Sunday 08 February 2026 06:00:14 +0000 (0:00:00.153) 0:09:12.209 ******* 2026-02-08 06:00:32.362226 | orchestrator | skipping: [testbed-node-2] => (item=0)  2026-02-08 06:00:32.362236 | orchestrator | skipping: [testbed-node-2] 2026-02-08 06:00:32.362246 | orchestrator | 2026-02-08 06:00:32.362255 | orchestrator | TASK [ceph-config : Generate Ceph file] **************************************** 2026-02-08 06:00:32.362265 | orchestrator | Sunday 08 February 2026 06:00:14 +0000 (0:00:00.647) 0:09:12.857 ******* 2026-02-08 06:00:32.362275 | orchestrator | skipping: [testbed-node-2] 2026-02-08 06:00:32.362284 | orchestrator | 2026-02-08 06:00:32.362294 | orchestrator | TASK [ceph-mgr : Set_fact container_exec_cmd] 
********************************** 2026-02-08 06:00:32.362303 | orchestrator | Sunday 08 February 2026 06:00:15 +0000 (0:00:00.211) 0:09:13.069 ******* 2026-02-08 06:00:32.362313 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2026-02-08 06:00:32.362323 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2026-02-08 06:00:32.362332 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2026-02-08 06:00:32.362342 | orchestrator | skipping: [testbed-node-2] 2026-02-08 06:00:32.362352 | orchestrator | 2026-02-08 06:00:32.362361 | orchestrator | TASK [ceph-mgr : Include common.yml] ******************************************* 2026-02-08 06:00:32.362379 | orchestrator | Sunday 08 February 2026 06:00:15 +0000 (0:00:00.470) 0:09:13.539 ******* 2026-02-08 06:00:32.362389 | orchestrator | skipping: [testbed-node-2] 2026-02-08 06:00:32.362399 | orchestrator | 2026-02-08 06:00:32.362408 | orchestrator | TASK [ceph-mgr : Include pre_requisite.yml] ************************************ 2026-02-08 06:00:32.362418 | orchestrator | Sunday 08 February 2026 06:00:15 +0000 (0:00:00.153) 0:09:13.693 ******* 2026-02-08 06:00:32.362428 | orchestrator | skipping: [testbed-node-2] 2026-02-08 06:00:32.362437 | orchestrator | 2026-02-08 06:00:32.362447 | orchestrator | TASK [ceph-mgr : Include start_mgr.yml] **************************************** 2026-02-08 06:00:32.362457 | orchestrator | Sunday 08 February 2026 06:00:15 +0000 (0:00:00.149) 0:09:13.843 ******* 2026-02-08 06:00:32.362466 | orchestrator | skipping: [testbed-node-2] 2026-02-08 06:00:32.362476 | orchestrator | 2026-02-08 06:00:32.362485 | orchestrator | TASK [ceph-mgr : Include mgr_modules.yml] ************************************** 2026-02-08 06:00:32.362495 | orchestrator | Sunday 08 February 2026 06:00:15 +0000 (0:00:00.140) 0:09:13.983 ******* 2026-02-08 06:00:32.362505 | orchestrator | skipping: [testbed-node-2] 2026-02-08 06:00:32.362514 | orchestrator | 2026-02-08 
06:00:32.362524 | orchestrator | PLAY [Upgrade ceph mgr nodes] ************************************************** 2026-02-08 06:00:32.362534 | orchestrator | 2026-02-08 06:00:32.362543 | orchestrator | TASK [Stop ceph mgr] *********************************************************** 2026-02-08 06:00:32.362553 | orchestrator | Sunday 08 February 2026 06:00:16 +0000 (0:00:00.614) 0:09:14.598 ******* 2026-02-08 06:00:32.362563 | orchestrator | changed: [testbed-node-0] 2026-02-08 06:00:32.362573 | orchestrator | 2026-02-08 06:00:32.362582 | orchestrator | TASK [Mask ceph mgr systemd unit] ********************************************** 2026-02-08 06:00:32.362592 | orchestrator | Sunday 08 February 2026 06:00:29 +0000 (0:00:12.978) 0:09:27.577 ******* 2026-02-08 06:00:32.362601 | orchestrator | changed: [testbed-node-0] 2026-02-08 06:00:32.362611 | orchestrator | 2026-02-08 06:00:32.362621 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-02-08 06:00:32.362630 | orchestrator | Sunday 08 February 2026 06:00:31 +0000 (0:00:01.630) 0:09:29.207 ******* 2026-02-08 06:00:32.362663 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-0 2026-02-08 06:00:32.362674 | orchestrator | 2026-02-08 06:00:32.362684 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-02-08 06:00:32.362693 | orchestrator | Sunday 08 February 2026 06:00:31 +0000 (0:00:00.554) 0:09:29.762 ******* 2026-02-08 06:00:32.362703 | orchestrator | ok: [testbed-node-0] 2026-02-08 06:00:32.362713 | orchestrator | 2026-02-08 06:00:32.362723 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-02-08 06:00:32.362733 | orchestrator | Sunday 08 February 2026 06:00:32 +0000 (0:00:00.497) 0:09:30.259 ******* 2026-02-08 06:00:32.362742 | orchestrator | ok: [testbed-node-0] 2026-02-08 06:00:32.362752 | orchestrator | 2026-02-08 06:00:32.362770 
| orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-02-08 06:00:41.522809 | orchestrator | Sunday 08 February 2026 06:00:32 +0000 (0:00:00.137) 0:09:30.397 ******* 2026-02-08 06:00:41.522933 | orchestrator | ok: [testbed-node-0] 2026-02-08 06:00:41.522958 | orchestrator | 2026-02-08 06:00:41.522978 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-02-08 06:00:41.522996 | orchestrator | Sunday 08 February 2026 06:00:32 +0000 (0:00:00.559) 0:09:30.957 ******* 2026-02-08 06:00:41.523013 | orchestrator | ok: [testbed-node-0] 2026-02-08 06:00:41.523029 | orchestrator | 2026-02-08 06:00:41.523040 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-02-08 06:00:41.523050 | orchestrator | Sunday 08 February 2026 06:00:33 +0000 (0:00:00.163) 0:09:31.120 ******* 2026-02-08 06:00:41.523060 | orchestrator | ok: [testbed-node-0] 2026-02-08 06:00:41.523070 | orchestrator | 2026-02-08 06:00:41.523096 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-02-08 06:00:41.523106 | orchestrator | Sunday 08 February 2026 06:00:33 +0000 (0:00:00.167) 0:09:31.287 ******* 2026-02-08 06:00:41.523116 | orchestrator | ok: [testbed-node-0] 2026-02-08 06:00:41.523147 | orchestrator | 2026-02-08 06:00:41.523157 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-02-08 06:00:41.523167 | orchestrator | Sunday 08 February 2026 06:00:33 +0000 (0:00:00.188) 0:09:31.476 ******* 2026-02-08 06:00:41.523177 | orchestrator | skipping: [testbed-node-0] 2026-02-08 06:00:41.523189 | orchestrator | 2026-02-08 06:00:41.523198 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2026-02-08 06:00:41.523208 | orchestrator | Sunday 08 February 2026 06:00:33 +0000 (0:00:00.162) 0:09:31.639 ******* 2026-02-08 
06:00:41.523218 | orchestrator | ok: [testbed-node-0] 2026-02-08 06:00:41.523227 | orchestrator | 2026-02-08 06:00:41.523237 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2026-02-08 06:00:41.523247 | orchestrator | Sunday 08 February 2026 06:00:33 +0000 (0:00:00.146) 0:09:31.785 ******* 2026-02-08 06:00:41.523257 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-02-08 06:00:41.523267 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-08 06:00:41.523276 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-08 06:00:41.523286 | orchestrator | 2026-02-08 06:00:41.523296 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2026-02-08 06:00:41.523306 | orchestrator | Sunday 08 February 2026 06:00:34 +0000 (0:00:01.008) 0:09:32.794 ******* 2026-02-08 06:00:41.523315 | orchestrator | ok: [testbed-node-0] 2026-02-08 06:00:41.523325 | orchestrator | 2026-02-08 06:00:41.523335 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-02-08 06:00:41.523344 | orchestrator | Sunday 08 February 2026 06:00:35 +0000 (0:00:00.255) 0:09:33.049 ******* 2026-02-08 06:00:41.523354 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-02-08 06:00:41.523363 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-08 06:00:41.523373 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-08 06:00:41.523383 | orchestrator | 2026-02-08 06:00:41.523392 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-02-08 06:00:41.523402 | orchestrator | Sunday 08 February 2026 06:00:37 +0000 (0:00:02.301) 0:09:35.351 ******* 2026-02-08 06:00:41.523411 | orchestrator | skipping: 
[testbed-node-0] => (item=testbed-node-0)  2026-02-08 06:00:41.523421 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-02-08 06:00:41.523431 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-02-08 06:00:41.523440 | orchestrator | skipping: [testbed-node-0] 2026-02-08 06:00:41.523450 | orchestrator | 2026-02-08 06:00:41.523459 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-02-08 06:00:41.523490 | orchestrator | Sunday 08 February 2026 06:00:38 +0000 (0:00:00.774) 0:09:36.125 ******* 2026-02-08 06:00:41.523501 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-02-08 06:00:41.523514 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-02-08 06:00:41.523524 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-02-08 06:00:41.523534 | orchestrator | skipping: [testbed-node-0] 2026-02-08 06:00:41.523544 | orchestrator | 2026-02-08 06:00:41.523553 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-02-08 06:00:41.523563 | orchestrator | Sunday 08 February 2026 06:00:39 +0000 (0:00:01.041) 0:09:37.167 ******* 2026-02-08 06:00:41.523584 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | 
bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-08 06:00:41.523615 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-08 06:00:41.523631 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-08 06:00:41.523676 | orchestrator | skipping: [testbed-node-0] 2026-02-08 06:00:41.523687 | orchestrator | 2026-02-08 06:00:41.523697 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-02-08 06:00:41.523707 | orchestrator | Sunday 08 February 2026 06:00:39 +0000 (0:00:00.485) 0:09:37.652 ******* 2026-02-08 06:00:41.523719 | orchestrator | ok: [testbed-node-0] => (item={'changed': False, 'stdout': 'd0204e5b0336', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-02-08 06:00:35.590218', 'end': '2026-02-08 06:00:35.642071', 'delta': '0:00:00.051853', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 
'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['d0204e5b0336'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2026-02-08 06:00:41.523732 | orchestrator | ok: [testbed-node-0] => (item={'changed': False, 'stdout': 'c9ff2bec9773', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-02-08 06:00:36.505372', 'end': '2026-02-08 06:00:36.573626', 'delta': '0:00:00.068254', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['c9ff2bec9773'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2026-02-08 06:00:41.523743 | orchestrator | ok: [testbed-node-0] => (item={'changed': False, 'stdout': '26deff989c40', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-02-08 06:00:37.085982', 'end': '2026-02-08 06:00:37.138442', 'delta': '0:00:00.052460', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['26deff989c40'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2026-02-08 06:00:41.523753 | orchestrator | 2026-02-08 06:00:41.523763 | orchestrator | TASK [ceph-facts : 
Set_fact _container_exec_cmd] ******************************* 2026-02-08 06:00:41.523780 | orchestrator | Sunday 08 February 2026 06:00:39 +0000 (0:00:00.225) 0:09:37.878 ******* 2026-02-08 06:00:41.523789 | orchestrator | ok: [testbed-node-0] 2026-02-08 06:00:41.523799 | orchestrator | 2026-02-08 06:00:41.523809 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-02-08 06:00:41.523818 | orchestrator | Sunday 08 February 2026 06:00:40 +0000 (0:00:00.304) 0:09:38.182 ******* 2026-02-08 06:00:41.523828 | orchestrator | skipping: [testbed-node-0] 2026-02-08 06:00:41.523838 | orchestrator | 2026-02-08 06:00:41.523847 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2026-02-08 06:00:41.523857 | orchestrator | Sunday 08 February 2026 06:00:40 +0000 (0:00:00.255) 0:09:38.437 ******* 2026-02-08 06:00:41.523942 | orchestrator | ok: [testbed-node-0] 2026-02-08 06:00:41.523952 | orchestrator | 2026-02-08 06:00:41.523962 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2026-02-08 06:00:41.523972 | orchestrator | Sunday 08 February 2026 06:00:40 +0000 (0:00:00.142) 0:09:38.579 ******* 2026-02-08 06:00:41.523981 | orchestrator | ok: [testbed-node-0] 2026-02-08 06:00:41.523991 | orchestrator | 2026-02-08 06:00:41.524001 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-02-08 06:00:41.524019 | orchestrator | Sunday 08 February 2026 06:00:41 +0000 (0:00:00.983) 0:09:39.563 ******* 2026-02-08 06:00:43.620566 | orchestrator | ok: [testbed-node-0] 2026-02-08 06:00:43.620705 | orchestrator | 2026-02-08 06:00:43.620724 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-02-08 06:00:43.620736 | orchestrator | Sunday 08 February 2026 06:00:41 +0000 (0:00:00.156) 0:09:39.720 ******* 2026-02-08 06:00:43.620748 | orchestrator | skipping: 
[testbed-node-0] 2026-02-08 06:00:43.620766 | orchestrator | 2026-02-08 06:00:43.620784 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-02-08 06:00:43.620800 | orchestrator | Sunday 08 February 2026 06:00:41 +0000 (0:00:00.135) 0:09:39.855 ******* 2026-02-08 06:00:43.620817 | orchestrator | skipping: [testbed-node-0] 2026-02-08 06:00:43.620834 | orchestrator | 2026-02-08 06:00:43.620872 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-02-08 06:00:43.620884 | orchestrator | Sunday 08 February 2026 06:00:42 +0000 (0:00:00.248) 0:09:40.104 ******* 2026-02-08 06:00:43.620894 | orchestrator | skipping: [testbed-node-0] 2026-02-08 06:00:43.620904 | orchestrator | 2026-02-08 06:00:43.620915 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-02-08 06:00:43.620931 | orchestrator | Sunday 08 February 2026 06:00:42 +0000 (0:00:00.125) 0:09:40.229 ******* 2026-02-08 06:00:43.620948 | orchestrator | skipping: [testbed-node-0] 2026-02-08 06:00:43.620965 | orchestrator | 2026-02-08 06:00:43.620981 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-02-08 06:00:43.620998 | orchestrator | Sunday 08 February 2026 06:00:42 +0000 (0:00:00.126) 0:09:40.356 ******* 2026-02-08 06:00:43.621014 | orchestrator | skipping: [testbed-node-0] 2026-02-08 06:00:43.621031 | orchestrator | 2026-02-08 06:00:43.621041 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-02-08 06:00:43.621056 | orchestrator | Sunday 08 February 2026 06:00:42 +0000 (0:00:00.152) 0:09:40.508 ******* 2026-02-08 06:00:43.621074 | orchestrator | skipping: [testbed-node-0] 2026-02-08 06:00:43.621091 | orchestrator | 2026-02-08 06:00:43.621110 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-02-08 06:00:43.621128 | 
orchestrator | Sunday 08 February 2026 06:00:42 +0000 (0:00:00.141) 0:09:40.650 ******* 2026-02-08 06:00:43.621146 | orchestrator | skipping: [testbed-node-0] 2026-02-08 06:00:43.621164 | orchestrator | 2026-02-08 06:00:43.621181 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-02-08 06:00:43.621198 | orchestrator | Sunday 08 February 2026 06:00:43 +0000 (0:00:00.447) 0:09:41.098 ******* 2026-02-08 06:00:43.621210 | orchestrator | skipping: [testbed-node-0] 2026-02-08 06:00:43.621226 | orchestrator | 2026-02-08 06:00:43.621243 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-02-08 06:00:43.621287 | orchestrator | Sunday 08 February 2026 06:00:43 +0000 (0:00:00.149) 0:09:41.247 ******* 2026-02-08 06:00:43.621305 | orchestrator | skipping: [testbed-node-0] 2026-02-08 06:00:43.621320 | orchestrator | 2026-02-08 06:00:43.621332 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-02-08 06:00:43.621344 | orchestrator | Sunday 08 February 2026 06:00:43 +0000 (0:00:00.139) 0:09:41.387 ******* 2026-02-08 06:00:43.621359 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-08 06:00:43.621375 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': 
'0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-08 06:00:43.621388 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-08 06:00:43.621403 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-08-02-32-50-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-02-08 06:00:43.621438 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-08 06:00:43.621457 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': 
'512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-08 06:00:43.621470 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-08 06:00:43.621484 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3e566a5b-bb0c-4ece-9641-6f7efc673353', 'scsi-SQEMU_QEMU_HARDDISK_3e566a5b-bb0c-4ece-9641-6f7efc673353'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '3e566a5b', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3e566a5b-bb0c-4ece-9641-6f7efc673353-part16', 'scsi-SQEMU_QEMU_HARDDISK_3e566a5b-bb0c-4ece-9641-6f7efc673353-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3e566a5b-bb0c-4ece-9641-6f7efc673353-part14', 'scsi-SQEMU_QEMU_HARDDISK_3e566a5b-bb0c-4ece-9641-6f7efc673353-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3e566a5b-bb0c-4ece-9641-6f7efc673353-part15', 'scsi-SQEMU_QEMU_HARDDISK_3e566a5b-bb0c-4ece-9641-6f7efc673353-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 
'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3e566a5b-bb0c-4ece-9641-6f7efc673353-part1', 'scsi-SQEMU_QEMU_HARDDISK_3e566a5b-bb0c-4ece-9641-6f7efc673353-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-02-08 06:00:43.621505 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-08 06:00:43.621515 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-08 06:00:43.621525 | orchestrator | skipping: [testbed-node-0] 2026-02-08 06:00:43.621535 | orchestrator | 2026-02-08 06:00:43.621554 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-02-08 06:00:43.874098 | orchestrator | Sunday 08 February 2026 06:00:43 +0000 (0:00:00.264) 0:09:41.652 ******* 2026-02-08 06:00:43.874223 | 
orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-08 06:00:43.874244 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-08 06:00:43.874278 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-08 06:00:43.874292 | orchestrator | skipping: 
[testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-08-02-32-50-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-08 06:00:43.874304 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-08 06:00:43.874316 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 
'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-08 06:00:43.874353 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-08 06:00:43.874369 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3e566a5b-bb0c-4ece-9641-6f7efc673353', 'scsi-SQEMU_QEMU_HARDDISK_3e566a5b-bb0c-4ece-9641-6f7efc673353'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '3e566a5b', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3e566a5b-bb0c-4ece-9641-6f7efc673353-part16', 'scsi-SQEMU_QEMU_HARDDISK_3e566a5b-bb0c-4ece-9641-6f7efc673353-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3e566a5b-bb0c-4ece-9641-6f7efc673353-part14', 'scsi-SQEMU_QEMU_HARDDISK_3e566a5b-bb0c-4ece-9641-6f7efc673353-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': 
'2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3e566a5b-bb0c-4ece-9641-6f7efc673353-part15', 'scsi-SQEMU_QEMU_HARDDISK_3e566a5b-bb0c-4ece-9641-6f7efc673353-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3e566a5b-bb0c-4ece-9641-6f7efc673353-part1', 'scsi-SQEMU_QEMU_HARDDISK_3e566a5b-bb0c-4ece-9641-6f7efc673353-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-08 06:00:43.874391 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-08 06:00:43.874403 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-08 06:00:43.874415 | orchestrator | skipping: [testbed-node-0] 2026-02-08 06:00:43.874429 | orchestrator | 2026-02-08 06:00:43.874449 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-02-08 06:00:55.745141 | orchestrator | Sunday 08 February 2026 06:00:43 +0000 (0:00:00.257) 0:09:41.909 ******* 2026-02-08 06:00:55.745285 | orchestrator | ok: [testbed-node-0] 2026-02-08 06:00:55.745313 | orchestrator | 2026-02-08 06:00:55.745334 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2026-02-08 06:00:55.745352 | orchestrator 
| Sunday 08 February 2026 06:00:44 +0000 (0:00:00.514) 0:09:42.424 ******* 2026-02-08 06:00:55.745371 | orchestrator | ok: [testbed-node-0] 2026-02-08 06:00:55.745389 | orchestrator | 2026-02-08 06:00:55.745430 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-02-08 06:00:55.745476 | orchestrator | Sunday 08 February 2026 06:00:44 +0000 (0:00:00.133) 0:09:42.558 ******* 2026-02-08 06:00:55.745488 | orchestrator | ok: [testbed-node-0] 2026-02-08 06:00:55.745499 | orchestrator | 2026-02-08 06:00:55.745510 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-02-08 06:00:55.745521 | orchestrator | Sunday 08 February 2026 06:00:45 +0000 (0:00:00.499) 0:09:43.057 ******* 2026-02-08 06:00:55.745532 | orchestrator | skipping: [testbed-node-0] 2026-02-08 06:00:55.745545 | orchestrator | 2026-02-08 06:00:55.745556 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-02-08 06:00:55.745567 | orchestrator | Sunday 08 February 2026 06:00:45 +0000 (0:00:00.132) 0:09:43.189 ******* 2026-02-08 06:00:55.745578 | orchestrator | skipping: [testbed-node-0] 2026-02-08 06:00:55.745589 | orchestrator | 2026-02-08 06:00:55.745600 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-02-08 06:00:55.745610 | orchestrator | Sunday 08 February 2026 06:00:45 +0000 (0:00:00.264) 0:09:43.454 ******* 2026-02-08 06:00:55.745621 | orchestrator | skipping: [testbed-node-0] 2026-02-08 06:00:55.745632 | orchestrator | 2026-02-08 06:00:55.745673 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-02-08 06:00:55.745693 | orchestrator | Sunday 08 February 2026 06:00:45 +0000 (0:00:00.163) 0:09:43.617 ******* 2026-02-08 06:00:55.745711 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-02-08 06:00:55.745731 | orchestrator | ok: 
[testbed-node-0] => (item=testbed-node-1) 2026-02-08 06:00:55.745749 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2026-02-08 06:00:55.745766 | orchestrator | 2026-02-08 06:00:55.745787 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-02-08 06:00:55.745805 | orchestrator | Sunday 08 February 2026 06:00:46 +0000 (0:00:01.041) 0:09:44.659 ******* 2026-02-08 06:00:55.745824 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-02-08 06:00:55.745836 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-02-08 06:00:55.745847 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-02-08 06:00:55.745858 | orchestrator | skipping: [testbed-node-0] 2026-02-08 06:00:55.745869 | orchestrator | 2026-02-08 06:00:55.745879 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-02-08 06:00:55.745890 | orchestrator | Sunday 08 February 2026 06:00:46 +0000 (0:00:00.169) 0:09:44.829 ******* 2026-02-08 06:00:55.745901 | orchestrator | skipping: [testbed-node-0] 2026-02-08 06:00:55.745912 | orchestrator | 2026-02-08 06:00:55.745923 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-02-08 06:00:55.745934 | orchestrator | Sunday 08 February 2026 06:00:47 +0000 (0:00:00.458) 0:09:45.287 ******* 2026-02-08 06:00:55.745945 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-02-08 06:00:55.745955 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-08 06:00:55.745967 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-08 06:00:55.745978 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-02-08 06:00:55.745990 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => 
(item=testbed-node-4) 2026-02-08 06:00:55.746001 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-02-08 06:00:55.746011 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-02-08 06:00:55.746088 | orchestrator | 2026-02-08 06:00:55.746100 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-02-08 06:00:55.746110 | orchestrator | Sunday 08 February 2026 06:00:48 +0000 (0:00:00.857) 0:09:46.145 ******* 2026-02-08 06:00:55.746121 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-02-08 06:00:55.746132 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-08 06:00:55.746143 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-08 06:00:55.746166 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-02-08 06:00:55.746177 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-02-08 06:00:55.746188 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-02-08 06:00:55.746199 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-02-08 06:00:55.746210 | orchestrator | 2026-02-08 06:00:55.746221 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-02-08 06:00:55.746230 | orchestrator | Sunday 08 February 2026 06:00:49 +0000 (0:00:01.701) 0:09:47.846 ******* 2026-02-08 06:00:55.746240 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0 2026-02-08 06:00:55.746250 | orchestrator | 2026-02-08 06:00:55.746260 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-02-08 06:00:55.746270 
| orchestrator | Sunday 08 February 2026 06:00:49 +0000 (0:00:00.200) 0:09:48.046 ******* 2026-02-08 06:00:55.746280 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0 2026-02-08 06:00:55.746289 | orchestrator | 2026-02-08 06:00:55.746320 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-02-08 06:00:55.746330 | orchestrator | Sunday 08 February 2026 06:00:50 +0000 (0:00:00.291) 0:09:48.337 ******* 2026-02-08 06:00:55.746340 | orchestrator | ok: [testbed-node-0] 2026-02-08 06:00:55.746350 | orchestrator | 2026-02-08 06:00:55.746360 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-02-08 06:00:55.746369 | orchestrator | Sunday 08 February 2026 06:00:50 +0000 (0:00:00.529) 0:09:48.867 ******* 2026-02-08 06:00:55.746386 | orchestrator | skipping: [testbed-node-0] 2026-02-08 06:00:55.746396 | orchestrator | 2026-02-08 06:00:55.746406 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-02-08 06:00:55.746416 | orchestrator | Sunday 08 February 2026 06:00:50 +0000 (0:00:00.141) 0:09:49.009 ******* 2026-02-08 06:00:55.746426 | orchestrator | skipping: [testbed-node-0] 2026-02-08 06:00:55.746435 | orchestrator | 2026-02-08 06:00:55.746445 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-02-08 06:00:55.746455 | orchestrator | Sunday 08 February 2026 06:00:51 +0000 (0:00:00.141) 0:09:49.151 ******* 2026-02-08 06:00:55.746464 | orchestrator | skipping: [testbed-node-0] 2026-02-08 06:00:55.746474 | orchestrator | 2026-02-08 06:00:55.746484 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-02-08 06:00:55.746493 | orchestrator | Sunday 08 February 2026 06:00:51 +0000 (0:00:00.167) 0:09:49.319 ******* 2026-02-08 06:00:55.746503 | orchestrator | ok: [testbed-node-0] 
2026-02-08 06:00:55.746513 | orchestrator | 2026-02-08 06:00:55.746522 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-02-08 06:00:55.746532 | orchestrator | Sunday 08 February 2026 06:00:51 +0000 (0:00:00.573) 0:09:49.892 ******* 2026-02-08 06:00:55.746541 | orchestrator | skipping: [testbed-node-0] 2026-02-08 06:00:55.746551 | orchestrator | 2026-02-08 06:00:55.746561 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-02-08 06:00:55.746570 | orchestrator | Sunday 08 February 2026 06:00:52 +0000 (0:00:00.424) 0:09:50.317 ******* 2026-02-08 06:00:55.746580 | orchestrator | skipping: [testbed-node-0] 2026-02-08 06:00:55.746590 | orchestrator | 2026-02-08 06:00:55.746600 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-02-08 06:00:55.746609 | orchestrator | Sunday 08 February 2026 06:00:52 +0000 (0:00:00.153) 0:09:50.470 ******* 2026-02-08 06:00:55.746619 | orchestrator | ok: [testbed-node-0] 2026-02-08 06:00:55.746629 | orchestrator | 2026-02-08 06:00:55.746638 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-02-08 06:00:55.746692 | orchestrator | Sunday 08 February 2026 06:00:53 +0000 (0:00:00.583) 0:09:51.054 ******* 2026-02-08 06:00:55.746709 | orchestrator | ok: [testbed-node-0] 2026-02-08 06:00:55.746719 | orchestrator | 2026-02-08 06:00:55.746728 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-02-08 06:00:55.746738 | orchestrator | Sunday 08 February 2026 06:00:53 +0000 (0:00:00.600) 0:09:51.654 ******* 2026-02-08 06:00:55.746748 | orchestrator | skipping: [testbed-node-0] 2026-02-08 06:00:55.746758 | orchestrator | 2026-02-08 06:00:55.746767 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-02-08 06:00:55.746777 | orchestrator | Sunday 08 
February 2026 06:00:53 +0000 (0:00:00.139) 0:09:51.794 ******* 2026-02-08 06:00:55.746786 | orchestrator | ok: [testbed-node-0] 2026-02-08 06:00:55.746796 | orchestrator | 2026-02-08 06:00:55.746806 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-02-08 06:00:55.746816 | orchestrator | Sunday 08 February 2026 06:00:53 +0000 (0:00:00.166) 0:09:51.961 ******* 2026-02-08 06:00:55.746825 | orchestrator | skipping: [testbed-node-0] 2026-02-08 06:00:55.746835 | orchestrator | 2026-02-08 06:00:55.746844 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-02-08 06:00:55.746854 | orchestrator | Sunday 08 February 2026 06:00:54 +0000 (0:00:00.139) 0:09:52.101 ******* 2026-02-08 06:00:55.746864 | orchestrator | skipping: [testbed-node-0] 2026-02-08 06:00:55.746873 | orchestrator | 2026-02-08 06:00:55.746883 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-02-08 06:00:55.746892 | orchestrator | Sunday 08 February 2026 06:00:54 +0000 (0:00:00.147) 0:09:52.248 ******* 2026-02-08 06:00:55.746902 | orchestrator | skipping: [testbed-node-0] 2026-02-08 06:00:55.746912 | orchestrator | 2026-02-08 06:00:55.746922 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-02-08 06:00:55.746931 | orchestrator | Sunday 08 February 2026 06:00:54 +0000 (0:00:00.178) 0:09:52.427 ******* 2026-02-08 06:00:55.746941 | orchestrator | skipping: [testbed-node-0] 2026-02-08 06:00:55.746951 | orchestrator | 2026-02-08 06:00:55.746960 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-02-08 06:00:55.746970 | orchestrator | Sunday 08 February 2026 06:00:54 +0000 (0:00:00.125) 0:09:52.552 ******* 2026-02-08 06:00:55.746980 | orchestrator | skipping: [testbed-node-0] 2026-02-08 06:00:55.746989 | orchestrator | 2026-02-08 06:00:55.746999 | orchestrator 
| TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-02-08 06:00:55.747009 | orchestrator | Sunday 08 February 2026 06:00:54 +0000 (0:00:00.150) 0:09:52.703 ******* 2026-02-08 06:00:55.747018 | orchestrator | ok: [testbed-node-0] 2026-02-08 06:00:55.747028 | orchestrator | 2026-02-08 06:00:55.747038 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-02-08 06:00:55.747047 | orchestrator | Sunday 08 February 2026 06:00:54 +0000 (0:00:00.161) 0:09:52.864 ******* 2026-02-08 06:00:55.747057 | orchestrator | ok: [testbed-node-0] 2026-02-08 06:00:55.747067 | orchestrator | 2026-02-08 06:00:55.747076 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-02-08 06:00:55.747086 | orchestrator | Sunday 08 February 2026 06:00:54 +0000 (0:00:00.167) 0:09:53.032 ******* 2026-02-08 06:00:55.747096 | orchestrator | ok: [testbed-node-0] 2026-02-08 06:00:55.747105 | orchestrator | 2026-02-08 06:00:55.747115 | orchestrator | TASK [ceph-common : Include configure_repository.yml] ************************** 2026-02-08 06:00:55.747125 | orchestrator | Sunday 08 February 2026 06:00:55 +0000 (0:00:00.563) 0:09:53.596 ******* 2026-02-08 06:00:55.747134 | orchestrator | skipping: [testbed-node-0] 2026-02-08 06:00:55.747144 | orchestrator | 2026-02-08 06:00:55.747154 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] ************** 2026-02-08 06:00:55.747170 | orchestrator | Sunday 08 February 2026 06:00:55 +0000 (0:00:00.184) 0:09:53.781 ******* 2026-02-08 06:01:07.997618 | orchestrator | skipping: [testbed-node-0] 2026-02-08 06:01:07.997769 | orchestrator | 2026-02-08 06:01:07.997789 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] **************** 2026-02-08 06:01:07.997803 | orchestrator | Sunday 08 February 2026 06:00:55 +0000 (0:00:00.136) 0:09:53.917 ******* 2026-02-08 06:01:07.997838 | 
orchestrator | skipping: [testbed-node-0] 2026-02-08 06:01:07.997850 | orchestrator | 2026-02-08 06:01:07.997861 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ******************** 2026-02-08 06:01:07.997887 | orchestrator | Sunday 08 February 2026 06:00:56 +0000 (0:00:00.145) 0:09:54.063 ******* 2026-02-08 06:01:07.997898 | orchestrator | skipping: [testbed-node-0] 2026-02-08 06:01:07.997909 | orchestrator | 2026-02-08 06:01:07.997920 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] *************** 2026-02-08 06:01:07.997932 | orchestrator | Sunday 08 February 2026 06:00:56 +0000 (0:00:00.134) 0:09:54.198 ******* 2026-02-08 06:01:07.997943 | orchestrator | skipping: [testbed-node-0] 2026-02-08 06:01:07.997954 | orchestrator | 2026-02-08 06:01:07.997983 | orchestrator | TASK [ceph-common : Get ceph version] ****************************************** 2026-02-08 06:01:07.998006 | orchestrator | Sunday 08 February 2026 06:00:56 +0000 (0:00:00.154) 0:09:54.352 ******* 2026-02-08 06:01:07.998068 | orchestrator | skipping: [testbed-node-0] 2026-02-08 06:01:07.998083 | orchestrator | 2026-02-08 06:01:07.998094 | orchestrator | TASK [ceph-common : Set_fact ceph_version] ************************************* 2026-02-08 06:01:07.998104 | orchestrator | Sunday 08 February 2026 06:00:56 +0000 (0:00:00.143) 0:09:54.496 ******* 2026-02-08 06:01:07.998188 | orchestrator | skipping: [testbed-node-0] 2026-02-08 06:01:07.998204 | orchestrator | 2026-02-08 06:01:07.998247 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] *** 2026-02-08 06:01:07.998262 | orchestrator | Sunday 08 February 2026 06:00:56 +0000 (0:00:00.133) 0:09:54.630 ******* 2026-02-08 06:01:07.998276 | orchestrator | skipping: [testbed-node-0] 2026-02-08 06:01:07.998289 | orchestrator | 2026-02-08 06:01:07.998300 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] 
************************* 2026-02-08 06:01:07.998311 | orchestrator | Sunday 08 February 2026 06:00:56 +0000 (0:00:00.146) 0:09:54.777 ******* 2026-02-08 06:01:07.998322 | orchestrator | skipping: [testbed-node-0] 2026-02-08 06:01:07.998333 | orchestrator | 2026-02-08 06:01:07.998344 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************ 2026-02-08 06:01:07.998355 | orchestrator | Sunday 08 February 2026 06:00:56 +0000 (0:00:00.141) 0:09:54.918 ******* 2026-02-08 06:01:07.998366 | orchestrator | skipping: [testbed-node-0] 2026-02-08 06:01:07.998377 | orchestrator | 2026-02-08 06:01:07.998388 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ******************** 2026-02-08 06:01:07.998399 | orchestrator | Sunday 08 February 2026 06:00:57 +0000 (0:00:00.157) 0:09:55.075 ******* 2026-02-08 06:01:07.998410 | orchestrator | skipping: [testbed-node-0] 2026-02-08 06:01:07.998421 | orchestrator | 2026-02-08 06:01:07.998432 | orchestrator | TASK [ceph-common : Include selinux.yml] *************************************** 2026-02-08 06:01:07.998443 | orchestrator | Sunday 08 February 2026 06:00:57 +0000 (0:00:00.141) 0:09:55.217 ******* 2026-02-08 06:01:07.998454 | orchestrator | skipping: [testbed-node-0] 2026-02-08 06:01:07.998464 | orchestrator | 2026-02-08 06:01:07.998475 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] *************** 2026-02-08 06:01:07.998487 | orchestrator | Sunday 08 February 2026 06:00:57 +0000 (0:00:00.516) 0:09:55.734 ******* 2026-02-08 06:01:07.998498 | orchestrator | ok: [testbed-node-0] 2026-02-08 06:01:07.998510 | orchestrator | 2026-02-08 06:01:07.998521 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ****************************** 2026-02-08 06:01:07.998532 | orchestrator | Sunday 08 February 2026 06:00:58 +0000 (0:00:00.970) 0:09:56.704 ******* 2026-02-08 06:01:07.998543 | orchestrator | ok: [testbed-node-0] 2026-02-08 
06:01:07.998554 | orchestrator | 2026-02-08 06:01:07.998565 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] *********************** 2026-02-08 06:01:07.998576 | orchestrator | Sunday 08 February 2026 06:01:00 +0000 (0:00:01.549) 0:09:58.254 ******* 2026-02-08 06:01:07.998587 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-0 2026-02-08 06:01:07.998599 | orchestrator | 2026-02-08 06:01:07.998610 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************ 2026-02-08 06:01:07.998621 | orchestrator | Sunday 08 February 2026 06:01:00 +0000 (0:00:00.207) 0:09:58.461 ******* 2026-02-08 06:01:07.998643 | orchestrator | skipping: [testbed-node-0] 2026-02-08 06:01:07.998693 | orchestrator | 2026-02-08 06:01:07.998705 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] **************** 2026-02-08 06:01:07.998716 | orchestrator | Sunday 08 February 2026 06:01:00 +0000 (0:00:00.166) 0:09:58.628 ******* 2026-02-08 06:01:07.998726 | orchestrator | skipping: [testbed-node-0] 2026-02-08 06:01:07.998737 | orchestrator | 2026-02-08 06:01:07.998748 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] ************************** 2026-02-08 06:01:07.998759 | orchestrator | Sunday 08 February 2026 06:01:00 +0000 (0:00:00.132) 0:09:58.761 ******* 2026-02-08 06:01:07.998770 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-02-08 06:01:07.998781 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-02-08 06:01:07.998792 | orchestrator | 2026-02-08 06:01:07.998803 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ******************** 2026-02-08 06:01:07.998814 | orchestrator | Sunday 08 February 2026 06:01:01 +0000 (0:00:00.850) 0:09:59.611 ******* 2026-02-08 06:01:07.998828 | orchestrator | ok: 
[testbed-node-0] 2026-02-08 06:01:07.998846 | orchestrator | 2026-02-08 06:01:07.998863 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************ 2026-02-08 06:01:07.998880 | orchestrator | Sunday 08 February 2026 06:01:02 +0000 (0:00:00.495) 0:10:00.107 ******* 2026-02-08 06:01:07.998899 | orchestrator | skipping: [testbed-node-0] 2026-02-08 06:01:07.998918 | orchestrator | 2026-02-08 06:01:07.998934 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ******************** 2026-02-08 06:01:07.998951 | orchestrator | Sunday 08 February 2026 06:01:02 +0000 (0:00:00.161) 0:10:00.268 ******* 2026-02-08 06:01:07.998970 | orchestrator | skipping: [testbed-node-0] 2026-02-08 06:01:07.998989 | orchestrator | 2026-02-08 06:01:07.999037 | orchestrator | TASK [ceph-container-common : Include registry.yml] **************************** 2026-02-08 06:01:07.999056 | orchestrator | Sunday 08 February 2026 06:01:02 +0000 (0:00:00.138) 0:10:00.407 ******* 2026-02-08 06:01:07.999075 | orchestrator | skipping: [testbed-node-0] 2026-02-08 06:01:07.999089 | orchestrator | 2026-02-08 06:01:07.999107 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] ************************* 2026-02-08 06:01:07.999125 | orchestrator | Sunday 08 February 2026 06:01:02 +0000 (0:00:00.138) 0:10:00.546 ******* 2026-02-08 06:01:07.999154 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-0 2026-02-08 06:01:07.999174 | orchestrator | 2026-02-08 06:01:07.999193 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ******************** 2026-02-08 06:01:07.999209 | orchestrator | Sunday 08 February 2026 06:01:03 +0000 (0:00:00.518) 0:10:01.064 ******* 2026-02-08 06:01:07.999220 | orchestrator | ok: [testbed-node-0] 2026-02-08 06:01:07.999231 | orchestrator | 2026-02-08 06:01:07.999241 | orchestrator | TASK [ceph-container-common : Pulling 
alertmanager/prometheus/grafana container images] *** 2026-02-08 06:01:07.999252 | orchestrator | Sunday 08 February 2026 06:01:03 +0000 (0:00:00.736) 0:10:01.801 ******* 2026-02-08 06:01:07.999262 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-02-08 06:01:07.999273 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/prometheus:v2.7.2)  2026-02-08 06:01:07.999284 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/grafana/grafana:6.7.4)  2026-02-08 06:01:07.999295 | orchestrator | skipping: [testbed-node-0] 2026-02-08 06:01:07.999306 | orchestrator | 2026-02-08 06:01:07.999317 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] *********** 2026-02-08 06:01:07.999327 | orchestrator | Sunday 08 February 2026 06:01:03 +0000 (0:00:00.176) 0:10:01.978 ******* 2026-02-08 06:01:07.999338 | orchestrator | skipping: [testbed-node-0] 2026-02-08 06:01:07.999349 | orchestrator | 2026-02-08 06:01:07.999359 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] ********************* 2026-02-08 06:01:07.999370 | orchestrator | Sunday 08 February 2026 06:01:04 +0000 (0:00:00.144) 0:10:02.123 ******* 2026-02-08 06:01:07.999394 | orchestrator | skipping: [testbed-node-0] 2026-02-08 06:01:07.999405 | orchestrator | 2026-02-08 06:01:07.999416 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************ 2026-02-08 06:01:07.999426 | orchestrator | Sunday 08 February 2026 06:01:04 +0000 (0:00:00.183) 0:10:02.306 ******* 2026-02-08 06:01:07.999439 | orchestrator | skipping: [testbed-node-0] 2026-02-08 06:01:07.999457 | orchestrator | 2026-02-08 06:01:07.999475 | orchestrator | TASK [ceph-container-common : Load ceph dev image] ***************************** 2026-02-08 06:01:07.999492 | orchestrator | Sunday 08 February 2026 06:01:04 +0000 (0:00:00.170) 0:10:02.476 ******* 2026-02-08 06:01:07.999511 | orchestrator | skipping: 
[testbed-node-0] 2026-02-08 06:01:07.999530 | orchestrator | 2026-02-08 06:01:07.999548 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ****************** 2026-02-08 06:01:07.999563 | orchestrator | Sunday 08 February 2026 06:01:04 +0000 (0:00:00.153) 0:10:02.629 ******* 2026-02-08 06:01:07.999575 | orchestrator | skipping: [testbed-node-0] 2026-02-08 06:01:07.999585 | orchestrator | 2026-02-08 06:01:07.999596 | orchestrator | TASK [ceph-container-common : Get ceph version] ******************************** 2026-02-08 06:01:07.999607 | orchestrator | Sunday 08 February 2026 06:01:04 +0000 (0:00:00.169) 0:10:02.799 ******* 2026-02-08 06:01:07.999618 | orchestrator | ok: [testbed-node-0] 2026-02-08 06:01:07.999629 | orchestrator | 2026-02-08 06:01:07.999640 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] *** 2026-02-08 06:01:07.999707 | orchestrator | Sunday 08 February 2026 06:01:06 +0000 (0:00:01.566) 0:10:04.366 ******* 2026-02-08 06:01:07.999720 | orchestrator | ok: [testbed-node-0] 2026-02-08 06:01:07.999731 | orchestrator | 2026-02-08 06:01:07.999742 | orchestrator | TASK [ceph-container-common : Include release.yml] ***************************** 2026-02-08 06:01:07.999752 | orchestrator | Sunday 08 February 2026 06:01:06 +0000 (0:00:00.143) 0:10:04.509 ******* 2026-02-08 06:01:07.999763 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-0 2026-02-08 06:01:07.999774 | orchestrator | 2026-02-08 06:01:07.999785 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] ********************* 2026-02-08 06:01:07.999796 | orchestrator | Sunday 08 February 2026 06:01:06 +0000 (0:00:00.249) 0:10:04.758 ******* 2026-02-08 06:01:07.999806 | orchestrator | skipping: [testbed-node-0] 2026-02-08 06:01:07.999817 | orchestrator | 2026-02-08 06:01:07.999828 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] 
******************** 2026-02-08 06:01:07.999839 | orchestrator | Sunday 08 February 2026 06:01:06 +0000 (0:00:00.153) 0:10:04.912 ******* 2026-02-08 06:01:07.999850 | orchestrator | skipping: [testbed-node-0] 2026-02-08 06:01:07.999861 | orchestrator | 2026-02-08 06:01:07.999872 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ****************** 2026-02-08 06:01:07.999882 | orchestrator | Sunday 08 February 2026 06:01:07 +0000 (0:00:00.472) 0:10:05.385 ******* 2026-02-08 06:01:07.999893 | orchestrator | skipping: [testbed-node-0] 2026-02-08 06:01:07.999904 | orchestrator | 2026-02-08 06:01:07.999915 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] ********************* 2026-02-08 06:01:07.999925 | orchestrator | Sunday 08 February 2026 06:01:07 +0000 (0:00:00.164) 0:10:05.550 ******* 2026-02-08 06:01:07.999936 | orchestrator | skipping: [testbed-node-0] 2026-02-08 06:01:07.999947 | orchestrator | 2026-02-08 06:01:07.999958 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ****************** 2026-02-08 06:01:07.999968 | orchestrator | Sunday 08 February 2026 06:01:07 +0000 (0:00:00.171) 0:10:05.721 ******* 2026-02-08 06:01:07.999979 | orchestrator | skipping: [testbed-node-0] 2026-02-08 06:01:07.999990 | orchestrator | 2026-02-08 06:01:08.000001 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] ******************* 2026-02-08 06:01:08.000012 | orchestrator | Sunday 08 February 2026 06:01:07 +0000 (0:00:00.150) 0:10:05.872 ******* 2026-02-08 06:01:08.000023 | orchestrator | skipping: [testbed-node-0] 2026-02-08 06:01:08.000034 | orchestrator | 2026-02-08 06:01:08.000045 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] ******************* 2026-02-08 06:01:08.000074 | orchestrator | Sunday 08 February 2026 06:01:07 +0000 (0:00:00.163) 0:10:06.035 ******* 2026-02-08 06:01:22.023474 | orchestrator | skipping: [testbed-node-0] 
2026-02-08 06:01:22.023569 | orchestrator | 2026-02-08 06:01:22.023581 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ******************** 2026-02-08 06:01:22.023591 | orchestrator | Sunday 08 February 2026 06:01:08 +0000 (0:00:00.169) 0:10:06.205 ******* 2026-02-08 06:01:22.023600 | orchestrator | skipping: [testbed-node-0] 2026-02-08 06:01:22.023608 | orchestrator | 2026-02-08 06:01:22.023616 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] ********************** 2026-02-08 06:01:22.023638 | orchestrator | Sunday 08 February 2026 06:01:08 +0000 (0:00:00.183) 0:10:06.389 ******* 2026-02-08 06:01:22.023646 | orchestrator | ok: [testbed-node-0] 2026-02-08 06:01:22.023678 | orchestrator | 2026-02-08 06:01:22.023686 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] ********************** 2026-02-08 06:01:22.023695 | orchestrator | Sunday 08 February 2026 06:01:08 +0000 (0:00:00.233) 0:10:06.622 ******* 2026-02-08 06:01:22.023703 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-0 2026-02-08 06:01:22.023712 | orchestrator | 2026-02-08 06:01:22.023720 | orchestrator | TASK [ceph-config : Create ceph initial directories] *************************** 2026-02-08 06:01:22.023728 | orchestrator | Sunday 08 February 2026 06:01:08 +0000 (0:00:00.202) 0:10:06.825 ******* 2026-02-08 06:01:22.023736 | orchestrator | ok: [testbed-node-0] => (item=/etc/ceph) 2026-02-08 06:01:22.023745 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/) 2026-02-08 06:01:22.023753 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/mon) 2026-02-08 06:01:22.023761 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/osd) 2026-02-08 06:01:22.023769 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/mds) 2026-02-08 06:01:22.023777 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/tmp) 2026-02-08 06:01:22.023785 | 
orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/crash) 2026-02-08 06:01:22.023794 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/radosgw) 2026-02-08 06:01:22.023802 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rgw) 2026-02-08 06:01:22.023810 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mgr) 2026-02-08 06:01:22.023819 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mds) 2026-02-08 06:01:22.023827 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-osd) 2026-02-08 06:01:22.023835 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd) 2026-02-08 06:01:22.023843 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-02-08 06:01:22.023851 | orchestrator | ok: [testbed-node-0] => (item=/var/run/ceph) 2026-02-08 06:01:22.023859 | orchestrator | ok: [testbed-node-0] => (item=/var/log/ceph) 2026-02-08 06:01:22.023867 | orchestrator | 2026-02-08 06:01:22.023876 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************ 2026-02-08 06:01:22.023884 | orchestrator | Sunday 08 February 2026 06:01:14 +0000 (0:00:05.978) 0:10:12.803 ******* 2026-02-08 06:01:22.023892 | orchestrator | skipping: [testbed-node-0] 2026-02-08 06:01:22.023900 | orchestrator | 2026-02-08 06:01:22.023908 | orchestrator | TASK [ceph-config : Reset num_osds] ******************************************** 2026-02-08 06:01:22.023916 | orchestrator | Sunday 08 February 2026 06:01:14 +0000 (0:00:00.133) 0:10:12.937 ******* 2026-02-08 06:01:22.023924 | orchestrator | skipping: [testbed-node-0] 2026-02-08 06:01:22.023932 | orchestrator | 2026-02-08 06:01:22.023940 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] ********************* 2026-02-08 06:01:22.023948 | orchestrator | Sunday 08 February 2026 06:01:15 +0000 (0:00:00.479) 0:10:13.417 ******* 2026-02-08 06:01:22.023956 | 
orchestrator | skipping: [testbed-node-0] 2026-02-08 06:01:22.023964 | orchestrator | 2026-02-08 06:01:22.023972 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ****************** 2026-02-08 06:01:22.023980 | orchestrator | Sunday 08 February 2026 06:01:15 +0000 (0:00:00.132) 0:10:13.549 ******* 2026-02-08 06:01:22.024007 | orchestrator | skipping: [testbed-node-0] 2026-02-08 06:01:22.024016 | orchestrator | 2026-02-08 06:01:22.024024 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] ********************************* 2026-02-08 06:01:22.024032 | orchestrator | Sunday 08 February 2026 06:01:15 +0000 (0:00:00.141) 0:10:13.690 ******* 2026-02-08 06:01:22.024042 | orchestrator | skipping: [testbed-node-0] 2026-02-08 06:01:22.024052 | orchestrator | 2026-02-08 06:01:22.024061 | orchestrator | TASK [ceph-config : Set_fact _devices] ***************************************** 2026-02-08 06:01:22.024070 | orchestrator | Sunday 08 February 2026 06:01:15 +0000 (0:00:00.195) 0:10:13.886 ******* 2026-02-08 06:01:22.024080 | orchestrator | skipping: [testbed-node-0] 2026-02-08 06:01:22.024089 | orchestrator | 2026-02-08 06:01:22.024099 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2026-02-08 06:01:22.024108 | orchestrator | Sunday 08 February 2026 06:01:16 +0000 (0:00:00.168) 0:10:14.055 ******* 2026-02-08 06:01:22.024118 | orchestrator | skipping: [testbed-node-0] 2026-02-08 06:01:22.024128 | orchestrator | 2026-02-08 06:01:22.024137 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2026-02-08 06:01:22.024147 | orchestrator | Sunday 08 February 2026 06:01:16 +0000 (0:00:00.157) 0:10:14.212 ******* 2026-02-08 06:01:22.024156 | orchestrator | skipping: [testbed-node-0] 2026-02-08 06:01:22.024165 | orchestrator | 2026-02-08 06:01:22.024175 | orchestrator | TASK [ceph-config : Set_fact 
num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2026-02-08 06:01:22.024185 | orchestrator | Sunday 08 February 2026 06:01:16 +0000 (0:00:00.143) 0:10:14.356 ******* 2026-02-08 06:01:22.024194 | orchestrator | skipping: [testbed-node-0] 2026-02-08 06:01:22.024204 | orchestrator | 2026-02-08 06:01:22.024214 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] *** 2026-02-08 06:01:22.024224 | orchestrator | Sunday 08 February 2026 06:01:16 +0000 (0:00:00.137) 0:10:14.494 ******* 2026-02-08 06:01:22.024234 | orchestrator | skipping: [testbed-node-0] 2026-02-08 06:01:22.024243 | orchestrator | 2026-02-08 06:01:22.024253 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] ********************* 2026-02-08 06:01:22.024276 | orchestrator | Sunday 08 February 2026 06:01:16 +0000 (0:00:00.134) 0:10:14.629 ******* 2026-02-08 06:01:22.024287 | orchestrator | skipping: [testbed-node-0] 2026-02-08 06:01:22.024296 | orchestrator | 2026-02-08 06:01:22.024307 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] ******************************* 2026-02-08 06:01:22.024316 | orchestrator | Sunday 08 February 2026 06:01:16 +0000 (0:00:00.138) 0:10:14.767 ******* 2026-02-08 06:01:22.024326 | orchestrator | skipping: [testbed-node-0] 2026-02-08 06:01:22.024335 | orchestrator | 2026-02-08 06:01:22.024345 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] ************** 2026-02-08 06:01:22.024355 | orchestrator | Sunday 08 February 2026 06:01:16 +0000 (0:00:00.130) 0:10:14.897 ******* 2026-02-08 06:01:22.024365 | orchestrator | skipping: [testbed-node-0] 2026-02-08 06:01:22.024375 | orchestrator | 2026-02-08 06:01:22.024385 | orchestrator | TASK [ceph-config : Render rgw configs] **************************************** 2026-02-08 06:01:22.024395 | orchestrator | Sunday 08 February 2026 06:01:17 +0000 (0:00:00.230) 0:10:15.128 ******* 
2026-02-08 06:01:22.024402 | orchestrator | skipping: [testbed-node-0] 2026-02-08 06:01:22.024411 | orchestrator | 2026-02-08 06:01:22.024418 | orchestrator | TASK [ceph-config : Set config to cluster] ************************************* 2026-02-08 06:01:22.024426 | orchestrator | Sunday 08 February 2026 06:01:17 +0000 (0:00:00.130) 0:10:15.259 ******* 2026-02-08 06:01:22.024434 | orchestrator | skipping: [testbed-node-0] 2026-02-08 06:01:22.024442 | orchestrator | 2026-02-08 06:01:22.024450 | orchestrator | TASK [ceph-config : Set rgw configs to file] *********************************** 2026-02-08 06:01:22.024458 | orchestrator | Sunday 08 February 2026 06:01:18 +0000 (0:00:00.920) 0:10:16.179 ******* 2026-02-08 06:01:22.024466 | orchestrator | skipping: [testbed-node-0] 2026-02-08 06:01:22.024474 | orchestrator | 2026-02-08 06:01:22.024481 | orchestrator | TASK [ceph-config : Create ceph conf directory] ******************************** 2026-02-08 06:01:22.024495 | orchestrator | Sunday 08 February 2026 06:01:18 +0000 (0:00:00.150) 0:10:16.330 ******* 2026-02-08 06:01:22.024503 | orchestrator | skipping: [testbed-node-0] 2026-02-08 06:01:22.024511 | orchestrator | 2026-02-08 06:01:22.024519 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-02-08 06:01:22.024528 | orchestrator | Sunday 08 February 2026 06:01:18 +0000 (0:00:00.157) 0:10:16.487 ******* 2026-02-08 06:01:22.024536 | orchestrator | skipping: [testbed-node-0] 2026-02-08 06:01:22.024544 | orchestrator | 2026-02-08 06:01:22.024552 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-02-08 06:01:22.024560 | orchestrator | Sunday 08 February 2026 06:01:18 +0000 (0:00:00.143) 0:10:16.631 ******* 2026-02-08 06:01:22.024568 | orchestrator | skipping: [testbed-node-0] 2026-02-08 06:01:22.024576 | orchestrator | 2026-02-08 06:01:22.024584 | orchestrator | 
TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-02-08 06:01:22.024592 | orchestrator | Sunday 08 February 2026 06:01:18 +0000 (0:00:00.146) 0:10:16.777 ******* 2026-02-08 06:01:22.024600 | orchestrator | skipping: [testbed-node-0] 2026-02-08 06:01:22.024607 | orchestrator | 2026-02-08 06:01:22.024615 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-02-08 06:01:22.024623 | orchestrator | Sunday 08 February 2026 06:01:18 +0000 (0:00:00.141) 0:10:16.919 ******* 2026-02-08 06:01:22.024631 | orchestrator | skipping: [testbed-node-0] 2026-02-08 06:01:22.024642 | orchestrator | 2026-02-08 06:01:22.024704 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-02-08 06:01:22.024716 | orchestrator | Sunday 08 February 2026 06:01:19 +0000 (0:00:00.155) 0:10:17.075 ******* 2026-02-08 06:01:22.024724 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2026-02-08 06:01:22.024732 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2026-02-08 06:01:22.024740 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2026-02-08 06:01:22.024748 | orchestrator | skipping: [testbed-node-0] 2026-02-08 06:01:22.024756 | orchestrator | 2026-02-08 06:01:22.024764 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-02-08 06:01:22.024772 | orchestrator | Sunday 08 February 2026 06:01:19 +0000 (0:00:00.403) 0:10:17.478 ******* 2026-02-08 06:01:22.024780 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2026-02-08 06:01:22.024788 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2026-02-08 06:01:22.024796 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2026-02-08 06:01:22.024804 | orchestrator | skipping: [testbed-node-0] 2026-02-08 06:01:22.024812 | orchestrator | 2026-02-08 06:01:22.024820 | 
orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-02-08 06:01:22.024828 | orchestrator | Sunday 08 February 2026 06:01:19 +0000 (0:00:00.456) 0:10:17.935 ******* 2026-02-08 06:01:22.024836 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2026-02-08 06:01:22.024877 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2026-02-08 06:01:22.024887 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2026-02-08 06:01:22.024895 | orchestrator | skipping: [testbed-node-0] 2026-02-08 06:01:22.024903 | orchestrator | 2026-02-08 06:01:22.024910 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-02-08 06:01:22.024918 | orchestrator | Sunday 08 February 2026 06:01:20 +0000 (0:00:00.453) 0:10:18.388 ******* 2026-02-08 06:01:22.024926 | orchestrator | skipping: [testbed-node-0] 2026-02-08 06:01:22.024934 | orchestrator | 2026-02-08 06:01:22.024942 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-02-08 06:01:22.024950 | orchestrator | Sunday 08 February 2026 06:01:20 +0000 (0:00:00.149) 0:10:18.538 ******* 2026-02-08 06:01:22.024958 | orchestrator | skipping: [testbed-node-0] => (item=0)  2026-02-08 06:01:22.024966 | orchestrator | skipping: [testbed-node-0] 2026-02-08 06:01:22.024974 | orchestrator | 2026-02-08 06:01:22.024982 | orchestrator | TASK [ceph-config : Generate Ceph file] **************************************** 2026-02-08 06:01:22.024996 | orchestrator | Sunday 08 February 2026 06:01:20 +0000 (0:00:00.369) 0:10:18.907 ******* 2026-02-08 06:01:22.025004 | orchestrator | ok: [testbed-node-0] 2026-02-08 06:01:22.025012 | orchestrator | 2026-02-08 06:01:22.025020 | orchestrator | TASK [ceph-mgr : Set_fact container_exec_cmd] ********************************** 2026-02-08 06:01:22.025034 | orchestrator | Sunday 08 February 2026 06:01:22 +0000 (0:00:01.149) 
0:10:20.057 ******* 2026-02-08 06:02:03.624333 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-02-08 06:02:03.624422 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-08 06:02:03.624433 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-08 06:02:03.624441 | orchestrator | 2026-02-08 06:02:03.624461 | orchestrator | TASK [ceph-mgr : Include common.yml] ******************************************* 2026-02-08 06:02:03.624468 | orchestrator | Sunday 08 February 2026 06:01:22 +0000 (0:00:00.698) 0:10:20.755 ******* 2026-02-08 06:02:03.624475 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/common.yml for testbed-node-0 2026-02-08 06:02:03.624482 | orchestrator | 2026-02-08 06:02:03.624489 | orchestrator | TASK [ceph-mgr : Create mgr directory] ***************************************** 2026-02-08 06:02:03.624495 | orchestrator | Sunday 08 February 2026 06:01:23 +0000 (0:00:00.594) 0:10:21.350 ******* 2026-02-08 06:02:03.624502 | orchestrator | ok: [testbed-node-0] 2026-02-08 06:02:03.624509 | orchestrator | 2026-02-08 06:02:03.624515 | orchestrator | TASK [ceph-mgr : Fetch ceph mgr keyring] *************************************** 2026-02-08 06:02:03.624522 | orchestrator | Sunday 08 February 2026 06:01:23 +0000 (0:00:00.535) 0:10:21.886 ******* 2026-02-08 06:02:03.624528 | orchestrator | skipping: [testbed-node-0] 2026-02-08 06:02:03.624535 | orchestrator | 2026-02-08 06:02:03.624542 | orchestrator | TASK [ceph-mgr : Create ceph mgr keyring(s) on a mon node] ********************* 2026-02-08 06:02:03.624548 | orchestrator | Sunday 08 February 2026 06:01:23 +0000 (0:00:00.144) 0:10:22.031 ******* 2026-02-08 06:02:03.624555 | orchestrator | ok: [testbed-node-0] => (item=None) 2026-02-08 06:02:03.624561 | orchestrator | ok: [testbed-node-0] => (item=None) 2026-02-08 06:02:03.624568 | orchestrator | ok: [testbed-node-0] => (item=None) 
2026-02-08 06:02:03.624574 | orchestrator | ok: [testbed-node-0 -> {{ groups[mon_group_name][0] }}] 2026-02-08 06:02:03.624580 | orchestrator | 2026-02-08 06:02:03.624587 | orchestrator | TASK [ceph-mgr : Set_fact _mgr_keys] ******************************************* 2026-02-08 06:02:03.624593 | orchestrator | Sunday 08 February 2026 06:01:30 +0000 (0:00:06.413) 0:10:28.444 ******* 2026-02-08 06:02:03.624600 | orchestrator | ok: [testbed-node-0] 2026-02-08 06:02:03.624606 | orchestrator | 2026-02-08 06:02:03.624612 | orchestrator | TASK [ceph-mgr : Get keys from monitors] *************************************** 2026-02-08 06:02:03.624619 | orchestrator | Sunday 08 February 2026 06:01:30 +0000 (0:00:00.185) 0:10:28.630 ******* 2026-02-08 06:02:03.624625 | orchestrator | skipping: [testbed-node-0] => (item=None)  2026-02-08 06:02:03.624631 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-02-08 06:02:03.624638 | orchestrator | 2026-02-08 06:02:03.624644 | orchestrator | TASK [ceph-mgr : Copy ceph key(s) if needed] *********************************** 2026-02-08 06:02:03.624650 | orchestrator | Sunday 08 February 2026 06:01:32 +0000 (0:00:02.306) 0:10:30.937 ******* 2026-02-08 06:02:03.624656 | orchestrator | skipping: [testbed-node-0] => (item=None)  2026-02-08 06:02:03.624711 | orchestrator | ok: [testbed-node-0] => (item=None) 2026-02-08 06:02:03.624720 | orchestrator | 2026-02-08 06:02:03.624726 | orchestrator | TASK [ceph-mgr : Set mgr key permissions] ************************************** 2026-02-08 06:02:03.624733 | orchestrator | Sunday 08 February 2026 06:01:33 +0000 (0:00:01.072) 0:10:32.009 ******* 2026-02-08 06:02:03.624739 | orchestrator | ok: [testbed-node-0] 2026-02-08 06:02:03.624746 | orchestrator | 2026-02-08 06:02:03.624752 | orchestrator | TASK [ceph-mgr : Append dashboard modules to ceph_mgr_modules] ***************** 2026-02-08 06:02:03.624758 | orchestrator | Sunday 08 February 2026 06:01:34 +0000 
(0:00:00.552) 0:10:32.562 ******* 2026-02-08 06:02:03.624780 | orchestrator | skipping: [testbed-node-0] 2026-02-08 06:02:03.624787 | orchestrator | 2026-02-08 06:02:03.624794 | orchestrator | TASK [ceph-mgr : Include pre_requisite.yml] ************************************ 2026-02-08 06:02:03.624800 | orchestrator | Sunday 08 February 2026 06:01:34 +0000 (0:00:00.125) 0:10:32.687 ******* 2026-02-08 06:02:03.624806 | orchestrator | skipping: [testbed-node-0] 2026-02-08 06:02:03.624812 | orchestrator | 2026-02-08 06:02:03.624818 | orchestrator | TASK [ceph-mgr : Include start_mgr.yml] **************************************** 2026-02-08 06:02:03.624825 | orchestrator | Sunday 08 February 2026 06:01:34 +0000 (0:00:00.135) 0:10:32.823 ******* 2026-02-08 06:02:03.624831 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/start_mgr.yml for testbed-node-0 2026-02-08 06:02:03.624837 | orchestrator | 2026-02-08 06:02:03.624843 | orchestrator | TASK [ceph-mgr : Ensure systemd service override directory exists] ************* 2026-02-08 06:02:03.624850 | orchestrator | Sunday 08 February 2026 06:01:35 +0000 (0:00:00.865) 0:10:33.688 ******* 2026-02-08 06:02:03.624856 | orchestrator | skipping: [testbed-node-0] 2026-02-08 06:02:03.624862 | orchestrator | 2026-02-08 06:02:03.624868 | orchestrator | TASK [ceph-mgr : Add ceph-mgr systemd service overrides] *********************** 2026-02-08 06:02:03.624874 | orchestrator | Sunday 08 February 2026 06:01:35 +0000 (0:00:00.160) 0:10:33.849 ******* 2026-02-08 06:02:03.624880 | orchestrator | skipping: [testbed-node-0] 2026-02-08 06:02:03.624887 | orchestrator | 2026-02-08 06:02:03.624893 | orchestrator | TASK [ceph-mgr : Include_tasks systemd.yml] ************************************ 2026-02-08 06:02:03.624902 | orchestrator | Sunday 08 February 2026 06:01:35 +0000 (0:00:00.154) 0:10:34.003 ******* 2026-02-08 06:02:03.624910 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/systemd.yml for testbed-node-0 2026-02-08 
06:02:03.624917 | orchestrator | 2026-02-08 06:02:03.624925 | orchestrator | TASK [ceph-mgr : Generate systemd unit file] *********************************** 2026-02-08 06:02:03.624933 | orchestrator | Sunday 08 February 2026 06:01:36 +0000 (0:00:00.565) 0:10:34.569 ******* 2026-02-08 06:02:03.624940 | orchestrator | ok: [testbed-node-0] 2026-02-08 06:02:03.624947 | orchestrator | 2026-02-08 06:02:03.624954 | orchestrator | TASK [ceph-mgr : Generate systemd ceph-mgr target file] ************************ 2026-02-08 06:02:03.624962 | orchestrator | Sunday 08 February 2026 06:01:37 +0000 (0:00:01.136) 0:10:35.705 ******* 2026-02-08 06:02:03.624969 | orchestrator | ok: [testbed-node-0] 2026-02-08 06:02:03.624976 | orchestrator | 2026-02-08 06:02:03.624984 | orchestrator | TASK [ceph-mgr : Enable ceph-mgr.target] *************************************** 2026-02-08 06:02:03.624992 | orchestrator | Sunday 08 February 2026 06:01:38 +0000 (0:00:01.010) 0:10:36.716 ******* 2026-02-08 06:02:03.625000 | orchestrator | ok: [testbed-node-0] 2026-02-08 06:02:03.625007 | orchestrator | 2026-02-08 06:02:03.625027 | orchestrator | TASK [ceph-mgr : Systemd start mgr] ******************************************** 2026-02-08 06:02:03.625035 | orchestrator | Sunday 08 February 2026 06:01:40 +0000 (0:00:01.452) 0:10:38.168 ******* 2026-02-08 06:02:03.625043 | orchestrator | changed: [testbed-node-0] 2026-02-08 06:02:03.625050 | orchestrator | 2026-02-08 06:02:03.625058 | orchestrator | TASK [ceph-mgr : Include mgr_modules.yml] ************************************** 2026-02-08 06:02:03.625069 | orchestrator | Sunday 08 February 2026 06:01:43 +0000 (0:00:03.022) 0:10:41.191 ******* 2026-02-08 06:02:03.625075 | orchestrator | skipping: [testbed-node-0] 2026-02-08 06:02:03.625082 | orchestrator | 2026-02-08 06:02:03.625088 | orchestrator | PLAY [Upgrade ceph mgr nodes] ************************************************** 2026-02-08 06:02:03.625094 | orchestrator | 2026-02-08 06:02:03.625100 | 
orchestrator | TASK [Stop ceph mgr] *********************************************************** 2026-02-08 06:02:03.625107 | orchestrator | Sunday 08 February 2026 06:01:43 +0000 (0:00:00.611) 0:10:41.802 ******* 2026-02-08 06:02:03.625113 | orchestrator | changed: [testbed-node-1] 2026-02-08 06:02:03.625119 | orchestrator | 2026-02-08 06:02:03.625126 | orchestrator | TASK [Mask ceph mgr systemd unit] ********************************************** 2026-02-08 06:02:03.625132 | orchestrator | Sunday 08 February 2026 06:01:55 +0000 (0:00:11.777) 0:10:53.580 ******* 2026-02-08 06:02:03.625138 | orchestrator | changed: [testbed-node-1] 2026-02-08 06:02:03.625150 | orchestrator | 2026-02-08 06:02:03.625156 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-02-08 06:02:03.625163 | orchestrator | Sunday 08 February 2026 06:01:57 +0000 (0:00:01.767) 0:10:55.347 ******* 2026-02-08 06:02:03.625169 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-1 2026-02-08 06:02:03.625175 | orchestrator | 2026-02-08 06:02:03.625182 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-02-08 06:02:03.625188 | orchestrator | Sunday 08 February 2026 06:01:57 +0000 (0:00:00.253) 0:10:55.601 ******* 2026-02-08 06:02:03.625194 | orchestrator | ok: [testbed-node-1] 2026-02-08 06:02:03.625201 | orchestrator | 2026-02-08 06:02:03.625207 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-02-08 06:02:03.625213 | orchestrator | Sunday 08 February 2026 06:01:58 +0000 (0:00:00.456) 0:10:56.057 ******* 2026-02-08 06:02:03.625219 | orchestrator | ok: [testbed-node-1] 2026-02-08 06:02:03.625226 | orchestrator | 2026-02-08 06:02:03.625232 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-02-08 06:02:03.625238 | orchestrator | Sunday 08 February 2026 06:01:58 +0000 
(0:00:00.149) 0:10:56.207 ******* 2026-02-08 06:02:03.625244 | orchestrator | ok: [testbed-node-1] 2026-02-08 06:02:03.625251 | orchestrator | 2026-02-08 06:02:03.625257 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-02-08 06:02:03.625263 | orchestrator | Sunday 08 February 2026 06:01:58 +0000 (0:00:00.471) 0:10:56.679 ******* 2026-02-08 06:02:03.625269 | orchestrator | ok: [testbed-node-1] 2026-02-08 06:02:03.625275 | orchestrator | 2026-02-08 06:02:03.625282 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-02-08 06:02:03.625288 | orchestrator | Sunday 08 February 2026 06:01:58 +0000 (0:00:00.156) 0:10:56.835 ******* 2026-02-08 06:02:03.625294 | orchestrator | ok: [testbed-node-1] 2026-02-08 06:02:03.625300 | orchestrator | 2026-02-08 06:02:03.625307 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-02-08 06:02:03.625313 | orchestrator | Sunday 08 February 2026 06:01:58 +0000 (0:00:00.153) 0:10:56.988 ******* 2026-02-08 06:02:03.625319 | orchestrator | ok: [testbed-node-1] 2026-02-08 06:02:03.625325 | orchestrator | 2026-02-08 06:02:03.625331 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-02-08 06:02:03.625338 | orchestrator | Sunday 08 February 2026 06:01:59 +0000 (0:00:00.171) 0:10:57.160 ******* 2026-02-08 06:02:03.625344 | orchestrator | skipping: [testbed-node-1] 2026-02-08 06:02:03.625350 | orchestrator | 2026-02-08 06:02:03.625357 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2026-02-08 06:02:03.625363 | orchestrator | Sunday 08 February 2026 06:01:59 +0000 (0:00:00.155) 0:10:57.315 ******* 2026-02-08 06:02:03.625369 | orchestrator | ok: [testbed-node-1] 2026-02-08 06:02:03.625375 | orchestrator | 2026-02-08 06:02:03.625382 | orchestrator | TASK [ceph-facts : Set_fact monitor_name 
ansible_facts['hostname']] ************ 2026-02-08 06:02:03.625388 | orchestrator | Sunday 08 February 2026 06:01:59 +0000 (0:00:00.152) 0:10:57.468 ******* 2026-02-08 06:02:03.625394 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-08 06:02:03.625400 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1) 2026-02-08 06:02:03.625407 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-08 06:02:03.625413 | orchestrator | 2026-02-08 06:02:03.625419 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2026-02-08 06:02:03.625426 | orchestrator | Sunday 08 February 2026 06:02:00 +0000 (0:00:01.025) 0:10:58.494 ******* 2026-02-08 06:02:03.625432 | orchestrator | ok: [testbed-node-1] 2026-02-08 06:02:03.625438 | orchestrator | 2026-02-08 06:02:03.625444 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-02-08 06:02:03.625451 | orchestrator | Sunday 08 February 2026 06:02:01 +0000 (0:00:00.907) 0:10:59.401 ******* 2026-02-08 06:02:03.625457 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-08 06:02:03.625468 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1) 2026-02-08 06:02:03.625474 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-08 06:02:03.625480 | orchestrator | 2026-02-08 06:02:03.625487 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-02-08 06:02:03.625493 | orchestrator | Sunday 08 February 2026 06:02:03 +0000 (0:00:01.813) 0:11:01.215 ******* 2026-02-08 06:02:03.625499 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2026-02-08 06:02:03.625506 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2026-02-08 06:02:03.625512 | orchestrator | skipping: 
[testbed-node-1] => (item=testbed-node-2)  2026-02-08 06:02:03.625519 | orchestrator | skipping: [testbed-node-1] 2026-02-08 06:02:03.625525 | orchestrator | 2026-02-08 06:02:03.625535 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-02-08 06:02:09.517390 | orchestrator | Sunday 08 February 2026 06:02:03 +0000 (0:00:00.444) 0:11:01.659 ******* 2026-02-08 06:02:09.517516 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-02-08 06:02:09.517535 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-02-08 06:02:09.517548 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-02-08 06:02:09.517560 | orchestrator | skipping: [testbed-node-1] 2026-02-08 06:02:09.517572 | orchestrator | 2026-02-08 06:02:09.517585 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-02-08 06:02:09.517597 | orchestrator | Sunday 08 February 2026 06:02:04 +0000 (0:00:00.658) 0:11:02.318 ******* 2026-02-08 06:02:09.517610 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 
'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-08 06:02:09.517625 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-08 06:02:09.517637 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-08 06:02:09.517648 | orchestrator | skipping: [testbed-node-1] 2026-02-08 06:02:09.517660 | orchestrator | 2026-02-08 06:02:09.517695 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-02-08 06:02:09.517707 | orchestrator | Sunday 08 February 2026 06:02:04 +0000 (0:00:00.185) 0:11:02.503 ******* 2026-02-08 06:02:09.517720 | orchestrator | ok: [testbed-node-1] => (item={'changed': False, 'stdout': 'd0204e5b0336', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-02-08 06:02:01.878725', 'end': '2026-02-08 06:02:01.926328', 'delta': '0:00:00.047603', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['d0204e5b0336'], 
'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2026-02-08 06:02:09.517759 | orchestrator | ok: [testbed-node-1] => (item={'changed': False, 'stdout': 'c9ff2bec9773', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-02-08 06:02:02.429278', 'end': '2026-02-08 06:02:02.466487', 'delta': '0:00:00.037209', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['c9ff2bec9773'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2026-02-08 06:02:09.517805 | orchestrator | ok: [testbed-node-1] => (item={'changed': False, 'stdout': '26deff989c40', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-02-08 06:02:02.966616', 'end': '2026-02-08 06:02:03.023812', 'delta': '0:00:00.057196', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['26deff989c40'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2026-02-08 06:02:09.517833 | orchestrator | 2026-02-08 06:02:09.517861 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-02-08 06:02:09.517880 | orchestrator | Sunday 08 February 2026 06:02:04 +0000 (0:00:00.208) 0:11:02.712 ******* 2026-02-08 
06:02:09.517899 | orchestrator | ok: [testbed-node-1] 2026-02-08 06:02:09.517918 | orchestrator | 2026-02-08 06:02:09.517937 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-02-08 06:02:09.517954 | orchestrator | Sunday 08 February 2026 06:02:04 +0000 (0:00:00.287) 0:11:02.999 ******* 2026-02-08 06:02:09.517973 | orchestrator | skipping: [testbed-node-1] 2026-02-08 06:02:09.517994 | orchestrator | 2026-02-08 06:02:09.518084 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2026-02-08 06:02:09.518105 | orchestrator | Sunday 08 February 2026 06:02:05 +0000 (0:00:00.266) 0:11:03.265 ******* 2026-02-08 06:02:09.518119 | orchestrator | ok: [testbed-node-1] 2026-02-08 06:02:09.518133 | orchestrator | 2026-02-08 06:02:09.518147 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2026-02-08 06:02:09.518160 | orchestrator | Sunday 08 February 2026 06:02:05 +0000 (0:00:00.166) 0:11:03.431 ******* 2026-02-08 06:02:09.518176 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] 2026-02-08 06:02:09.518195 | orchestrator | 2026-02-08 06:02:09.518213 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-02-08 06:02:09.518232 | orchestrator | Sunday 08 February 2026 06:02:07 +0000 (0:00:02.000) 0:11:05.432 ******* 2026-02-08 06:02:09.518251 | orchestrator | ok: [testbed-node-1] 2026-02-08 06:02:09.518270 | orchestrator | 2026-02-08 06:02:09.518288 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-02-08 06:02:09.518306 | orchestrator | Sunday 08 February 2026 06:02:07 +0000 (0:00:00.156) 0:11:05.589 ******* 2026-02-08 06:02:09.518323 | orchestrator | skipping: [testbed-node-1] 2026-02-08 06:02:09.518344 | orchestrator | 2026-02-08 06:02:09.518361 | orchestrator | TASK [ceph-facts : Generate cluster fsid] 
************************************** 2026-02-08 06:02:09.518394 | orchestrator | Sunday 08 February 2026 06:02:07 +0000 (0:00:00.123) 0:11:05.713 ******* 2026-02-08 06:02:09.518406 | orchestrator | skipping: [testbed-node-1] 2026-02-08 06:02:09.518417 | orchestrator | 2026-02-08 06:02:09.518428 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-02-08 06:02:09.518439 | orchestrator | Sunday 08 February 2026 06:02:07 +0000 (0:00:00.227) 0:11:05.940 ******* 2026-02-08 06:02:09.518449 | orchestrator | skipping: [testbed-node-1] 2026-02-08 06:02:09.518460 | orchestrator | 2026-02-08 06:02:09.518472 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-02-08 06:02:09.518483 | orchestrator | Sunday 08 February 2026 06:02:08 +0000 (0:00:00.471) 0:11:06.412 ******* 2026-02-08 06:02:09.518494 | orchestrator | skipping: [testbed-node-1] 2026-02-08 06:02:09.518506 | orchestrator | 2026-02-08 06:02:09.518516 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-02-08 06:02:09.518527 | orchestrator | Sunday 08 February 2026 06:02:08 +0000 (0:00:00.154) 0:11:06.566 ******* 2026-02-08 06:02:09.518538 | orchestrator | skipping: [testbed-node-1] 2026-02-08 06:02:09.518549 | orchestrator | 2026-02-08 06:02:09.518560 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-02-08 06:02:09.518571 | orchestrator | Sunday 08 February 2026 06:02:08 +0000 (0:00:00.138) 0:11:06.704 ******* 2026-02-08 06:02:09.518581 | orchestrator | skipping: [testbed-node-1] 2026-02-08 06:02:09.518592 | orchestrator | 2026-02-08 06:02:09.518603 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-02-08 06:02:09.518614 | orchestrator | Sunday 08 February 2026 06:02:08 +0000 (0:00:00.146) 0:11:06.851 ******* 2026-02-08 06:02:09.518633 | orchestrator | skipping: 
[testbed-node-1] 2026-02-08 06:02:09.518652 | orchestrator | 2026-02-08 06:02:09.518700 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-02-08 06:02:09.518722 | orchestrator | Sunday 08 February 2026 06:02:08 +0000 (0:00:00.154) 0:11:07.006 ******* 2026-02-08 06:02:09.518734 | orchestrator | skipping: [testbed-node-1] 2026-02-08 06:02:09.518745 | orchestrator | 2026-02-08 06:02:09.518756 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-02-08 06:02:09.518772 | orchestrator | Sunday 08 February 2026 06:02:09 +0000 (0:00:00.158) 0:11:07.164 ******* 2026-02-08 06:02:09.518790 | orchestrator | skipping: [testbed-node-1] 2026-02-08 06:02:09.518809 | orchestrator | 2026-02-08 06:02:09.518828 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-02-08 06:02:09.518847 | orchestrator | Sunday 08 February 2026 06:02:09 +0000 (0:00:00.140) 0:11:07.305 ******* 2026-02-08 06:02:09.518867 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-08 06:02:09.518903 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-08 06:02:09.748779 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-08 06:02:09.748907 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-08-02-32-44-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-02-08 06:02:09.748926 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-08 06:02:09.748939 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-08 06:02:09.748951 | orchestrator | skipping: [testbed-node-1] 
=> (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-08 06:02:09.749002 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bd3944a6-94a2-4419-9995-37d6054a2669', 'scsi-SQEMU_QEMU_HARDDISK_bd3944a6-94a2-4419-9995-37d6054a2669'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'bd3944a6', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bd3944a6-94a2-4419-9995-37d6054a2669-part16', 'scsi-SQEMU_QEMU_HARDDISK_bd3944a6-94a2-4419-9995-37d6054a2669-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bd3944a6-94a2-4419-9995-37d6054a2669-part14', 'scsi-SQEMU_QEMU_HARDDISK_bd3944a6-94a2-4419-9995-37d6054a2669-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bd3944a6-94a2-4419-9995-37d6054a2669-part15', 'scsi-SQEMU_QEMU_HARDDISK_bd3944a6-94a2-4419-9995-37d6054a2669-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_bd3944a6-94a2-4419-9995-37d6054a2669-part1', 'scsi-SQEMU_QEMU_HARDDISK_bd3944a6-94a2-4419-9995-37d6054a2669-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-02-08 06:02:09.749035 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-08 06:02:09.749048 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-08 06:02:09.749060 | orchestrator | skipping: [testbed-node-1] 2026-02-08 06:02:09.749073 | orchestrator | 2026-02-08 06:02:09.749085 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-02-08 06:02:09.749097 | orchestrator | Sunday 08 February 2026 06:02:09 +0000 (0:00:00.254) 0:11:07.560 ******* 2026-02-08 06:02:09.749110 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 
'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-08 06:02:09.749123 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-08 06:02:09.749135 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-08 06:02:09.749153 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 
'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-08-02-32-44-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-08 06:02:09.749203 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-08 06:02:15.785795 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-08 06:02:15.785904 | orchestrator | skipping: 
[testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-08 06:02:15.785942 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bd3944a6-94a2-4419-9995-37d6054a2669', 'scsi-SQEMU_QEMU_HARDDISK_bd3944a6-94a2-4419-9995-37d6054a2669'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'bd3944a6', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bd3944a6-94a2-4419-9995-37d6054a2669-part16', 'scsi-SQEMU_QEMU_HARDDISK_bd3944a6-94a2-4419-9995-37d6054a2669-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bd3944a6-94a2-4419-9995-37d6054a2669-part14', 'scsi-SQEMU_QEMU_HARDDISK_bd3944a6-94a2-4419-9995-37d6054a2669-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': 
{'ids': ['scsi-0QEMU_QEMU_HARDDISK_bd3944a6-94a2-4419-9995-37d6054a2669-part15', 'scsi-SQEMU_QEMU_HARDDISK_bd3944a6-94a2-4419-9995-37d6054a2669-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bd3944a6-94a2-4419-9995-37d6054a2669-part1', 'scsi-SQEMU_QEMU_HARDDISK_bd3944a6-94a2-4419-9995-37d6054a2669-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-08 06:02:15.785997 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-08 06:02:15.786012 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 
'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-08 06:02:15.786078 | orchestrator | skipping: [testbed-node-1] 2026-02-08 06:02:15.786092 | orchestrator | 2026-02-08 06:02:15.786106 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-02-08 06:02:15.786119 | orchestrator | Sunday 08 February 2026 06:02:09 +0000 (0:00:00.231) 0:11:07.792 ******* 2026-02-08 06:02:15.786130 | orchestrator | ok: [testbed-node-1] 2026-02-08 06:02:15.786142 | orchestrator | 2026-02-08 06:02:15.786153 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2026-02-08 06:02:15.786164 | orchestrator | Sunday 08 February 2026 06:02:10 +0000 (0:00:00.531) 0:11:08.323 ******* 2026-02-08 06:02:15.786174 | orchestrator | ok: [testbed-node-1] 2026-02-08 06:02:15.786185 | orchestrator | 2026-02-08 06:02:15.786196 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-02-08 06:02:15.786207 | orchestrator | Sunday 08 February 2026 06:02:10 +0000 (0:00:00.144) 0:11:08.468 ******* 2026-02-08 06:02:15.786218 | orchestrator | ok: [testbed-node-1] 2026-02-08 06:02:15.786229 | orchestrator | 2026-02-08 06:02:15.786240 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-02-08 06:02:15.786251 | orchestrator | Sunday 08 February 2026 06:02:10 +0000 (0:00:00.468) 0:11:08.936 ******* 2026-02-08 06:02:15.786262 | orchestrator | skipping: [testbed-node-1] 2026-02-08 06:02:15.786273 | orchestrator | 2026-02-08 06:02:15.786283 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-02-08 06:02:15.786295 | orchestrator | Sunday 08 February 2026 06:02:11 
+0000 (0:00:00.139) 0:11:09.075 ******* 2026-02-08 06:02:15.786305 | orchestrator | skipping: [testbed-node-1] 2026-02-08 06:02:15.786317 | orchestrator | 2026-02-08 06:02:15.786328 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-02-08 06:02:15.786339 | orchestrator | Sunday 08 February 2026 06:02:12 +0000 (0:00:00.978) 0:11:10.054 ******* 2026-02-08 06:02:15.786350 | orchestrator | skipping: [testbed-node-1] 2026-02-08 06:02:15.786360 | orchestrator | 2026-02-08 06:02:15.786371 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-02-08 06:02:15.786382 | orchestrator | Sunday 08 February 2026 06:02:12 +0000 (0:00:00.176) 0:11:10.231 ******* 2026-02-08 06:02:15.786393 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-0) 2026-02-08 06:02:15.786405 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1) 2026-02-08 06:02:15.786416 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-2) 2026-02-08 06:02:15.786426 | orchestrator | 2026-02-08 06:02:15.786438 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-02-08 06:02:15.786449 | orchestrator | Sunday 08 February 2026 06:02:12 +0000 (0:00:00.712) 0:11:10.943 ******* 2026-02-08 06:02:15.786473 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2026-02-08 06:02:15.786484 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2026-02-08 06:02:15.786495 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2026-02-08 06:02:15.786506 | orchestrator | skipping: [testbed-node-1] 2026-02-08 06:02:15.786517 | orchestrator | 2026-02-08 06:02:15.786528 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-02-08 06:02:15.786539 | orchestrator | Sunday 08 February 2026 06:02:13 +0000 (0:00:00.187) 0:11:11.131 ******* 2026-02-08 06:02:15.786550 | orchestrator 
| skipping: [testbed-node-1] 2026-02-08 06:02:15.786561 | orchestrator | 2026-02-08 06:02:15.786572 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-02-08 06:02:15.786583 | orchestrator | Sunday 08 February 2026 06:02:13 +0000 (0:00:00.157) 0:11:11.289 ******* 2026-02-08 06:02:15.786594 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-08 06:02:15.786605 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1) 2026-02-08 06:02:15.786616 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-08 06:02:15.786627 | orchestrator | ok: [testbed-node-1 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-02-08 06:02:15.786644 | orchestrator | ok: [testbed-node-1 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-02-08 06:02:15.786655 | orchestrator | ok: [testbed-node-1 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-02-08 06:02:15.786666 | orchestrator | ok: [testbed-node-1 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-02-08 06:02:15.786711 | orchestrator | 2026-02-08 06:02:15.786722 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-02-08 06:02:15.786733 | orchestrator | Sunday 08 February 2026 06:02:14 +0000 (0:00:00.812) 0:11:12.102 ******* 2026-02-08 06:02:15.786744 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-08 06:02:15.786755 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1) 2026-02-08 06:02:15.786766 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-08 06:02:15.786785 | orchestrator | ok: [testbed-node-1 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-02-08 06:02:26.232381 | orchestrator | ok: [testbed-node-1 -> testbed-node-4(192.168.16.14)] => 
(item=testbed-node-4) 2026-02-08 06:02:26.645289 | orchestrator | ok: [testbed-node-1 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-02-08 06:02:26.645370 | orchestrator | ok: [testbed-node-1 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-02-08 06:02:26.645385 | orchestrator | 2026-02-08 06:02:26.645399 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-02-08 06:02:26.645411 | orchestrator | Sunday 08 February 2026 06:02:15 +0000 (0:00:01.723) 0:11:13.825 ******* 2026-02-08 06:02:26.645422 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-1 2026-02-08 06:02:26.645434 | orchestrator | 2026-02-08 06:02:26.645446 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-02-08 06:02:26.645457 | orchestrator | Sunday 08 February 2026 06:02:15 +0000 (0:00:00.210) 0:11:14.036 ******* 2026-02-08 06:02:26.645468 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-1 2026-02-08 06:02:26.645479 | orchestrator | 2026-02-08 06:02:26.645490 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-02-08 06:02:26.645501 | orchestrator | Sunday 08 February 2026 06:02:16 +0000 (0:00:00.223) 0:11:14.259 ******* 2026-02-08 06:02:26.645512 | orchestrator | ok: [testbed-node-1] 2026-02-08 06:02:26.645525 | orchestrator | 2026-02-08 06:02:26.645536 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-02-08 06:02:26.645547 | orchestrator | Sunday 08 February 2026 06:02:16 +0000 (0:00:00.529) 0:11:14.789 ******* 2026-02-08 06:02:26.645593 | orchestrator | skipping: [testbed-node-1] 2026-02-08 06:02:26.645606 | orchestrator | 2026-02-08 06:02:26.645617 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-02-08 
06:02:26.645628 | orchestrator | Sunday 08 February 2026 06:02:17 +0000 (0:00:00.444) 0:11:15.233 ******* 2026-02-08 06:02:26.645639 | orchestrator | skipping: [testbed-node-1] 2026-02-08 06:02:26.645649 | orchestrator | 2026-02-08 06:02:26.645660 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-02-08 06:02:26.645698 | orchestrator | Sunday 08 February 2026 06:02:17 +0000 (0:00:00.133) 0:11:15.367 ******* 2026-02-08 06:02:26.645711 | orchestrator | skipping: [testbed-node-1] 2026-02-08 06:02:26.645723 | orchestrator | 2026-02-08 06:02:26.645734 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-02-08 06:02:26.645745 | orchestrator | Sunday 08 February 2026 06:02:17 +0000 (0:00:00.144) 0:11:15.512 ******* 2026-02-08 06:02:26.645755 | orchestrator | ok: [testbed-node-1] 2026-02-08 06:02:26.645773 | orchestrator | 2026-02-08 06:02:26.645792 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-02-08 06:02:26.645810 | orchestrator | Sunday 08 February 2026 06:02:18 +0000 (0:00:00.547) 0:11:16.060 ******* 2026-02-08 06:02:26.645828 | orchestrator | skipping: [testbed-node-1] 2026-02-08 06:02:26.645844 | orchestrator | 2026-02-08 06:02:26.645863 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-02-08 06:02:26.645881 | orchestrator | Sunday 08 February 2026 06:02:18 +0000 (0:00:00.135) 0:11:16.196 ******* 2026-02-08 06:02:26.645898 | orchestrator | skipping: [testbed-node-1] 2026-02-08 06:02:26.645914 | orchestrator | 2026-02-08 06:02:26.645931 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-02-08 06:02:26.645948 | orchestrator | Sunday 08 February 2026 06:02:18 +0000 (0:00:00.140) 0:11:16.336 ******* 2026-02-08 06:02:26.645966 | orchestrator | ok: [testbed-node-1] 2026-02-08 06:02:26.645983 | orchestrator | 
2026-02-08 06:02:26.646000 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-02-08 06:02:26.646076 | orchestrator | Sunday 08 February 2026 06:02:18 +0000 (0:00:00.544) 0:11:16.881 ******* 2026-02-08 06:02:26.646101 | orchestrator | ok: [testbed-node-1] 2026-02-08 06:02:26.646120 | orchestrator | 2026-02-08 06:02:26.646139 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-02-08 06:02:26.646160 | orchestrator | Sunday 08 February 2026 06:02:19 +0000 (0:00:00.589) 0:11:17.470 ******* 2026-02-08 06:02:26.646179 | orchestrator | skipping: [testbed-node-1] 2026-02-08 06:02:26.646200 | orchestrator | 2026-02-08 06:02:26.646219 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-02-08 06:02:26.646238 | orchestrator | Sunday 08 February 2026 06:02:19 +0000 (0:00:00.175) 0:11:17.646 ******* 2026-02-08 06:02:26.646249 | orchestrator | ok: [testbed-node-1] 2026-02-08 06:02:26.646260 | orchestrator | 2026-02-08 06:02:26.646271 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-02-08 06:02:26.646282 | orchestrator | Sunday 08 February 2026 06:02:19 +0000 (0:00:00.161) 0:11:17.808 ******* 2026-02-08 06:02:26.646295 | orchestrator | skipping: [testbed-node-1] 2026-02-08 06:02:26.646313 | orchestrator | 2026-02-08 06:02:26.646331 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-02-08 06:02:26.646348 | orchestrator | Sunday 08 February 2026 06:02:19 +0000 (0:00:00.129) 0:11:17.937 ******* 2026-02-08 06:02:26.646387 | orchestrator | skipping: [testbed-node-1] 2026-02-08 06:02:26.646406 | orchestrator | 2026-02-08 06:02:26.646424 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-02-08 06:02:26.646442 | orchestrator | Sunday 08 February 2026 06:02:20 +0000 (0:00:00.144) 
0:11:18.082 ******* 2026-02-08 06:02:26.646461 | orchestrator | skipping: [testbed-node-1] 2026-02-08 06:02:26.646479 | orchestrator | 2026-02-08 06:02:26.646498 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-02-08 06:02:26.646517 | orchestrator | Sunday 08 February 2026 06:02:20 +0000 (0:00:00.147) 0:11:18.229 ******* 2026-02-08 06:02:26.646545 | orchestrator | skipping: [testbed-node-1] 2026-02-08 06:02:26.646556 | orchestrator | 2026-02-08 06:02:26.646567 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-02-08 06:02:26.646578 | orchestrator | Sunday 08 February 2026 06:02:20 +0000 (0:00:00.150) 0:11:18.380 ******* 2026-02-08 06:02:26.646588 | orchestrator | skipping: [testbed-node-1] 2026-02-08 06:02:26.646599 | orchestrator | 2026-02-08 06:02:26.646610 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-02-08 06:02:26.646650 | orchestrator | Sunday 08 February 2026 06:02:20 +0000 (0:00:00.433) 0:11:18.813 ******* 2026-02-08 06:02:26.646661 | orchestrator | ok: [testbed-node-1] 2026-02-08 06:02:26.646698 | orchestrator | 2026-02-08 06:02:26.646712 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-02-08 06:02:26.646723 | orchestrator | Sunday 08 February 2026 06:02:20 +0000 (0:00:00.175) 0:11:18.989 ******* 2026-02-08 06:02:26.646734 | orchestrator | ok: [testbed-node-1] 2026-02-08 06:02:26.646745 | orchestrator | 2026-02-08 06:02:26.646755 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-02-08 06:02:26.646766 | orchestrator | Sunday 08 February 2026 06:02:21 +0000 (0:00:00.159) 0:11:19.148 ******* 2026-02-08 06:02:26.646776 | orchestrator | ok: [testbed-node-1] 2026-02-08 06:02:26.646787 | orchestrator | 2026-02-08 06:02:26.646798 | orchestrator | TASK [ceph-common : Include configure_repository.yml] 
************************** 2026-02-08 06:02:26.646808 | orchestrator | Sunday 08 February 2026 06:02:21 +0000 (0:00:00.217) 0:11:19.366 ******* 2026-02-08 06:02:26.646819 | orchestrator | skipping: [testbed-node-1] 2026-02-08 06:02:26.646829 | orchestrator | 2026-02-08 06:02:26.646840 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] ************** 2026-02-08 06:02:26.646851 | orchestrator | Sunday 08 February 2026 06:02:21 +0000 (0:00:00.156) 0:11:19.523 ******* 2026-02-08 06:02:26.646861 | orchestrator | skipping: [testbed-node-1] 2026-02-08 06:02:26.646872 | orchestrator | 2026-02-08 06:02:26.646883 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] **************** 2026-02-08 06:02:26.646894 | orchestrator | Sunday 08 February 2026 06:02:21 +0000 (0:00:00.138) 0:11:19.661 ******* 2026-02-08 06:02:26.646904 | orchestrator | skipping: [testbed-node-1] 2026-02-08 06:02:26.646915 | orchestrator | 2026-02-08 06:02:26.646925 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ******************** 2026-02-08 06:02:26.646936 | orchestrator | Sunday 08 February 2026 06:02:21 +0000 (0:00:00.136) 0:11:19.798 ******* 2026-02-08 06:02:26.646947 | orchestrator | skipping: [testbed-node-1] 2026-02-08 06:02:26.646957 | orchestrator | 2026-02-08 06:02:26.646968 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] *************** 2026-02-08 06:02:26.646979 | orchestrator | Sunday 08 February 2026 06:02:21 +0000 (0:00:00.135) 0:11:19.933 ******* 2026-02-08 06:02:26.646989 | orchestrator | skipping: [testbed-node-1] 2026-02-08 06:02:26.647000 | orchestrator | 2026-02-08 06:02:26.647010 | orchestrator | TASK [ceph-common : Get ceph version] ****************************************** 2026-02-08 06:02:26.647021 | orchestrator | Sunday 08 February 2026 06:02:22 +0000 (0:00:00.122) 0:11:20.056 ******* 2026-02-08 06:02:26.647032 | orchestrator | skipping: [testbed-node-1] 
2026-02-08 06:02:26.647043 | orchestrator | 2026-02-08 06:02:26.647053 | orchestrator | TASK [ceph-common : Set_fact ceph_version] ************************************* 2026-02-08 06:02:26.647064 | orchestrator | Sunday 08 February 2026 06:02:22 +0000 (0:00:00.139) 0:11:20.195 ******* 2026-02-08 06:02:26.647075 | orchestrator | skipping: [testbed-node-1] 2026-02-08 06:02:26.647085 | orchestrator | 2026-02-08 06:02:26.647096 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] *** 2026-02-08 06:02:26.647107 | orchestrator | Sunday 08 February 2026 06:02:22 +0000 (0:00:00.127) 0:11:20.323 ******* 2026-02-08 06:02:26.647118 | orchestrator | skipping: [testbed-node-1] 2026-02-08 06:02:26.647129 | orchestrator | 2026-02-08 06:02:26.647139 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] ************************* 2026-02-08 06:02:26.647150 | orchestrator | Sunday 08 February 2026 06:02:22 +0000 (0:00:00.125) 0:11:20.449 ******* 2026-02-08 06:02:26.647176 | orchestrator | skipping: [testbed-node-1] 2026-02-08 06:02:26.647187 | orchestrator | 2026-02-08 06:02:26.647197 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************ 2026-02-08 06:02:26.647208 | orchestrator | Sunday 08 February 2026 06:02:22 +0000 (0:00:00.453) 0:11:20.902 ******* 2026-02-08 06:02:26.647224 | orchestrator | skipping: [testbed-node-1] 2026-02-08 06:02:26.647242 | orchestrator | 2026-02-08 06:02:26.647261 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ******************** 2026-02-08 06:02:26.647279 | orchestrator | Sunday 08 February 2026 06:02:23 +0000 (0:00:00.157) 0:11:21.060 ******* 2026-02-08 06:02:26.647298 | orchestrator | skipping: [testbed-node-1] 2026-02-08 06:02:26.647313 | orchestrator | 2026-02-08 06:02:26.647331 | orchestrator | TASK [ceph-common : Include selinux.yml] *************************************** 2026-02-08 06:02:26.647349 | 
orchestrator | Sunday 08 February 2026 06:02:23 +0000 (0:00:00.165) 0:11:21.225 ******* 2026-02-08 06:02:26.647368 | orchestrator | skipping: [testbed-node-1] 2026-02-08 06:02:26.647388 | orchestrator | 2026-02-08 06:02:26.647407 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] *************** 2026-02-08 06:02:26.647422 | orchestrator | Sunday 08 February 2026 06:02:23 +0000 (0:00:00.211) 0:11:21.437 ******* 2026-02-08 06:02:26.647439 | orchestrator | ok: [testbed-node-1] 2026-02-08 06:02:26.647450 | orchestrator | 2026-02-08 06:02:26.647461 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ****************************** 2026-02-08 06:02:26.647477 | orchestrator | Sunday 08 February 2026 06:02:24 +0000 (0:00:00.951) 0:11:22.388 ******* 2026-02-08 06:02:26.647495 | orchestrator | ok: [testbed-node-1] 2026-02-08 06:02:26.647544 | orchestrator | 2026-02-08 06:02:26.647563 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] *********************** 2026-02-08 06:02:26.647590 | orchestrator | Sunday 08 February 2026 06:02:25 +0000 (0:00:01.358) 0:11:23.746 ******* 2026-02-08 06:02:26.647608 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-1 2026-02-08 06:02:26.647627 | orchestrator | 2026-02-08 06:02:26.647646 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************ 2026-02-08 06:02:26.647666 | orchestrator | Sunday 08 February 2026 06:02:25 +0000 (0:00:00.222) 0:11:23.969 ******* 2026-02-08 06:02:26.647709 | orchestrator | skipping: [testbed-node-1] 2026-02-08 06:02:26.647721 | orchestrator | 2026-02-08 06:02:26.647732 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] **************** 2026-02-08 06:02:26.647743 | orchestrator | Sunday 08 February 2026 06:02:26 +0000 (0:00:00.139) 0:11:24.108 ******* 2026-02-08 06:02:26.647753 | orchestrator | skipping: [testbed-node-1] 
2026-02-08 06:02:26.647764 | orchestrator | 2026-02-08 06:02:26.647775 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] ************************** 2026-02-08 06:02:26.647798 | orchestrator | Sunday 08 February 2026 06:02:26 +0000 (0:00:00.165) 0:11:24.274 ******* 2026-02-08 06:02:41.177864 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-02-08 06:02:41.177967 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-02-08 06:02:41.177983 | orchestrator | 2026-02-08 06:02:41.177996 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ******************** 2026-02-08 06:02:41.178008 | orchestrator | Sunday 08 February 2026 06:02:27 +0000 (0:00:00.843) 0:11:25.118 ******* 2026-02-08 06:02:41.178050 | orchestrator | ok: [testbed-node-1] 2026-02-08 06:02:41.178065 | orchestrator | 2026-02-08 06:02:41.178077 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************ 2026-02-08 06:02:41.178098 | orchestrator | Sunday 08 February 2026 06:02:27 +0000 (0:00:00.481) 0:11:25.599 ******* 2026-02-08 06:02:41.178109 | orchestrator | skipping: [testbed-node-1] 2026-02-08 06:02:41.178122 | orchestrator | 2026-02-08 06:02:41.178133 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ******************** 2026-02-08 06:02:41.178144 | orchestrator | Sunday 08 February 2026 06:02:28 +0000 (0:00:00.477) 0:11:26.077 ******* 2026-02-08 06:02:41.178155 | orchestrator | skipping: [testbed-node-1] 2026-02-08 06:02:41.178188 | orchestrator | 2026-02-08 06:02:41.178200 | orchestrator | TASK [ceph-container-common : Include registry.yml] **************************** 2026-02-08 06:02:41.178211 | orchestrator | Sunday 08 February 2026 06:02:28 +0000 (0:00:00.151) 0:11:26.229 ******* 2026-02-08 06:02:41.178222 | orchestrator | skipping: [testbed-node-1] 2026-02-08 06:02:41.178233 | orchestrator | 
2026-02-08 06:02:41.178244 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] ************************* 2026-02-08 06:02:41.178255 | orchestrator | Sunday 08 February 2026 06:02:28 +0000 (0:00:00.145) 0:11:26.374 ******* 2026-02-08 06:02:41.178266 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-1 2026-02-08 06:02:41.178277 | orchestrator | 2026-02-08 06:02:41.178288 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ******************** 2026-02-08 06:02:41.178298 | orchestrator | Sunday 08 February 2026 06:02:28 +0000 (0:00:00.209) 0:11:26.584 ******* 2026-02-08 06:02:41.178309 | orchestrator | ok: [testbed-node-1] 2026-02-08 06:02:41.178320 | orchestrator | 2026-02-08 06:02:41.178331 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] *** 2026-02-08 06:02:41.178341 | orchestrator | Sunday 08 February 2026 06:02:29 +0000 (0:00:00.746) 0:11:27.330 ******* 2026-02-08 06:02:41.178352 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-02-08 06:02:41.178363 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/prometheus:v2.7.2)  2026-02-08 06:02:41.178374 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/grafana/grafana:6.7.4)  2026-02-08 06:02:41.178385 | orchestrator | skipping: [testbed-node-1] 2026-02-08 06:02:41.178399 | orchestrator | 2026-02-08 06:02:41.178411 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] *********** 2026-02-08 06:02:41.178424 | orchestrator | Sunday 08 February 2026 06:02:29 +0000 (0:00:00.170) 0:11:27.500 ******* 2026-02-08 06:02:41.178438 | orchestrator | skipping: [testbed-node-1] 2026-02-08 06:02:41.178452 | orchestrator | 2026-02-08 06:02:41.178465 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] ********************* 2026-02-08 06:02:41.178478 | 
orchestrator | Sunday 08 February 2026 06:02:29 +0000 (0:00:00.138) 0:11:27.639 ******* 2026-02-08 06:02:41.178491 | orchestrator | skipping: [testbed-node-1] 2026-02-08 06:02:41.178504 | orchestrator | 2026-02-08 06:02:41.178517 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************ 2026-02-08 06:02:41.178530 | orchestrator | Sunday 08 February 2026 06:02:29 +0000 (0:00:00.161) 0:11:27.800 ******* 2026-02-08 06:02:41.178543 | orchestrator | skipping: [testbed-node-1] 2026-02-08 06:02:41.178556 | orchestrator | 2026-02-08 06:02:41.178569 | orchestrator | TASK [ceph-container-common : Load ceph dev image] ***************************** 2026-02-08 06:02:41.178582 | orchestrator | Sunday 08 February 2026 06:02:29 +0000 (0:00:00.164) 0:11:27.965 ******* 2026-02-08 06:02:41.178594 | orchestrator | skipping: [testbed-node-1] 2026-02-08 06:02:41.178607 | orchestrator | 2026-02-08 06:02:41.178620 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ****************** 2026-02-08 06:02:41.178633 | orchestrator | Sunday 08 February 2026 06:02:30 +0000 (0:00:00.153) 0:11:28.118 ******* 2026-02-08 06:02:41.178645 | orchestrator | skipping: [testbed-node-1] 2026-02-08 06:02:41.178658 | orchestrator | 2026-02-08 06:02:41.178672 | orchestrator | TASK [ceph-container-common : Get ceph version] ******************************** 2026-02-08 06:02:41.178715 | orchestrator | Sunday 08 February 2026 06:02:30 +0000 (0:00:00.163) 0:11:28.281 ******* 2026-02-08 06:02:41.178729 | orchestrator | ok: [testbed-node-1] 2026-02-08 06:02:41.178741 | orchestrator | 2026-02-08 06:02:41.178752 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] *** 2026-02-08 06:02:41.178763 | orchestrator | Sunday 08 February 2026 06:02:31 +0000 (0:00:01.572) 0:11:29.854 ******* 2026-02-08 06:02:41.178774 | orchestrator | ok: [testbed-node-1] 2026-02-08 06:02:41.178784 | orchestrator | 2026-02-08 
06:02:41.178807 | orchestrator | TASK [ceph-container-common : Include release.yml] ***************************** 2026-02-08 06:02:41.178818 | orchestrator | Sunday 08 February 2026 06:02:31 +0000 (0:00:00.145) 0:11:29.999 ******* 2026-02-08 06:02:41.178837 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-1 2026-02-08 06:02:41.178848 | orchestrator | 2026-02-08 06:02:41.178859 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] ********************* 2026-02-08 06:02:41.178869 | orchestrator | Sunday 08 February 2026 06:02:32 +0000 (0:00:00.508) 0:11:30.508 ******* 2026-02-08 06:02:41.178879 | orchestrator | skipping: [testbed-node-1] 2026-02-08 06:02:41.178891 | orchestrator | 2026-02-08 06:02:41.178901 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ******************** 2026-02-08 06:02:41.178912 | orchestrator | Sunday 08 February 2026 06:02:32 +0000 (0:00:00.172) 0:11:30.681 ******* 2026-02-08 06:02:41.178923 | orchestrator | skipping: [testbed-node-1] 2026-02-08 06:02:41.178933 | orchestrator | 2026-02-08 06:02:41.178944 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ****************** 2026-02-08 06:02:41.178973 | orchestrator | Sunday 08 February 2026 06:02:32 +0000 (0:00:00.175) 0:11:30.857 ******* 2026-02-08 06:02:41.178984 | orchestrator | skipping: [testbed-node-1] 2026-02-08 06:02:41.178995 | orchestrator | 2026-02-08 06:02:41.179006 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] ********************* 2026-02-08 06:02:41.179016 | orchestrator | Sunday 08 February 2026 06:02:32 +0000 (0:00:00.153) 0:11:31.010 ******* 2026-02-08 06:02:41.179027 | orchestrator | skipping: [testbed-node-1] 2026-02-08 06:02:41.179038 | orchestrator | 2026-02-08 06:02:41.179049 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ****************** 2026-02-08 06:02:41.179059 | orchestrator | 
Sunday 08 February 2026 06:02:33 +0000 (0:00:00.160) 0:11:31.171 ******* 2026-02-08 06:02:41.179070 | orchestrator | skipping: [testbed-node-1] 2026-02-08 06:02:41.179081 | orchestrator | 2026-02-08 06:02:41.179092 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] ******************* 2026-02-08 06:02:41.179102 | orchestrator | Sunday 08 February 2026 06:02:33 +0000 (0:00:00.167) 0:11:31.338 ******* 2026-02-08 06:02:41.179113 | orchestrator | skipping: [testbed-node-1] 2026-02-08 06:02:41.179124 | orchestrator | 2026-02-08 06:02:41.179135 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] ******************* 2026-02-08 06:02:41.179145 | orchestrator | Sunday 08 February 2026 06:02:33 +0000 (0:00:00.158) 0:11:31.497 ******* 2026-02-08 06:02:41.179156 | orchestrator | skipping: [testbed-node-1] 2026-02-08 06:02:41.179167 | orchestrator | 2026-02-08 06:02:41.179178 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ******************** 2026-02-08 06:02:41.179188 | orchestrator | Sunday 08 February 2026 06:02:33 +0000 (0:00:00.159) 0:11:31.656 ******* 2026-02-08 06:02:41.179199 | orchestrator | skipping: [testbed-node-1] 2026-02-08 06:02:41.179210 | orchestrator | 2026-02-08 06:02:41.179221 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] ********************** 2026-02-08 06:02:41.179231 | orchestrator | Sunday 08 February 2026 06:02:33 +0000 (0:00:00.179) 0:11:31.836 ******* 2026-02-08 06:02:41.179242 | orchestrator | ok: [testbed-node-1] 2026-02-08 06:02:41.179253 | orchestrator | 2026-02-08 06:02:41.179263 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] ********************** 2026-02-08 06:02:41.179274 | orchestrator | Sunday 08 February 2026 06:02:34 +0000 (0:00:00.243) 0:11:32.079 ******* 2026-02-08 06:02:41.179285 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-1 2026-02-08 
06:02:41.179296 | orchestrator | 2026-02-08 06:02:41.179307 | orchestrator | TASK [ceph-config : Create ceph initial directories] *************************** 2026-02-08 06:02:41.179317 | orchestrator | Sunday 08 February 2026 06:02:34 +0000 (0:00:00.509) 0:11:32.588 ******* 2026-02-08 06:02:41.179328 | orchestrator | ok: [testbed-node-1] => (item=/etc/ceph) 2026-02-08 06:02:41.179340 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/) 2026-02-08 06:02:41.179350 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/mon) 2026-02-08 06:02:41.179361 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/osd) 2026-02-08 06:02:41.179372 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/mds) 2026-02-08 06:02:41.179389 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/tmp) 2026-02-08 06:02:41.179400 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/crash) 2026-02-08 06:02:41.179411 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/radosgw) 2026-02-08 06:02:41.179422 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rgw) 2026-02-08 06:02:41.179432 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mgr) 2026-02-08 06:02:41.179443 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mds) 2026-02-08 06:02:41.179454 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-osd) 2026-02-08 06:02:41.179465 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd) 2026-02-08 06:02:41.179476 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-02-08 06:02:41.179486 | orchestrator | ok: [testbed-node-1] => (item=/var/run/ceph) 2026-02-08 06:02:41.179497 | orchestrator | ok: [testbed-node-1] => (item=/var/log/ceph) 2026-02-08 06:02:41.179508 | orchestrator | 2026-02-08 06:02:41.179519 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************ 2026-02-08 
06:02:41.179529 | orchestrator | Sunday 08 February 2026 06:02:40 +0000 (0:00:05.653) 0:11:38.241 ******* 2026-02-08 06:02:41.179540 | orchestrator | skipping: [testbed-node-1] 2026-02-08 06:02:41.179551 | orchestrator | 2026-02-08 06:02:41.179561 | orchestrator | TASK [ceph-config : Reset num_osds] ******************************************** 2026-02-08 06:02:41.179572 | orchestrator | Sunday 08 February 2026 06:02:40 +0000 (0:00:00.145) 0:11:38.387 ******* 2026-02-08 06:02:41.179582 | orchestrator | skipping: [testbed-node-1] 2026-02-08 06:02:41.179593 | orchestrator | 2026-02-08 06:02:41.179604 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] ********************* 2026-02-08 06:02:41.179615 | orchestrator | Sunday 08 February 2026 06:02:40 +0000 (0:00:00.138) 0:11:38.525 ******* 2026-02-08 06:02:41.179625 | orchestrator | skipping: [testbed-node-1] 2026-02-08 06:02:41.179636 | orchestrator | 2026-02-08 06:02:41.179652 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ****************** 2026-02-08 06:02:41.179663 | orchestrator | Sunday 08 February 2026 06:02:40 +0000 (0:00:00.137) 0:11:38.663 ******* 2026-02-08 06:02:41.179674 | orchestrator | skipping: [testbed-node-1] 2026-02-08 06:02:41.179710 | orchestrator | 2026-02-08 06:02:41.179721 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] ********************************* 2026-02-08 06:02:41.179731 | orchestrator | Sunday 08 February 2026 06:02:40 +0000 (0:00:00.139) 0:11:38.802 ******* 2026-02-08 06:02:41.179742 | orchestrator | skipping: [testbed-node-1] 2026-02-08 06:02:41.179753 | orchestrator | 2026-02-08 06:02:41.179764 | orchestrator | TASK [ceph-config : Set_fact _devices] ***************************************** 2026-02-08 06:02:41.179774 | orchestrator | Sunday 08 February 2026 06:02:40 +0000 (0:00:00.139) 0:11:38.941 ******* 2026-02-08 06:02:41.179785 | orchestrator | skipping: [testbed-node-1] 2026-02-08 06:02:41.179796 | 
orchestrator |
2026-02-08 06:02:41.179806 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] ***
2026-02-08 06:02:41.179817 | orchestrator | Sunday 08 February 2026 06:02:41 +0000 (0:00:00.130) 0:11:39.072 *******
2026-02-08 06:02:41.179835 | orchestrator | skipping: [testbed-node-1]
2026-02-08 06:03:01.140333 | orchestrator |
2026-02-08 06:03:01.140461 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] ***
2026-02-08 06:03:01.140480 | orchestrator | Sunday 08 February 2026 06:02:41 +0000 (0:00:00.145) 0:11:39.217 *******
2026-02-08 06:03:01.140492 | orchestrator | skipping: [testbed-node-1]
2026-02-08 06:03:01.140505 | orchestrator |
2026-02-08 06:03:01.140517 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] ***
2026-02-08 06:03:01.140529 | orchestrator | Sunday 08 February 2026 06:02:41 +0000 (0:00:00.163) 0:11:39.381 *******
2026-02-08 06:03:01.140541 | orchestrator | skipping: [testbed-node-1]
2026-02-08 06:03:01.140552 | orchestrator |
2026-02-08 06:03:01.140563 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] ***
2026-02-08 06:03:01.140599 | orchestrator | Sunday 08 February 2026 06:02:41 +0000 (0:00:00.141) 0:11:39.523 *******
2026-02-08 06:03:01.140611 | orchestrator | skipping: [testbed-node-1]
2026-02-08 06:03:01.140622 | orchestrator |
2026-02-08 06:03:01.140634 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] *********************
2026-02-08 06:03:01.140645 | orchestrator | Sunday 08 February 2026 06:02:41 +0000 (0:00:00.140) 0:11:39.663 *******
2026-02-08 06:03:01.140656 | orchestrator | skipping: [testbed-node-1]
2026-02-08 06:03:01.140667 | orchestrator |
2026-02-08 06:03:01.140678 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] *******************************
2026-02-08 06:03:01.140734 | orchestrator | Sunday 08 February 2026 06:02:41 +0000 (0:00:00.140) 0:11:39.804 *******
2026-02-08 06:03:01.140746 | orchestrator | skipping: [testbed-node-1]
2026-02-08 06:03:01.140757 | orchestrator |
2026-02-08 06:03:01.140768 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] **************
2026-02-08 06:03:01.140779 | orchestrator | Sunday 08 February 2026 06:02:42 +0000 (0:00:00.437) 0:11:40.242 *******
2026-02-08 06:03:01.140790 | orchestrator | skipping: [testbed-node-1]
2026-02-08 06:03:01.140804 | orchestrator |
2026-02-08 06:03:01.140818 | orchestrator | TASK [ceph-config : Render rgw configs] ****************************************
2026-02-08 06:03:01.140831 | orchestrator | Sunday 08 February 2026 06:02:42 +0000 (0:00:00.232) 0:11:40.475 *******
2026-02-08 06:03:01.140844 | orchestrator | skipping: [testbed-node-1]
2026-02-08 06:03:01.140858 | orchestrator |
2026-02-08 06:03:01.140871 | orchestrator | TASK [ceph-config : Set config to cluster] *************************************
2026-02-08 06:03:01.140885 | orchestrator | Sunday 08 February 2026 06:02:42 +0000 (0:00:00.151) 0:11:40.626 *******
2026-02-08 06:03:01.140900 | orchestrator | skipping: [testbed-node-1]
2026-02-08 06:03:01.140912 | orchestrator |
2026-02-08 06:03:01.140926 | orchestrator | TASK [ceph-config : Set rgw configs to file] ***********************************
2026-02-08 06:03:01.140940 | orchestrator | Sunday 08 February 2026 06:02:42 +0000 (0:00:00.226) 0:11:40.852 *******
2026-02-08 06:03:01.140953 | orchestrator | skipping: [testbed-node-1]
2026-02-08 06:03:01.140965 | orchestrator |
2026-02-08 06:03:01.140978 | orchestrator | TASK [ceph-config : Create ceph conf directory] ********************************
2026-02-08 06:03:01.140992 | orchestrator | Sunday 08 February 2026 06:02:42 +0000 (0:00:00.131) 0:11:40.984 *******
2026-02-08 06:03:01.141005 | orchestrator | skipping: [testbed-node-1]
2026-02-08 06:03:01.141020 | orchestrator |
2026-02-08 06:03:01.141034 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-02-08 06:03:01.141047 | orchestrator | Sunday 08 February 2026 06:02:43 +0000 (0:00:00.170) 0:11:41.155 *******
2026-02-08 06:03:01.141058 | orchestrator | skipping: [testbed-node-1]
2026-02-08 06:03:01.141069 | orchestrator |
2026-02-08 06:03:01.141080 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-02-08 06:03:01.141091 | orchestrator | Sunday 08 February 2026 06:02:43 +0000 (0:00:00.140) 0:11:41.295 *******
2026-02-08 06:03:01.141102 | orchestrator | skipping: [testbed-node-1]
2026-02-08 06:03:01.141113 | orchestrator |
2026-02-08 06:03:01.141124 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-02-08 06:03:01.141135 | orchestrator | Sunday 08 February 2026 06:02:43 +0000 (0:00:00.148) 0:11:41.443 *******
2026-02-08 06:03:01.141146 | orchestrator | skipping: [testbed-node-1]
2026-02-08 06:03:01.141157 | orchestrator |
2026-02-08 06:03:01.141169 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-02-08 06:03:01.141180 | orchestrator | Sunday 08 February 2026 06:02:43 +0000 (0:00:00.141) 0:11:41.585 *******
2026-02-08 06:03:01.141191 | orchestrator | skipping: [testbed-node-1]
2026-02-08 06:03:01.141202 | orchestrator |
2026-02-08 06:03:01.141213 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-02-08 06:03:01.141224 | orchestrator | Sunday 08 February 2026 06:02:43 +0000 (0:00:00.145) 0:11:41.730 *******
2026-02-08 06:03:01.141235 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)
2026-02-08 06:03:01.141255 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)
2026-02-08 06:03:01.141266 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)
2026-02-08 06:03:01.141293 | orchestrator | skipping: [testbed-node-1]
2026-02-08 06:03:01.141305 | orchestrator |
2026-02-08 06:03:01.141316 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-02-08 06:03:01.141327 | orchestrator | Sunday 08 February 2026 06:02:44 +0000 (0:00:00.402) 0:11:42.133 *******
2026-02-08 06:03:01.141338 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)
2026-02-08 06:03:01.141350 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)
2026-02-08 06:03:01.141361 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)
2026-02-08 06:03:01.141372 | orchestrator | skipping: [testbed-node-1]
2026-02-08 06:03:01.141383 | orchestrator |
2026-02-08 06:03:01.141394 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-02-08 06:03:01.141405 | orchestrator | Sunday 08 February 2026 06:02:44 +0000 (0:00:00.796) 0:11:42.929 *******
2026-02-08 06:03:01.141416 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)
2026-02-08 06:03:01.141427 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)
2026-02-08 06:03:01.141438 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)
2026-02-08 06:03:01.141466 | orchestrator | skipping: [testbed-node-1]
2026-02-08 06:03:01.141478 | orchestrator |
2026-02-08 06:03:01.141489 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-02-08 06:03:01.141500 | orchestrator | Sunday 08 February 2026 06:02:45 +0000 (0:00:00.765) 0:11:43.694 *******
2026-02-08 06:03:01.141511 | orchestrator | skipping: [testbed-node-1]
2026-02-08 06:03:01.141522 | orchestrator |
2026-02-08 06:03:01.141533 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-02-08 06:03:01.141544 | orchestrator | Sunday 08 February 2026 06:02:46 +0000 (0:00:00.466) 0:11:44.160 *******
2026-02-08 06:03:01.141555 | orchestrator | skipping: [testbed-node-1] => (item=0)
2026-02-08 06:03:01.141567 | orchestrator | skipping: [testbed-node-1]
2026-02-08 06:03:01.141578 | orchestrator |
2026-02-08 06:03:01.141589 | orchestrator | TASK [ceph-config : Generate Ceph file] ****************************************
2026-02-08 06:03:01.141600 | orchestrator | Sunday 08 February 2026 06:02:46 +0000 (0:00:00.318) 0:11:44.479 *******
2026-02-08 06:03:01.141611 | orchestrator | ok: [testbed-node-1]
2026-02-08 06:03:01.141622 | orchestrator |
2026-02-08 06:03:01.141633 | orchestrator | TASK [ceph-mgr : Set_fact container_exec_cmd] **********************************
2026-02-08 06:03:01.141644 | orchestrator | Sunday 08 February 2026 06:02:47 +0000 (0:00:00.817) 0:11:45.296 *******
2026-02-08 06:03:01.141655 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-02-08 06:03:01.141666 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1)
2026-02-08 06:03:01.141677 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-08 06:03:01.141706 | orchestrator |
2026-02-08 06:03:01.141717 | orchestrator | TASK [ceph-mgr : Include common.yml] *******************************************
2026-02-08 06:03:01.141728 | orchestrator | Sunday 08 February 2026 06:02:47 +0000 (0:00:00.676) 0:11:45.972 *******
2026-02-08 06:03:01.141739 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/common.yml for testbed-node-1
2026-02-08 06:03:01.141750 | orchestrator |
2026-02-08 06:03:01.141761 | orchestrator | TASK [ceph-mgr : Create mgr directory] *****************************************
2026-02-08 06:03:01.141772 | orchestrator | Sunday 08 February 2026 06:02:48 +0000 (0:00:00.204) 0:11:46.177 *******
2026-02-08 06:03:01.141783 | orchestrator | ok: [testbed-node-1]
2026-02-08 06:03:01.141794 | orchestrator |
2026-02-08 06:03:01.141805 | orchestrator | TASK [ceph-mgr : Fetch ceph mgr keyring] ***************************************
2026-02-08 06:03:01.141816 | orchestrator | Sunday 08 February 2026 06:02:48 +0000 (0:00:00.500) 0:11:46.677 *******
2026-02-08 06:03:01.141827 | orchestrator | skipping: [testbed-node-1]
2026-02-08 06:03:01.141838 | orchestrator |
2026-02-08 06:03:01.141849 | orchestrator | TASK [ceph-mgr : Create ceph mgr keyring(s) on a mon node] *********************
2026-02-08 06:03:01.141867 | orchestrator | Sunday 08 February 2026 06:02:48 +0000 (0:00:00.142) 0:11:46.819 *******
2026-02-08 06:03:01.141879 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-02-08 06:03:01.141890 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-02-08 06:03:01.141901 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-02-08 06:03:01.141911 | orchestrator | ok: [testbed-node-1 -> {{ groups[mon_group_name][0] }}]
2026-02-08 06:03:01.141922 | orchestrator |
2026-02-08 06:03:01.141933 | orchestrator | TASK [ceph-mgr : Set_fact _mgr_keys] *******************************************
2026-02-08 06:03:01.141944 | orchestrator | Sunday 08 February 2026 06:02:55 +0000 (0:00:07.003) 0:11:53.823 *******
2026-02-08 06:03:01.141955 | orchestrator | ok: [testbed-node-1]
2026-02-08 06:03:01.141966 | orchestrator |
2026-02-08 06:03:01.141977 | orchestrator | TASK [ceph-mgr : Get keys from monitors] ***************************************
2026-02-08 06:03:01.141988 | orchestrator | Sunday 08 February 2026 06:02:55 +0000 (0:00:00.176) 0:11:54.000 *******
2026-02-08 06:03:01.141999 | orchestrator | skipping: [testbed-node-1] => (item=None)
2026-02-08 06:03:01.142010 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=None)
2026-02-08 06:03:01.142085 | orchestrator |
2026-02-08 06:03:01.142105 | orchestrator | TASK [ceph-mgr : Copy ceph key(s) if needed] ***********************************
2026-02-08 06:03:01.142123 | orchestrator | Sunday 08 February 2026 06:02:58 +0000 (0:00:02.593) 0:11:56.593 *******
2026-02-08 06:03:01.142142 | orchestrator | skipping: [testbed-node-1] => (item=None)
2026-02-08 06:03:01.142160 | orchestrator | ok: [testbed-node-1] => (item=None)
2026-02-08 06:03:01.142179 | orchestrator |
2026-02-08 06:03:01.142190 | orchestrator | TASK [ceph-mgr : Set mgr key permissions] **************************************
2026-02-08 06:03:01.142201 | orchestrator | Sunday 08 February 2026 06:02:59 +0000 (0:00:01.280) 0:11:57.874 *******
2026-02-08 06:03:01.142212 | orchestrator | ok: [testbed-node-1]
2026-02-08 06:03:01.142223 | orchestrator |
2026-02-08 06:03:01.142234 | orchestrator | TASK [ceph-mgr : Append dashboard modules to ceph_mgr_modules] *****************
2026-02-08 06:03:01.142245 | orchestrator | Sunday 08 February 2026 06:03:00 +0000 (0:00:00.604) 0:11:58.478 *******
2026-02-08 06:03:01.142262 | orchestrator | skipping: [testbed-node-1]
2026-02-08 06:03:01.142273 | orchestrator |
2026-02-08 06:03:01.142284 | orchestrator | TASK [ceph-mgr : Include pre_requisite.yml] ************************************
2026-02-08 06:03:01.142295 | orchestrator | Sunday 08 February 2026 06:03:00 +0000 (0:00:00.164) 0:11:58.643 *******
2026-02-08 06:03:01.142306 | orchestrator | skipping: [testbed-node-1]
2026-02-08 06:03:01.142317 | orchestrator |
2026-02-08 06:03:01.142328 | orchestrator | TASK [ceph-mgr : Include start_mgr.yml] ****************************************
2026-02-08 06:03:01.142339 | orchestrator | Sunday 08 February 2026 06:03:00 +0000 (0:00:00.139) 0:11:58.783 *******
2026-02-08 06:03:01.142350 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/start_mgr.yml for testbed-node-1
2026-02-08 06:03:01.142361 | orchestrator |
2026-02-08 06:03:01.142371 | orchestrator | TASK [ceph-mgr : Ensure systemd service override directory exists] *************
2026-02-08 06:03:01.142382 | orchestrator | Sunday 08 February 2026 06:03:00 +0000 (0:00:00.167) 0:11:59.006 *******
2026-02-08 06:03:01.142393 | orchestrator | skipping: [testbed-node-1]
2026-02-08 06:03:01.142403 | orchestrator |
2026-02-08 06:03:01.142414 | orchestrator | TASK [ceph-mgr : Add ceph-mgr systemd service overrides] ***********************
2026-02-08 06:03:01.142435 | orchestrator | Sunday 08 February 2026 06:03:01 +0000 (0:00:00.176) 0:11:59.174 *******
2026-02-08 06:03:18.814266 | orchestrator | skipping: [testbed-node-1]
2026-02-08 06:03:18.814362 | orchestrator |
2026-02-08 06:03:18.814377 | orchestrator | TASK [ceph-mgr : Include_tasks systemd.yml] ************************************
2026-02-08 06:03:18.814387 | orchestrator | Sunday 08 February 2026 06:03:01 +0000 (0:00:00.176) 0:11:59.351 *******
2026-02-08 06:03:18.814396 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/systemd.yml for testbed-node-1
2026-02-08 06:03:18.814426 | orchestrator |
2026-02-08 06:03:18.814436 | orchestrator | TASK [ceph-mgr : Generate systemd unit file] ***********************************
2026-02-08 06:03:18.814445 | orchestrator | Sunday 08 February 2026 06:03:01 +0000 (0:00:00.220) 0:11:59.572 *******
2026-02-08 06:03:18.814454 | orchestrator | ok: [testbed-node-1]
2026-02-08 06:03:18.814463 | orchestrator |
2026-02-08 06:03:18.814472 | orchestrator | TASK [ceph-mgr : Generate systemd ceph-mgr target file] ************************
2026-02-08 06:03:18.814481 | orchestrator | Sunday 08 February 2026 06:03:02 +0000 (0:00:01.086) 0:12:00.658 *******
2026-02-08 06:03:18.814490 | orchestrator | ok: [testbed-node-1]
2026-02-08 06:03:18.814499 | orchestrator |
2026-02-08 06:03:18.814508 | orchestrator | TASK [ceph-mgr : Enable ceph-mgr.target] ***************************************
2026-02-08 06:03:18.814516 | orchestrator | Sunday 08 February 2026 06:03:03 +0000 (0:00:00.925) 0:12:01.583 *******
2026-02-08 06:03:18.814525 | orchestrator | ok: [testbed-node-1]
2026-02-08 06:03:18.814535 | orchestrator |
2026-02-08 06:03:18.814544 | orchestrator | TASK [ceph-mgr : Systemd start mgr] ********************************************
2026-02-08 06:03:18.814553 | orchestrator | Sunday 08 February 2026 06:03:04 +0000 (0:00:01.392) 0:12:02.976 *******
2026-02-08 06:03:18.814562 | orchestrator | changed: [testbed-node-1]
2026-02-08 06:03:18.814571 | orchestrator |
2026-02-08 06:03:18.814579 | orchestrator | TASK [ceph-mgr : Include mgr_modules.yml] **************************************
2026-02-08 06:03:18.814588 | orchestrator | Sunday 08 February 2026 06:03:07 +0000 (0:00:03.003) 0:12:05.979 *******
2026-02-08 06:03:18.814597 | orchestrator | skipping: [testbed-node-1]
2026-02-08 06:03:18.814606 | orchestrator |
2026-02-08 06:03:18.814614 | orchestrator | PLAY [Upgrade ceph mgr nodes] **************************************************
2026-02-08 06:03:18.814623 | orchestrator |
2026-02-08 06:03:18.814632 | orchestrator | TASK [Stop ceph mgr] ***********************************************************
2026-02-08 06:03:18.814641 | orchestrator | Sunday 08 February 2026 06:03:08 +0000 (0:00:00.661) 0:12:06.640 *******
2026-02-08 06:03:18.814649 | orchestrator | changed: [testbed-node-2]
2026-02-08 06:03:18.814658 | orchestrator |
2026-02-08 06:03:18.814667 | orchestrator | TASK [Mask ceph mgr systemd unit] **********************************************
2026-02-08 06:03:18.814676 | orchestrator | Sunday 08 February 2026 06:03:10 +0000 (0:00:01.854) 0:12:08.494 *******
2026-02-08 06:03:18.814684 | orchestrator | changed: [testbed-node-2]
2026-02-08 06:03:18.814738 | orchestrator |
2026-02-08 06:03:18.814748 | orchestrator | TASK [ceph-facts : Include facts.yml] ******************************************
2026-02-08 06:03:18.814757 | orchestrator | Sunday 08 February 2026 06:03:11 +0000 (0:00:01.544) 0:12:10.039 *******
2026-02-08 06:03:18.814766 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-2
2026-02-08 06:03:18.814775 | orchestrator |
2026-02-08 06:03:18.814783 | orchestrator | TASK [ceph-facts : Check if it is atomic host] *********************************
2026-02-08 06:03:18.814792 | orchestrator | Sunday 08 February 2026 06:03:12 +0000 (0:00:00.245) 0:12:10.284 *******
2026-02-08 06:03:18.814801 | orchestrator | ok: [testbed-node-2]
2026-02-08 06:03:18.814811 | orchestrator |
2026-02-08 06:03:18.814822 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] *****************************************
2026-02-08 06:03:18.814832 | orchestrator | Sunday 08 February 2026 06:03:12 +0000 (0:00:00.490) 0:12:10.775 *******
2026-02-08 06:03:18.814847 | orchestrator | ok: [testbed-node-2]
2026-02-08 06:03:18.814862 | orchestrator |
2026-02-08 06:03:18.814877 | orchestrator | TASK [ceph-facts : Check if podman binary is present] **************************
2026-02-08 06:03:18.814892 | orchestrator | Sunday 08 February 2026 06:03:12 +0000 (0:00:00.139) 0:12:10.915 *******
2026-02-08 06:03:18.814906 | orchestrator | ok: [testbed-node-2]
2026-02-08 06:03:18.814921 | orchestrator |
2026-02-08 06:03:18.814937 | orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
2026-02-08 06:03:18.814953 | orchestrator | Sunday 08 February 2026 06:03:13 +0000 (0:00:00.457) 0:12:11.372 *******
2026-02-08 06:03:18.814968 | orchestrator | ok: [testbed-node-2]
2026-02-08 06:03:18.814983 | orchestrator |
2026-02-08 06:03:18.814995 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ******************************************
2026-02-08 06:03:18.815005 | orchestrator | Sunday 08 February 2026 06:03:13 +0000 (0:00:00.149) 0:12:11.521 *******
2026-02-08 06:03:18.815024 | orchestrator | ok: [testbed-node-2]
2026-02-08 06:03:18.815035 | orchestrator |
2026-02-08 06:03:18.815044 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] *********************
2026-02-08 06:03:18.815054 | orchestrator | Sunday 08 February 2026 06:03:13 +0000 (0:00:00.155) 0:12:11.677 *******
2026-02-08 06:03:18.815064 | orchestrator | ok: [testbed-node-2]
2026-02-08 06:03:18.815074 | orchestrator |
2026-02-08 06:03:18.815085 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] ***
2026-02-08 06:03:18.815108 | orchestrator | Sunday 08 February 2026 06:03:14 +0000 (0:00:00.471) 0:12:12.149 *******
2026-02-08 06:03:18.815118 | orchestrator | skipping: [testbed-node-2]
2026-02-08 06:03:18.815130 | orchestrator |
2026-02-08 06:03:18.815141 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ******************
2026-02-08 06:03:18.815151 | orchestrator | Sunday 08 February 2026 06:03:14 +0000 (0:00:00.146) 0:12:12.295 *******
2026-02-08 06:03:18.815162 | orchestrator | ok: [testbed-node-2]
2026-02-08 06:03:18.815172 | orchestrator |
2026-02-08 06:03:18.815183 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************
2026-02-08 06:03:18.815193 | orchestrator | Sunday 08 February 2026 06:03:14 +0000 (0:00:00.161) 0:12:12.456 *******
2026-02-08 06:03:18.815203 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-02-08 06:03:18.815211 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-08 06:03:18.815220 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2)
2026-02-08 06:03:18.815229 | orchestrator |
2026-02-08 06:03:18.815238 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ********************************
2026-02-08 06:03:18.815264 | orchestrator | Sunday 08 February 2026 06:03:15 +0000 (0:00:00.713) 0:12:13.170 *******
2026-02-08 06:03:18.815273 | orchestrator | ok: [testbed-node-2]
2026-02-08 06:03:18.815282 | orchestrator |
2026-02-08 06:03:18.815291 | orchestrator | TASK [ceph-facts : Find a running mon container] *******************************
2026-02-08 06:03:18.815300 | orchestrator | Sunday 08 February 2026 06:03:15 +0000 (0:00:00.296) 0:12:13.466 *******
2026-02-08 06:03:18.815308 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-02-08 06:03:18.815317 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-08 06:03:18.815326 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2)
2026-02-08 06:03:18.815334 | orchestrator |
2026-02-08 06:03:18.815343 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ********************************
2026-02-08 06:03:18.815352 | orchestrator | Sunday 08 February 2026 06:03:17 +0000 (0:00:01.895) 0:12:15.361 *******
2026-02-08 06:03:18.815363 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)
2026-02-08 06:03:18.815379 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)
2026-02-08 06:03:18.815391 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)
2026-02-08 06:03:18.815405 | orchestrator | skipping: [testbed-node-2]
2026-02-08 06:03:18.815418 | orchestrator |
2026-02-08 06:03:18.815430 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] *********************
2026-02-08 06:03:18.815444 | orchestrator | Sunday 08 February 2026 06:03:17 +0000 (0:00:00.458) 0:12:15.820 *******
2026-02-08 06:03:18.815460 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-02-08 06:03:18.815478 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-02-08 06:03:18.815493 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-02-08 06:03:18.815517 | orchestrator | skipping: [testbed-node-2]
2026-02-08 06:03:18.815531 | orchestrator |
2026-02-08 06:03:18.815546 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] ***********************
2026-02-08 06:03:18.815560 | orchestrator | Sunday 08 February 2026 06:03:18 +0000 (0:00:00.651) 0:12:16.472 *******
2026-02-08 06:03:18.815577 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-02-08 06:03:18.815596 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-02-08 06:03:18.815611 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-02-08 06:03:18.815621 | orchestrator | skipping: [testbed-node-2]
2026-02-08 06:03:18.815630 | orchestrator |
2026-02-08 06:03:18.815645 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] ***************************
2026-02-08 06:03:18.815654 | orchestrator | Sunday 08 February 2026 06:03:18 +0000 (0:00:00.175) 0:12:16.648 *******
2026-02-08 06:03:18.815674 | orchestrator | ok: [testbed-node-2] => (item={'changed': False, 'stdout': 'd0204e5b0336', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-02-08 06:03:15.995899', 'end': '2026-02-08 06:03:16.064385', 'delta': '0:00:00.068486', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['d0204e5b0336'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-02-08 06:03:23.192085 | orchestrator | ok: [testbed-node-2] => (item={'changed': False, 'stdout': 'c9ff2bec9773', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-02-08 06:03:16.591474', 'end': '2026-02-08 06:03:16.634839', 'delta': '0:00:00.043365', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['c9ff2bec9773'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-02-08 06:03:23.192216 | orchestrator | ok: [testbed-node-2] => (item={'changed': False, 'stdout': '26deff989c40', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-02-08 06:03:17.121107', 'end': '2026-02-08 06:03:17.168009', 'delta': '0:00:00.046902', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['26deff989c40'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-02-08 06:03:23.192270 | orchestrator |
2026-02-08 06:03:23.192285 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] *******************************
2026-02-08 06:03:23.192298 | orchestrator | Sunday 08 February 2026 06:03:18 +0000 (0:00:00.206) 0:12:16.854 *******
2026-02-08 06:03:23.192309 | orchestrator | ok: [testbed-node-2]
2026-02-08 06:03:23.192322 | orchestrator |
2026-02-08 06:03:23.192333 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] *************
2026-02-08 06:03:23.192344 | orchestrator | Sunday 08 February 2026 06:03:19 +0000 (0:00:00.235) 0:12:17.167 *******
2026-02-08 06:03:23.192355 | orchestrator | skipping: [testbed-node-2]
2026-02-08 06:03:23.192367 | orchestrator |
2026-02-08 06:03:23.192378 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] *********************************
2026-02-08 06:03:23.192389 | orchestrator | Sunday 08 February 2026 06:03:19 +0000 (0:00:00.157) 0:12:17.403 *******
2026-02-08 06:03:23.192400 | orchestrator | ok: [testbed-node-2]
2026-02-08 06:03:23.192411 | orchestrator |
2026-02-08 06:03:23.192421 | orchestrator | TASK [ceph-facts : Get current fsid] *******************************************
2026-02-08 06:03:23.192432 | orchestrator | Sunday 08 February 2026 06:03:19 +0000 (0:00:00.157) 0:12:17.561 *******
2026-02-08 06:03:23.192443 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)]
2026-02-08 06:03:23.192454 | orchestrator |
2026-02-08 06:03:23.192465 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-02-08 06:03:23.192476 | orchestrator | Sunday 08 February 2026 06:03:20 +0000 (0:00:01.431) 0:12:18.992 *******
2026-02-08 06:03:23.192487 | orchestrator | ok: [testbed-node-2]
2026-02-08 06:03:23.192497 | orchestrator |
2026-02-08 06:03:23.192508 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] ****************************
2026-02-08 06:03:23.192519 | orchestrator | Sunday 08 February 2026 06:03:21 +0000 (0:00:00.482) 0:12:19.474 *******
2026-02-08 06:03:23.192530 | orchestrator | skipping: [testbed-node-2]
2026-02-08 06:03:23.192541 | orchestrator |
2026-02-08 06:03:23.192552 | orchestrator | TASK [ceph-facts : Generate cluster fsid] **************************************
2026-02-08 06:03:23.192563 | orchestrator | Sunday 08 February 2026 06:03:21 +0000 (0:00:00.150) 0:12:19.625 *******
2026-02-08 06:03:23.192573 | orchestrator | skipping: [testbed-node-2]
2026-02-08 06:03:23.192584 | orchestrator |
2026-02-08 06:03:23.192595 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-02-08 06:03:23.192606 | orchestrator | Sunday 08 February 2026 06:03:21 +0000 (0:00:00.256) 0:12:19.881 *******
2026-02-08 06:03:23.192620 | orchestrator | skipping: [testbed-node-2]
2026-02-08 06:03:23.192633 | orchestrator |
2026-02-08 06:03:23.192664 | orchestrator | TASK [ceph-facts : Resolve device link(s)] *************************************
2026-02-08 06:03:23.192684 | orchestrator | Sunday 08 February 2026 06:03:21 +0000 (0:00:00.139) 0:12:20.021 *******
2026-02-08 06:03:23.192756 | orchestrator | skipping: [testbed-node-2]
2026-02-08 06:03:23.192774 | orchestrator |
2026-02-08 06:03:23.192788 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] **************
2026-02-08 06:03:23.192801 | orchestrator | Sunday 08 February 2026 06:03:22 +0000 (0:00:00.165) 0:12:20.186 *******
2026-02-08 06:03:23.192814 | orchestrator | skipping: [testbed-node-2]
2026-02-08 06:03:23.192827 | orchestrator |
2026-02-08 06:03:23.192840 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] ***************************
2026-02-08 06:03:23.192852 | orchestrator | Sunday 08 February 2026 06:03:22 +0000 (0:00:00.146) 0:12:20.333 *******
2026-02-08 06:03:23.192864 | orchestrator | skipping: [testbed-node-2]
2026-02-08 06:03:23.192876 | orchestrator |
2026-02-08 06:03:23.192889 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] ****
2026-02-08 06:03:23.192906 | orchestrator | Sunday 08 February 2026 06:03:22 +0000 (0:00:00.162) 0:12:20.495 *******
2026-02-08 06:03:23.192939 | orchestrator | skipping: [testbed-node-2]
2026-02-08 06:03:23.192954 | orchestrator |
2026-02-08 06:03:23.192967 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] ***********************
2026-02-08 06:03:23.192998 | orchestrator | Sunday 08 February 2026 06:03:22 +0000 (0:00:00.148) 0:12:20.644 *******
2026-02-08 06:03:23.193009 | orchestrator | skipping: [testbed-node-2]
2026-02-08 06:03:23.193020 | orchestrator |
2026-02-08 06:03:23.193031 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] ***
2026-02-08 06:03:23.193042 | orchestrator | Sunday 08 February 2026 06:03:22 +0000 (0:00:00.151) 0:12:20.796 *******
2026-02-08 06:03:23.193053 | orchestrator | skipping: [testbed-node-2]
2026-02-08 06:03:23.193064 | orchestrator |
2026-02-08 06:03:23.193075 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************
2026-02-08 06:03:23.193086 | orchestrator | Sunday 08 February 2026 06:03:22 +0000 (0:00:00.147) 0:12:20.943 *******
2026-02-08 06:03:23.193098 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-08 06:03:23.193113 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-08 06:03:23.193125 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-08 06:03:23.193140 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-08-02-32-47-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})
2026-02-08 06:03:23.193161 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-08 06:03:23.193182 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-08 06:03:23.193211 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-08 06:03:23.193245 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7f0c6f27-797f-46da-82fd-067f06c1f72b', 'scsi-SQEMU_QEMU_HARDDISK_7f0c6f27-797f-46da-82fd-067f06c1f72b'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '7f0c6f27', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7f0c6f27-797f-46da-82fd-067f06c1f72b-part16', 'scsi-SQEMU_QEMU_HARDDISK_7f0c6f27-797f-46da-82fd-067f06c1f72b-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7f0c6f27-797f-46da-82fd-067f06c1f72b-part14', 'scsi-SQEMU_QEMU_HARDDISK_7f0c6f27-797f-46da-82fd-067f06c1f72b-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7f0c6f27-797f-46da-82fd-067f06c1f72b-part15', 'scsi-SQEMU_QEMU_HARDDISK_7f0c6f27-797f-46da-82fd-067f06c1f72b-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7f0c6f27-797f-46da-82fd-067f06c1f72b-part1', 'scsi-SQEMU_QEMU_HARDDISK_7f0c6f27-797f-46da-82fd-067f06c1f72b-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})
2026-02-08 06:03:23.480347 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-08 06:03:23.480444 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-08 06:03:23.480461 | orchestrator | skipping: [testbed-node-2]
2026-02-08 06:03:23.480475 | orchestrator |
2026-02-08 06:03:23.480488 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] ***
2026-02-08 06:03:23.480501 | orchestrator | Sunday 08 February 2026 06:03:23 +0000 (0:00:00.286) 0:12:21.229 *******
2026-02-08 06:03:23.480528 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-08 06:03:23.480566 |
orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-08 06:03:23.480579 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-08 06:03:23.480591 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-08-02-32-47-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 
82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-08 06:03:23.480621 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-08 06:03:23.480634 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-08 06:03:23.480645 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 
'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-08 06:03:23.480673 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7f0c6f27-797f-46da-82fd-067f06c1f72b', 'scsi-SQEMU_QEMU_HARDDISK_7f0c6f27-797f-46da-82fd-067f06c1f72b'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '7f0c6f27', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7f0c6f27-797f-46da-82fd-067f06c1f72b-part16', 'scsi-SQEMU_QEMU_HARDDISK_7f0c6f27-797f-46da-82fd-067f06c1f72b-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7f0c6f27-797f-46da-82fd-067f06c1f72b-part14', 'scsi-SQEMU_QEMU_HARDDISK_7f0c6f27-797f-46da-82fd-067f06c1f72b-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7f0c6f27-797f-46da-82fd-067f06c1f72b-part15', 'scsi-SQEMU_QEMU_HARDDISK_7f0c6f27-797f-46da-82fd-067f06c1f72b-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7f0c6f27-797f-46da-82fd-067f06c1f72b-part1', 'scsi-SQEMU_QEMU_HARDDISK_7f0c6f27-797f-46da-82fd-067f06c1f72b-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 
'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-08 06:03:23.480740 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-08 06:03:34.662998 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-08 06:03:34.663112 | orchestrator | skipping: [testbed-node-2] 2026-02-08 06:03:34.663129 | orchestrator | 2026-02-08 06:03:34.663142 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-02-08 06:03:34.663153 | 
orchestrator | Sunday 08 February 2026 06:03:23 +0000 (0:00:00.281) 0:12:21.511 *******
2026-02-08 06:03:34.663163 | orchestrator | ok: [testbed-node-2]
2026-02-08 06:03:34.663197 | orchestrator |
2026-02-08 06:03:34.663208 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] ***************
2026-02-08 06:03:34.663218 | orchestrator | Sunday 08 February 2026 06:03:23 +0000 (0:00:00.446) 0:12:21.957 *******
2026-02-08 06:03:34.663228 | orchestrator | ok: [testbed-node-2]
2026-02-08 06:03:34.663237 | orchestrator |
2026-02-08 06:03:34.663247 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-02-08 06:03:34.663257 | orchestrator | Sunday 08 February 2026 06:03:24 +0000 (0:00:00.643) 0:12:22.601 *******
2026-02-08 06:03:34.663267 | orchestrator | ok: [testbed-node-2]
2026-02-08 06:03:34.663276 | orchestrator |
2026-02-08 06:03:34.663299 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-02-08 06:03:34.663309 | orchestrator | Sunday 08 February 2026 06:03:25 +0000 (0:00:00.471) 0:12:23.072 *******
2026-02-08 06:03:34.663319 | orchestrator | skipping: [testbed-node-2]
2026-02-08 06:03:34.663328 | orchestrator |
2026-02-08 06:03:34.663338 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-02-08 06:03:34.663348 | orchestrator | Sunday 08 February 2026 06:03:25 +0000 (0:00:00.163) 0:12:23.235 *******
2026-02-08 06:03:34.663358 | orchestrator | skipping: [testbed-node-2]
2026-02-08 06:03:34.663367 | orchestrator |
2026-02-08 06:03:34.663377 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-02-08 06:03:34.663387 | orchestrator | Sunday 08 February 2026 06:03:25 +0000 (0:00:00.284) 0:12:23.519 *******
2026-02-08 06:03:34.663397 | orchestrator | skipping: [testbed-node-2]
2026-02-08 06:03:34.663406 | orchestrator |
2026-02-08 06:03:34.663416 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] *************************
2026-02-08 06:03:34.663426 | orchestrator | Sunday 08 February 2026 06:03:25 +0000 (0:00:00.185) 0:12:23.705 *******
2026-02-08 06:03:34.663435 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-0)
2026-02-08 06:03:34.663446 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-1)
2026-02-08 06:03:34.663455 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2)
2026-02-08 06:03:34.663465 | orchestrator |
2026-02-08 06:03:34.663475 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] *************************
2026-02-08 06:03:34.663485 | orchestrator | Sunday 08 February 2026 06:03:26 +0000 (0:00:00.720) 0:12:24.426 *******
2026-02-08 06:03:34.663494 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)
2026-02-08 06:03:34.663504 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)
2026-02-08 06:03:34.663514 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)
2026-02-08 06:03:34.663524 | orchestrator | skipping: [testbed-node-2]
2026-02-08 06:03:34.663533 | orchestrator |
2026-02-08 06:03:34.663543 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] ***********************
2026-02-08 06:03:34.663553 | orchestrator | Sunday 08 February 2026 06:03:26 +0000 (0:00:00.197) 0:12:24.624 *******
2026-02-08 06:03:34.663563 | orchestrator | skipping: [testbed-node-2]
2026-02-08 06:03:34.663573 | orchestrator |
2026-02-08 06:03:34.663583 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] **************************************
2026-02-08 06:03:34.663593 | orchestrator | Sunday 08 February 2026 06:03:26 +0000 (0:00:00.152) 0:12:24.776 *******
2026-02-08 06:03:34.663602 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-02-08 06:03:34.663633 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-08 06:03:34.663643 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2)
2026-02-08 06:03:34.663653 | orchestrator | ok: [testbed-node-2 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
2026-02-08 06:03:34.663662 | orchestrator | ok: [testbed-node-2 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-02-08 06:03:34.663672 | orchestrator | ok: [testbed-node-2 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-02-08 06:03:34.663682 | orchestrator | ok: [testbed-node-2 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-02-08 06:03:34.663691 | orchestrator |
2026-02-08 06:03:34.663737 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ********************************
2026-02-08 06:03:34.663747 | orchestrator | Sunday 08 February 2026 06:03:27 +0000 (0:00:01.145) 0:12:25.921 *******
2026-02-08 06:03:34.663757 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-02-08 06:03:34.663767 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-08 06:03:34.663789 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2)
2026-02-08 06:03:34.663799 | orchestrator | ok: [testbed-node-2 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
2026-02-08 06:03:34.663825 | orchestrator | ok: [testbed-node-2 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-02-08 06:03:34.663836 | orchestrator | ok: [testbed-node-2 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-02-08 06:03:34.663846 | orchestrator | ok: [testbed-node-2 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-02-08 06:03:34.663855 | orchestrator |
2026-02-08 06:03:34.663865 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-02-08 06:03:34.663875 | orchestrator | Sunday 08 February 2026 06:03:29 +0000 (0:00:01.663) 0:12:27.585 *******
2026-02-08 06:03:34.663885 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-2
2026-02-08 06:03:34.663895 | orchestrator |
2026-02-08 06:03:34.663905 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-02-08 06:03:34.663914 | orchestrator | Sunday 08 February 2026 06:03:29 +0000 (0:00:00.203) 0:12:27.789 *******
2026-02-08 06:03:34.663924 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-2
2026-02-08 06:03:34.663934 | orchestrator |
2026-02-08 06:03:34.663943 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-02-08 06:03:34.663953 | orchestrator | Sunday 08 February 2026 06:03:30 +0000 (0:00:00.514) 0:12:28.303 *******
2026-02-08 06:03:34.663963 | orchestrator | ok: [testbed-node-2]
2026-02-08 06:03:34.663973 | orchestrator |
2026-02-08 06:03:34.663982 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-02-08 06:03:34.663992 | orchestrator | Sunday 08 February 2026 06:03:30 +0000 (0:00:00.526) 0:12:28.829 *******
2026-02-08 06:03:34.664002 | orchestrator | skipping: [testbed-node-2]
2026-02-08 06:03:34.664011 | orchestrator |
2026-02-08 06:03:34.664021 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-02-08 06:03:34.664031 | orchestrator | Sunday 08 February 2026 06:03:30 +0000 (0:00:00.141) 0:12:28.971 *******
2026-02-08 06:03:34.664045 | orchestrator | skipping: [testbed-node-2]
2026-02-08 06:03:34.664055 | orchestrator |
2026-02-08 06:03:34.664065 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-02-08 06:03:34.664074 | orchestrator | Sunday 08 February 2026 06:03:31 +0000 (0:00:00.145) 0:12:29.117 *******
2026-02-08 06:03:34.664084 | orchestrator | skipping: [testbed-node-2]
2026-02-08 06:03:34.664094 | orchestrator |
2026-02-08 06:03:34.664103 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-02-08 06:03:34.664113 | orchestrator | Sunday 08 February 2026 06:03:31 +0000 (0:00:00.121) 0:12:29.238 *******
2026-02-08 06:03:34.664123 | orchestrator | ok: [testbed-node-2]
2026-02-08 06:03:34.664132 | orchestrator |
2026-02-08 06:03:34.664142 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-02-08 06:03:34.664151 | orchestrator | Sunday 08 February 2026 06:03:31 +0000 (0:00:00.562) 0:12:29.801 *******
2026-02-08 06:03:34.664161 | orchestrator | skipping: [testbed-node-2]
2026-02-08 06:03:34.664171 | orchestrator |
2026-02-08 06:03:34.664180 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-02-08 06:03:34.664190 | orchestrator | Sunday 08 February 2026 06:03:31 +0000 (0:00:00.144) 0:12:29.945 *******
2026-02-08 06:03:34.664199 | orchestrator | skipping: [testbed-node-2]
2026-02-08 06:03:34.664209 | orchestrator |
2026-02-08 06:03:34.664219 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-02-08 06:03:34.664235 | orchestrator | Sunday 08 February 2026 06:03:32 +0000 (0:00:00.151) 0:12:30.096 *******
2026-02-08 06:03:34.664244 | orchestrator | ok: [testbed-node-2]
2026-02-08 06:03:34.664254 | orchestrator |
2026-02-08 06:03:34.664264 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-02-08 06:03:34.664273 | orchestrator | Sunday 08 February 2026 06:03:32 +0000 (0:00:00.521) 0:12:30.618 *******
2026-02-08 06:03:34.664283 | orchestrator | ok: [testbed-node-2]
2026-02-08 06:03:34.664292 | orchestrator |
2026-02-08 06:03:34.664302 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-02-08 06:03:34.664312 | orchestrator | Sunday 08 February 2026 06:03:33 +0000 (0:00:00.577) 0:12:31.195 *******
2026-02-08 06:03:34.664321 | orchestrator | skipping: [testbed-node-2]
2026-02-08 06:03:34.664331 | orchestrator |
2026-02-08 06:03:34.664340 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-02-08 06:03:34.664350 | orchestrator | Sunday 08 February 2026 06:03:33 +0000 (0:00:00.149) 0:12:31.345 *******
2026-02-08 06:03:34.664359 | orchestrator | ok: [testbed-node-2]
2026-02-08 06:03:34.664369 | orchestrator |
2026-02-08 06:03:34.664378 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-02-08 06:03:34.664388 | orchestrator | Sunday 08 February 2026 06:03:33 +0000 (0:00:00.137) 0:12:31.483 *******
2026-02-08 06:03:34.664397 | orchestrator | skipping: [testbed-node-2]
2026-02-08 06:03:34.664407 | orchestrator |
2026-02-08 06:03:34.664417 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-02-08 06:03:34.664427 | orchestrator | Sunday 08 February 2026 06:03:33 +0000 (0:00:00.452) 0:12:31.935 *******
2026-02-08 06:03:34.664436 | orchestrator | skipping: [testbed-node-2]
2026-02-08 06:03:34.664446 | orchestrator |
2026-02-08 06:03:34.664455 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-02-08 06:03:34.664466 | orchestrator | Sunday 08 February 2026 06:03:34 +0000 (0:00:00.141) 0:12:32.077 *******
2026-02-08 06:03:34.664484 | orchestrator | skipping: [testbed-node-2]
2026-02-08 06:03:34.664500 | orchestrator |
2026-02-08 06:03:34.664517 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-02-08 06:03:34.664533 | orchestrator | Sunday 08 February 2026 06:03:34 +0000 (0:00:00.142) 0:12:32.219 *******
2026-02-08 06:03:34.664549 | orchestrator | skipping: [testbed-node-2]
2026-02-08 06:03:34.664582 | orchestrator |
2026-02-08 06:03:34.664600 | orchestrator | TASK
[ceph-handler : Set_fact handler_rbd_status] ******************************
2026-02-08 06:03:34.664615 | orchestrator | Sunday 08 February 2026 06:03:34 +0000 (0:00:00.150) 0:12:32.369 *******
2026-02-08 06:03:34.664630 | orchestrator | skipping: [testbed-node-2]
2026-02-08 06:03:34.664646 | orchestrator |
2026-02-08 06:03:34.664662 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-02-08 06:03:34.664678 | orchestrator | Sunday 08 February 2026 06:03:34 +0000 (0:00:00.150) 0:12:32.520 *******
2026-02-08 06:03:34.664729 | orchestrator | ok: [testbed-node-2]
2026-02-08 06:03:46.671042 | orchestrator |
2026-02-08 06:03:46.671126 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-02-08 06:03:46.671134 | orchestrator | Sunday 08 February 2026 06:03:34 +0000 (0:00:00.180) 0:12:32.701 *******
2026-02-08 06:03:46.671140 | orchestrator | ok: [testbed-node-2]
2026-02-08 06:03:46.671146 | orchestrator |
2026-02-08 06:03:46.671151 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-02-08 06:03:46.671156 | orchestrator | Sunday 08 February 2026 06:03:34 +0000 (0:00:00.156) 0:12:32.858 *******
2026-02-08 06:03:46.671161 | orchestrator | ok: [testbed-node-2]
2026-02-08 06:03:46.671166 | orchestrator |
2026-02-08 06:03:46.671171 | orchestrator | TASK [ceph-common : Include configure_repository.yml] **************************
2026-02-08 06:03:46.671175 | orchestrator | Sunday 08 February 2026 06:03:35 +0000 (0:00:00.226) 0:12:33.084 *******
2026-02-08 06:03:46.671180 | orchestrator | skipping: [testbed-node-2]
2026-02-08 06:03:46.671186 | orchestrator |
2026-02-08 06:03:46.671191 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] **************
2026-02-08 06:03:46.671212 | orchestrator | Sunday 08 February 2026 06:03:35 +0000 (0:00:00.138) 0:12:33.223 *******
2026-02-08 06:03:46.671217 | orchestrator | skipping: [testbed-node-2]
2026-02-08 06:03:46.671222 | orchestrator |
2026-02-08 06:03:46.671226 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] ****************
2026-02-08 06:03:46.671232 | orchestrator | Sunday 08 February 2026 06:03:35 +0000 (0:00:00.136) 0:12:33.359 *******
2026-02-08 06:03:46.671236 | orchestrator | skipping: [testbed-node-2]
2026-02-08 06:03:46.671241 | orchestrator |
2026-02-08 06:03:46.671246 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ********************
2026-02-08 06:03:46.671250 | orchestrator | Sunday 08 February 2026 06:03:35 +0000 (0:00:00.125) 0:12:33.485 *******
2026-02-08 06:03:46.671255 | orchestrator | skipping: [testbed-node-2]
2026-02-08 06:03:46.671260 | orchestrator |
2026-02-08 06:03:46.671264 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] ***************
2026-02-08 06:03:46.671279 | orchestrator | Sunday 08 February 2026 06:03:35 +0000 (0:00:00.143) 0:12:33.629 *******
2026-02-08 06:03:46.671284 | orchestrator | skipping: [testbed-node-2]
2026-02-08 06:03:46.671288 | orchestrator |
2026-02-08 06:03:46.671293 | orchestrator | TASK [ceph-common : Get ceph version] ******************************************
2026-02-08 06:03:46.671298 | orchestrator | Sunday 08 February 2026 06:03:35 +0000 (0:00:00.129) 0:12:33.758 *******
2026-02-08 06:03:46.671302 | orchestrator | skipping: [testbed-node-2]
2026-02-08 06:03:46.671307 | orchestrator |
2026-02-08 06:03:46.671311 | orchestrator | TASK [ceph-common : Set_fact ceph_version] *************************************
2026-02-08 06:03:46.671316 | orchestrator | Sunday 08 February 2026 06:03:36 +0000 (0:00:00.458) 0:12:34.217 *******
2026-02-08 06:03:46.671321 | orchestrator | skipping: [testbed-node-2]
2026-02-08 06:03:46.671325 | orchestrator |
2026-02-08 06:03:46.671330 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] ***
2026-02-08 06:03:46.671336 | orchestrator | Sunday 08 February 2026 06:03:36 +0000 (0:00:00.170) 0:12:34.388 *******
2026-02-08 06:03:46.671340 | orchestrator | skipping: [testbed-node-2]
2026-02-08 06:03:46.671345 | orchestrator |
2026-02-08 06:03:46.671349 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] *************************
2026-02-08 06:03:46.671354 | orchestrator | Sunday 08 February 2026 06:03:36 +0000 (0:00:00.130) 0:12:34.518 *******
2026-02-08 06:03:46.671359 | orchestrator | skipping: [testbed-node-2]
2026-02-08 06:03:46.671363 | orchestrator |
2026-02-08 06:03:46.671368 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************
2026-02-08 06:03:46.671373 | orchestrator | Sunday 08 February 2026 06:03:36 +0000 (0:00:00.132) 0:12:34.651 *******
2026-02-08 06:03:46.671377 | orchestrator | skipping: [testbed-node-2]
2026-02-08 06:03:46.671382 | orchestrator |
2026-02-08 06:03:46.671386 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ********************
2026-02-08 06:03:46.671391 | orchestrator | Sunday 08 February 2026 06:03:36 +0000 (0:00:00.124) 0:12:34.775 *******
2026-02-08 06:03:46.671396 | orchestrator | skipping: [testbed-node-2]
2026-02-08 06:03:46.671400 | orchestrator |
2026-02-08 06:03:46.671405 | orchestrator | TASK [ceph-common : Include selinux.yml] ***************************************
2026-02-08 06:03:46.671410 | orchestrator | Sunday 08 February 2026 06:03:36 +0000 (0:00:00.138) 0:12:34.914 *******
2026-02-08 06:03:46.671414 | orchestrator | skipping: [testbed-node-2]
2026-02-08 06:03:46.671419 | orchestrator |
2026-02-08 06:03:46.671424 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] ***************
2026-02-08 06:03:46.671428 | orchestrator | Sunday 08 February 2026 06:03:37 +0000 (0:00:00.256) 0:12:35.170 *******
2026-02-08 06:03:46.671433 | orchestrator | ok: [testbed-node-2]
2026-02-08 06:03:46.671437 | orchestrator |
2026-02-08 06:03:46.671442 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ******************************
2026-02-08 06:03:46.671447 | orchestrator | Sunday 08 February 2026 06:03:38 +0000 (0:00:00.928) 0:12:36.099 *******
2026-02-08 06:03:46.671451 | orchestrator | ok: [testbed-node-2]
2026-02-08 06:03:46.671456 | orchestrator |
2026-02-08 06:03:46.671461 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] ***********************
2026-02-08 06:03:46.671469 | orchestrator | Sunday 08 February 2026 06:03:39 +0000 (0:00:01.352) 0:12:37.452 *******
2026-02-08 06:03:46.671474 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-2
2026-02-08 06:03:46.671479 | orchestrator |
2026-02-08 06:03:46.671484 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************
2026-02-08 06:03:46.671489 | orchestrator | Sunday 08 February 2026 06:03:39 +0000 (0:00:00.206) 0:12:37.658 *******
2026-02-08 06:03:46.671494 | orchestrator | skipping: [testbed-node-2]
2026-02-08 06:03:46.671499 | orchestrator |
2026-02-08 06:03:46.671503 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] ****************
2026-02-08 06:03:46.671508 | orchestrator | Sunday 08 February 2026 06:03:39 +0000 (0:00:00.138) 0:12:37.796 *******
2026-02-08 06:03:46.671513 | orchestrator | skipping: [testbed-node-2]
2026-02-08 06:03:46.671517 | orchestrator |
2026-02-08 06:03:46.671522 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] **************************
2026-02-08 06:03:46.671527 | orchestrator | Sunday 08 February 2026 06:03:40 +0000 (0:00:00.461) 0:12:38.258 *******
2026-02-08 06:03:46.671542 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-02-08 06:03:46.671547 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-02-08 06:03:46.671552 | orchestrator |
2026-02-08 06:03:46.671556 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ********************
2026-02-08 06:03:46.671561 | orchestrator | Sunday 08 February 2026 06:03:41 +0000 (0:00:00.873) 0:12:39.132 *******
2026-02-08 06:03:46.671566 | orchestrator | ok: [testbed-node-2]
2026-02-08 06:03:46.671570 | orchestrator |
2026-02-08 06:03:46.671575 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************
2026-02-08 06:03:46.671580 | orchestrator | Sunday 08 February 2026 06:03:41 +0000 (0:00:00.445) 0:12:39.577 *******
2026-02-08 06:03:46.671584 | orchestrator | skipping: [testbed-node-2]
2026-02-08 06:03:46.671589 | orchestrator |
2026-02-08 06:03:46.671593 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ********************
2026-02-08 06:03:46.671598 | orchestrator | Sunday 08 February 2026 06:03:41 +0000 (0:00:00.158) 0:12:39.735 *******
2026-02-08 06:03:46.671603 | orchestrator | skipping: [testbed-node-2]
2026-02-08 06:03:46.671607 | orchestrator |
2026-02-08 06:03:46.671613 | orchestrator | TASK [ceph-container-common : Include registry.yml] ****************************
2026-02-08 06:03:46.671618 | orchestrator | Sunday 08 February 2026 06:03:41 +0000 (0:00:00.136) 0:12:39.872 *******
2026-02-08 06:03:46.671624 | orchestrator | skipping: [testbed-node-2]
2026-02-08 06:03:46.671630 | orchestrator |
2026-02-08 06:03:46.671636 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] *************************
2026-02-08 06:03:46.671642 | orchestrator | Sunday 08 February 2026 06:03:41 +0000 (0:00:00.142) 0:12:40.015 *******
2026-02-08 06:03:46.671650 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-2
2026-02-08 06:03:46.671658 | orchestrator |
2026-02-08 06:03:46.671670 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ********************
2026-02-08 06:03:46.671683 | orchestrator | Sunday 08 February 2026 06:03:42 +0000 (0:00:00.222) 0:12:40.238 *******
2026-02-08 06:03:46.671691 | orchestrator | ok: [testbed-node-2]
2026-02-08 06:03:46.671739 | orchestrator |
2026-02-08 06:03:46.671748 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] ***
2026-02-08 06:03:46.671755 | orchestrator | Sunday 08 February 2026 06:03:42 +0000 (0:00:00.724) 0:12:40.962 *******
2026-02-08 06:03:46.671763 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-02-08 06:03:46.671771 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/prometheus:v2.7.2)
2026-02-08 06:03:46.671778 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/grafana/grafana:6.7.4)
2026-02-08 06:03:46.671786 | orchestrator | skipping: [testbed-node-2]
2026-02-08 06:03:46.671794 | orchestrator |
2026-02-08 06:03:46.671803 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] ***********
2026-02-08 06:03:46.671815 | orchestrator | Sunday 08 February 2026 06:03:43 +0000 (0:00:00.154) 0:12:41.116 *******
2026-02-08 06:03:46.671821 | orchestrator | skipping: [testbed-node-2]
2026-02-08 06:03:46.671826 | orchestrator |
2026-02-08 06:03:46.671832 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] *********************
2026-02-08 06:03:46.671837 | orchestrator | Sunday 08 February 2026 06:03:43 +0000 (0:00:00.123) 0:12:41.240 *******
2026-02-08 06:03:46.671843 | orchestrator | skipping: [testbed-node-2]
2026-02-08 06:03:46.671848 | orchestrator |
2026-02-08 06:03:46.671854 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************
2026-02-08 06:03:46.671859 | orchestrator | Sunday 08 February 2026 06:03:43 +0000 (0:00:00.198) 0:12:41.439 *******
2026-02-08 06:03:46.671865
| orchestrator | skipping: [testbed-node-2] 2026-02-08 06:03:46.671870 | orchestrator | 2026-02-08 06:03:46.671876 | orchestrator | TASK [ceph-container-common : Load ceph dev image] ***************************** 2026-02-08 06:03:46.671882 | orchestrator | Sunday 08 February 2026 06:03:43 +0000 (0:00:00.147) 0:12:41.586 ******* 2026-02-08 06:03:46.671888 | orchestrator | skipping: [testbed-node-2] 2026-02-08 06:03:46.671893 | orchestrator | 2026-02-08 06:03:46.671898 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ****************** 2026-02-08 06:03:46.671904 | orchestrator | Sunday 08 February 2026 06:03:43 +0000 (0:00:00.452) 0:12:42.039 ******* 2026-02-08 06:03:46.671909 | orchestrator | skipping: [testbed-node-2] 2026-02-08 06:03:46.671915 | orchestrator | 2026-02-08 06:03:46.671920 | orchestrator | TASK [ceph-container-common : Get ceph version] ******************************** 2026-02-08 06:03:46.671926 | orchestrator | Sunday 08 February 2026 06:03:44 +0000 (0:00:00.170) 0:12:42.210 ******* 2026-02-08 06:03:46.671931 | orchestrator | ok: [testbed-node-2] 2026-02-08 06:03:46.671936 | orchestrator | 2026-02-08 06:03:46.671942 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] *** 2026-02-08 06:03:46.671948 | orchestrator | Sunday 08 February 2026 06:03:45 +0000 (0:00:01.604) 0:12:43.815 ******* 2026-02-08 06:03:46.671953 | orchestrator | ok: [testbed-node-2] 2026-02-08 06:03:46.671959 | orchestrator | 2026-02-08 06:03:46.671964 | orchestrator | TASK [ceph-container-common : Include release.yml] ***************************** 2026-02-08 06:03:46.671970 | orchestrator | Sunday 08 February 2026 06:03:45 +0000 (0:00:00.170) 0:12:43.985 ******* 2026-02-08 06:03:46.671975 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-2 2026-02-08 06:03:46.671979 | orchestrator | 2026-02-08 06:03:46.671984 | orchestrator | TASK [ceph-container-common : 
Set_fact ceph_release jewel] ********************* 2026-02-08 06:03:46.671988 | orchestrator | Sunday 08 February 2026 06:03:46 +0000 (0:00:00.235) 0:12:44.220 ******* 2026-02-08 06:03:46.671993 | orchestrator | skipping: [testbed-node-2] 2026-02-08 06:03:46.671997 | orchestrator | 2026-02-08 06:03:46.672002 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ******************** 2026-02-08 06:03:46.672007 | orchestrator | Sunday 08 February 2026 06:03:46 +0000 (0:00:00.162) 0:12:44.383 ******* 2026-02-08 06:03:46.672011 | orchestrator | skipping: [testbed-node-2] 2026-02-08 06:03:46.672016 | orchestrator | 2026-02-08 06:03:46.672020 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ****************** 2026-02-08 06:03:46.672025 | orchestrator | Sunday 08 February 2026 06:03:46 +0000 (0:00:00.165) 0:12:44.548 ******* 2026-02-08 06:03:46.672030 | orchestrator | skipping: [testbed-node-2] 2026-02-08 06:03:46.672034 | orchestrator | 2026-02-08 06:03:46.672044 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] ********************* 2026-02-08 06:03:59.437728 | orchestrator | Sunday 08 February 2026 06:03:46 +0000 (0:00:00.160) 0:12:44.708 ******* 2026-02-08 06:03:59.437862 | orchestrator | skipping: [testbed-node-2] 2026-02-08 06:03:59.437890 | orchestrator | 2026-02-08 06:03:59.437912 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ****************** 2026-02-08 06:03:59.437931 | orchestrator | Sunday 08 February 2026 06:03:46 +0000 (0:00:00.168) 0:12:44.877 ******* 2026-02-08 06:03:59.437947 | orchestrator | skipping: [testbed-node-2] 2026-02-08 06:03:59.437995 | orchestrator | 2026-02-08 06:03:59.438084 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] ******************* 2026-02-08 06:03:59.438108 | orchestrator | Sunday 08 February 2026 06:03:46 +0000 (0:00:00.170) 0:12:45.048 ******* 2026-02-08 06:03:59.438127 | orchestrator | 
skipping: [testbed-node-2] 2026-02-08 06:03:59.438146 | orchestrator | 2026-02-08 06:03:59.438164 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] ******************* 2026-02-08 06:03:59.438183 | orchestrator | Sunday 08 February 2026 06:03:47 +0000 (0:00:00.151) 0:12:45.200 ******* 2026-02-08 06:03:59.438200 | orchestrator | skipping: [testbed-node-2] 2026-02-08 06:03:59.438217 | orchestrator | 2026-02-08 06:03:59.438234 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ******************** 2026-02-08 06:03:59.438252 | orchestrator | Sunday 08 February 2026 06:03:47 +0000 (0:00:00.146) 0:12:45.346 ******* 2026-02-08 06:03:59.438270 | orchestrator | skipping: [testbed-node-2] 2026-02-08 06:03:59.438289 | orchestrator | 2026-02-08 06:03:59.438306 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] ********************** 2026-02-08 06:03:59.438323 | orchestrator | Sunday 08 February 2026 06:03:47 +0000 (0:00:00.456) 0:12:45.803 ******* 2026-02-08 06:03:59.438340 | orchestrator | ok: [testbed-node-2] 2026-02-08 06:03:59.438359 | orchestrator | 2026-02-08 06:03:59.438377 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] ********************** 2026-02-08 06:03:59.438411 | orchestrator | Sunday 08 February 2026 06:03:47 +0000 (0:00:00.237) 0:12:46.041 ******* 2026-02-08 06:03:59.438432 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-2 2026-02-08 06:03:59.438450 | orchestrator | 2026-02-08 06:03:59.438467 | orchestrator | TASK [ceph-config : Create ceph initial directories] *************************** 2026-02-08 06:03:59.438484 | orchestrator | Sunday 08 February 2026 06:03:48 +0000 (0:00:00.237) 0:12:46.278 ******* 2026-02-08 06:03:59.438500 | orchestrator | ok: [testbed-node-2] => (item=/etc/ceph) 2026-02-08 06:03:59.438517 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/) 2026-02-08 
06:03:59.438533 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/mon) 2026-02-08 06:03:59.438548 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/osd) 2026-02-08 06:03:59.438566 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/mds) 2026-02-08 06:03:59.438582 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/tmp) 2026-02-08 06:03:59.438600 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/crash) 2026-02-08 06:03:59.438618 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/radosgw) 2026-02-08 06:03:59.438637 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rgw) 2026-02-08 06:03:59.438656 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mgr) 2026-02-08 06:03:59.438675 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mds) 2026-02-08 06:03:59.438693 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-osd) 2026-02-08 06:03:59.438736 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd) 2026-02-08 06:03:59.438752 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-02-08 06:03:59.438768 | orchestrator | ok: [testbed-node-2] => (item=/var/run/ceph) 2026-02-08 06:03:59.438786 | orchestrator | ok: [testbed-node-2] => (item=/var/log/ceph) 2026-02-08 06:03:59.438805 | orchestrator | 2026-02-08 06:03:59.438823 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************ 2026-02-08 06:03:59.438840 | orchestrator | Sunday 08 February 2026 06:03:53 +0000 (0:00:05.737) 0:12:52.016 ******* 2026-02-08 06:03:59.438857 | orchestrator | skipping: [testbed-node-2] 2026-02-08 06:03:59.438875 | orchestrator | 2026-02-08 06:03:59.438891 | orchestrator | TASK [ceph-config : Reset num_osds] ******************************************** 2026-02-08 06:03:59.438909 | orchestrator | Sunday 08 February 2026 06:03:54 +0000 (0:00:00.146) 0:12:52.162 ******* 
2026-02-08 06:03:59.438927 | orchestrator | skipping: [testbed-node-2] 2026-02-08 06:03:59.438945 | orchestrator | 2026-02-08 06:03:59.438962 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] ********************* 2026-02-08 06:03:59.438994 | orchestrator | Sunday 08 February 2026 06:03:54 +0000 (0:00:00.148) 0:12:52.311 ******* 2026-02-08 06:03:59.439011 | orchestrator | skipping: [testbed-node-2] 2026-02-08 06:03:59.439029 | orchestrator | 2026-02-08 06:03:59.439048 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ****************** 2026-02-08 06:03:59.439063 | orchestrator | Sunday 08 February 2026 06:03:54 +0000 (0:00:00.127) 0:12:52.439 ******* 2026-02-08 06:03:59.439078 | orchestrator | skipping: [testbed-node-2] 2026-02-08 06:03:59.439092 | orchestrator | 2026-02-08 06:03:59.439105 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] ********************************* 2026-02-08 06:03:59.439118 | orchestrator | Sunday 08 February 2026 06:03:54 +0000 (0:00:00.136) 0:12:52.576 ******* 2026-02-08 06:03:59.439132 | orchestrator | skipping: [testbed-node-2] 2026-02-08 06:03:59.439147 | orchestrator | 2026-02-08 06:03:59.439163 | orchestrator | TASK [ceph-config : Set_fact _devices] ***************************************** 2026-02-08 06:03:59.439178 | orchestrator | Sunday 08 February 2026 06:03:54 +0000 (0:00:00.128) 0:12:52.704 ******* 2026-02-08 06:03:59.439193 | orchestrator | skipping: [testbed-node-2] 2026-02-08 06:03:59.439207 | orchestrator | 2026-02-08 06:03:59.439220 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2026-02-08 06:03:59.439234 | orchestrator | Sunday 08 February 2026 06:03:54 +0000 (0:00:00.139) 0:12:52.844 ******* 2026-02-08 06:03:59.439248 | orchestrator | skipping: [testbed-node-2] 2026-02-08 06:03:59.439262 | orchestrator | 2026-02-08 06:03:59.439299 | orchestrator | TASK [ceph-config : Set_fact 
num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2026-02-08 06:03:59.439315 | orchestrator | Sunday 08 February 2026 06:03:54 +0000 (0:00:00.127) 0:12:52.972 ******* 2026-02-08 06:03:59.439328 | orchestrator | skipping: [testbed-node-2] 2026-02-08 06:03:59.439343 | orchestrator | 2026-02-08 06:03:59.439438 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2026-02-08 06:03:59.439455 | orchestrator | Sunday 08 February 2026 06:03:55 +0000 (0:00:00.465) 0:12:53.437 ******* 2026-02-08 06:03:59.439469 | orchestrator | skipping: [testbed-node-2] 2026-02-08 06:03:59.439483 | orchestrator | 2026-02-08 06:03:59.439498 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] *** 2026-02-08 06:03:59.439512 | orchestrator | Sunday 08 February 2026 06:03:55 +0000 (0:00:00.126) 0:12:53.564 ******* 2026-02-08 06:03:59.439526 | orchestrator | skipping: [testbed-node-2] 2026-02-08 06:03:59.439539 | orchestrator | 2026-02-08 06:03:59.439553 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] ********************* 2026-02-08 06:03:59.439567 | orchestrator | Sunday 08 February 2026 06:03:55 +0000 (0:00:00.163) 0:12:53.727 ******* 2026-02-08 06:03:59.439581 | orchestrator | skipping: [testbed-node-2] 2026-02-08 06:03:59.439596 | orchestrator | 2026-02-08 06:03:59.439610 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] ******************************* 2026-02-08 06:03:59.439624 | orchestrator | Sunday 08 February 2026 06:03:55 +0000 (0:00:00.158) 0:12:53.886 ******* 2026-02-08 06:03:59.439637 | orchestrator | skipping: [testbed-node-2] 2026-02-08 06:03:59.439651 | orchestrator | 2026-02-08 06:03:59.439664 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] ************** 2026-02-08 06:03:59.439677 | orchestrator | Sunday 08 February 2026 06:03:55 +0000 
(0:00:00.143) 0:12:54.030 ******* 2026-02-08 06:03:59.439691 | orchestrator | skipping: [testbed-node-2] 2026-02-08 06:03:59.439724 | orchestrator | 2026-02-08 06:03:59.439748 | orchestrator | TASK [ceph-config : Render rgw configs] **************************************** 2026-02-08 06:03:59.439762 | orchestrator | Sunday 08 February 2026 06:03:56 +0000 (0:00:00.243) 0:12:54.273 ******* 2026-02-08 06:03:59.439775 | orchestrator | skipping: [testbed-node-2] 2026-02-08 06:03:59.439790 | orchestrator | 2026-02-08 06:03:59.439803 | orchestrator | TASK [ceph-config : Set config to cluster] ************************************* 2026-02-08 06:03:59.439817 | orchestrator | Sunday 08 February 2026 06:03:56 +0000 (0:00:00.169) 0:12:54.443 ******* 2026-02-08 06:03:59.439830 | orchestrator | skipping: [testbed-node-2] 2026-02-08 06:03:59.439855 | orchestrator | 2026-02-08 06:03:59.439868 | orchestrator | TASK [ceph-config : Set rgw configs to file] *********************************** 2026-02-08 06:03:59.439882 | orchestrator | Sunday 08 February 2026 06:03:56 +0000 (0:00:00.236) 0:12:54.680 ******* 2026-02-08 06:03:59.439896 | orchestrator | skipping: [testbed-node-2] 2026-02-08 06:03:59.439912 | orchestrator | 2026-02-08 06:03:59.439924 | orchestrator | TASK [ceph-config : Create ceph conf directory] ******************************** 2026-02-08 06:03:59.439938 | orchestrator | Sunday 08 February 2026 06:03:56 +0000 (0:00:00.129) 0:12:54.809 ******* 2026-02-08 06:03:59.439952 | orchestrator | skipping: [testbed-node-2] 2026-02-08 06:03:59.439966 | orchestrator | 2026-02-08 06:03:59.439981 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-02-08 06:03:59.439996 | orchestrator | Sunday 08 February 2026 06:03:56 +0000 (0:00:00.128) 0:12:54.937 ******* 2026-02-08 06:03:59.440010 | orchestrator | skipping: [testbed-node-2] 2026-02-08 06:03:59.440023 | orchestrator | 
2026-02-08 06:03:59.440036 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-02-08 06:03:59.440070 | orchestrator | Sunday 08 February 2026 06:03:57 +0000 (0:00:00.141) 0:12:55.079 ******* 2026-02-08 06:03:59.440086 | orchestrator | skipping: [testbed-node-2] 2026-02-08 06:03:59.440099 | orchestrator | 2026-02-08 06:03:59.440112 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-02-08 06:03:59.440124 | orchestrator | Sunday 08 February 2026 06:03:57 +0000 (0:00:00.138) 0:12:55.218 ******* 2026-02-08 06:03:59.440138 | orchestrator | skipping: [testbed-node-2] 2026-02-08 06:03:59.440151 | orchestrator | 2026-02-08 06:03:59.440164 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-02-08 06:03:59.440176 | orchestrator | Sunday 08 February 2026 06:03:57 +0000 (0:00:00.152) 0:12:55.370 ******* 2026-02-08 06:03:59.440189 | orchestrator | skipping: [testbed-node-2] 2026-02-08 06:03:59.440201 | orchestrator | 2026-02-08 06:03:59.440214 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-02-08 06:03:59.440227 | orchestrator | Sunday 08 February 2026 06:03:57 +0000 (0:00:00.137) 0:12:55.508 ******* 2026-02-08 06:03:59.440240 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)  2026-02-08 06:03:59.440252 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)  2026-02-08 06:03:59.440265 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)  2026-02-08 06:03:59.440278 | orchestrator | skipping: [testbed-node-2] 2026-02-08 06:03:59.440291 | orchestrator | 2026-02-08 06:03:59.440303 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-02-08 06:03:59.440316 | orchestrator | Sunday 08 February 2026 06:03:58 +0000 (0:00:01.086) 0:12:56.595 ******* 2026-02-08 06:03:59.440330 | 
orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)  2026-02-08 06:03:59.440343 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)  2026-02-08 06:03:59.440357 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)  2026-02-08 06:03:59.440372 | orchestrator | skipping: [testbed-node-2] 2026-02-08 06:03:59.440386 | orchestrator | 2026-02-08 06:03:59.440400 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-02-08 06:03:59.440434 | orchestrator | Sunday 08 February 2026 06:03:58 +0000 (0:00:00.426) 0:12:57.021 ******* 2026-02-08 06:03:59.440448 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)  2026-02-08 06:03:59.440461 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)  2026-02-08 06:03:59.440475 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)  2026-02-08 06:03:59.440488 | orchestrator | skipping: [testbed-node-2] 2026-02-08 06:03:59.440502 | orchestrator | 2026-02-08 06:03:59.440533 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-02-08 06:04:34.120417 | orchestrator | Sunday 08 February 2026 06:03:59 +0000 (0:00:00.453) 0:12:57.474 ******* 2026-02-08 06:04:34.120509 | orchestrator | skipping: [testbed-node-2] 2026-02-08 06:04:34.120541 | orchestrator | 2026-02-08 06:04:34.120553 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-02-08 06:04:34.120564 | orchestrator | Sunday 08 February 2026 06:03:59 +0000 (0:00:00.128) 0:12:57.603 ******* 2026-02-08 06:04:34.120574 | orchestrator | skipping: [testbed-node-2] => (item=0)  2026-02-08 06:04:34.120584 | orchestrator | skipping: [testbed-node-2] 2026-02-08 06:04:34.120594 | orchestrator | 2026-02-08 06:04:34.120605 | orchestrator | TASK [ceph-config : Generate Ceph file] **************************************** 2026-02-08 06:04:34.120615 | orchestrator | Sunday 
08 February 2026 06:03:59 +0000 (0:00:00.372) 0:12:57.976 ******* 2026-02-08 06:04:34.120624 | orchestrator | ok: [testbed-node-2] 2026-02-08 06:04:34.120635 | orchestrator | 2026-02-08 06:04:34.120645 | orchestrator | TASK [ceph-mgr : Set_fact container_exec_cmd] ********************************** 2026-02-08 06:04:34.120655 | orchestrator | Sunday 08 February 2026 06:04:00 +0000 (0:00:00.867) 0:12:58.843 ******* 2026-02-08 06:04:34.120665 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-08 06:04:34.120676 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-08 06:04:34.120686 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2) 2026-02-08 06:04:34.120696 | orchestrator | 2026-02-08 06:04:34.120706 | orchestrator | TASK [ceph-mgr : Include common.yml] ******************************************* 2026-02-08 06:04:34.120782 | orchestrator | Sunday 08 February 2026 06:04:01 +0000 (0:00:00.966) 0:12:59.809 ******* 2026-02-08 06:04:34.120793 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/common.yml for testbed-node-2 2026-02-08 06:04:34.120803 | orchestrator | 2026-02-08 06:04:34.120821 | orchestrator | TASK [ceph-mgr : Create mgr directory] ***************************************** 2026-02-08 06:04:34.120832 | orchestrator | Sunday 08 February 2026 06:04:01 +0000 (0:00:00.230) 0:13:00.040 ******* 2026-02-08 06:04:34.120842 | orchestrator | ok: [testbed-node-2] 2026-02-08 06:04:34.120851 | orchestrator | 2026-02-08 06:04:34.120861 | orchestrator | TASK [ceph-mgr : Fetch ceph mgr keyring] *************************************** 2026-02-08 06:04:34.120871 | orchestrator | Sunday 08 February 2026 06:04:02 +0000 (0:00:00.509) 0:13:00.549 ******* 2026-02-08 06:04:34.120880 | orchestrator | skipping: [testbed-node-2] 2026-02-08 06:04:34.120890 | orchestrator | 2026-02-08 06:04:34.120900 | orchestrator | TASK [ceph-mgr : Create ceph mgr keyring(s) on a 
mon node] ********************* 2026-02-08 06:04:34.120910 | orchestrator | Sunday 08 February 2026 06:04:02 +0000 (0:00:00.138) 0:13:00.688 ******* 2026-02-08 06:04:34.120919 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-08 06:04:34.120929 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-08 06:04:34.120939 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-08 06:04:34.120949 | orchestrator | ok: [testbed-node-2 -> {{ groups[mon_group_name][0] }}] 2026-02-08 06:04:34.120959 | orchestrator | 2026-02-08 06:04:34.120969 | orchestrator | TASK [ceph-mgr : Set_fact _mgr_keys] ******************************************* 2026-02-08 06:04:34.120980 | orchestrator | Sunday 08 February 2026 06:04:09 +0000 (0:00:07.013) 0:13:07.701 ******* 2026-02-08 06:04:34.120992 | orchestrator | ok: [testbed-node-2] 2026-02-08 06:04:34.121003 | orchestrator | 2026-02-08 06:04:34.121015 | orchestrator | TASK [ceph-mgr : Get keys from monitors] *************************************** 2026-02-08 06:04:34.121027 | orchestrator | Sunday 08 February 2026 06:04:09 +0000 (0:00:00.178) 0:13:07.880 ******* 2026-02-08 06:04:34.121039 | orchestrator | skipping: [testbed-node-2] => (item=None)  2026-02-08 06:04:34.121052 | orchestrator | ok: [testbed-node-2] => (item=None) 2026-02-08 06:04:34.121064 | orchestrator | 2026-02-08 06:04:34.121076 | orchestrator | TASK [ceph-mgr : Copy ceph key(s) if needed] *********************************** 2026-02-08 06:04:34.121089 | orchestrator | Sunday 08 February 2026 06:04:12 +0000 (0:00:02.311) 0:13:10.192 ******* 2026-02-08 06:04:34.121100 | orchestrator | skipping: [testbed-node-2] => (item=None)  2026-02-08 06:04:34.121112 | orchestrator | ok: [testbed-node-2] => (item=None) 2026-02-08 06:04:34.121123 | orchestrator | 2026-02-08 06:04:34.121143 | orchestrator | TASK [ceph-mgr : Set mgr key permissions] 
************************************** 2026-02-08 06:04:34.121154 | orchestrator | Sunday 08 February 2026 06:04:13 +0000 (0:00:01.021) 0:13:11.214 ******* 2026-02-08 06:04:34.121166 | orchestrator | ok: [testbed-node-2] 2026-02-08 06:04:34.121178 | orchestrator | 2026-02-08 06:04:34.121189 | orchestrator | TASK [ceph-mgr : Append dashboard modules to ceph_mgr_modules] ***************** 2026-02-08 06:04:34.121200 | orchestrator | Sunday 08 February 2026 06:04:13 +0000 (0:00:00.497) 0:13:11.711 ******* 2026-02-08 06:04:34.121212 | orchestrator | skipping: [testbed-node-2] 2026-02-08 06:04:34.121223 | orchestrator | 2026-02-08 06:04:34.121234 | orchestrator | TASK [ceph-mgr : Include pre_requisite.yml] ************************************ 2026-02-08 06:04:34.121245 | orchestrator | Sunday 08 February 2026 06:04:13 +0000 (0:00:00.137) 0:13:11.848 ******* 2026-02-08 06:04:34.121254 | orchestrator | skipping: [testbed-node-2] 2026-02-08 06:04:34.121263 | orchestrator | 2026-02-08 06:04:34.121273 | orchestrator | TASK [ceph-mgr : Include start_mgr.yml] **************************************** 2026-02-08 06:04:34.121282 | orchestrator | Sunday 08 February 2026 06:04:13 +0000 (0:00:00.124) 0:13:11.973 ******* 2026-02-08 06:04:34.121291 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/start_mgr.yml for testbed-node-2 2026-02-08 06:04:34.121300 | orchestrator | 2026-02-08 06:04:34.121311 | orchestrator | TASK [ceph-mgr : Ensure systemd service override directory exists] ************* 2026-02-08 06:04:34.121320 | orchestrator | Sunday 08 February 2026 06:04:14 +0000 (0:00:00.214) 0:13:12.187 ******* 2026-02-08 06:04:34.121330 | orchestrator | skipping: [testbed-node-2] 2026-02-08 06:04:34.121339 | orchestrator | 2026-02-08 06:04:34.121347 | orchestrator | TASK [ceph-mgr : Add ceph-mgr systemd service overrides] *********************** 2026-02-08 06:04:34.121355 | orchestrator | Sunday 08 February 2026 06:04:14 +0000 (0:00:00.164) 0:13:12.352 ******* 2026-02-08 
06:04:34.121363 | orchestrator | skipping: [testbed-node-2] 2026-02-08 06:04:34.121371 | orchestrator | 2026-02-08 06:04:34.121393 | orchestrator | TASK [ceph-mgr : Include_tasks systemd.yml] ************************************ 2026-02-08 06:04:34.121401 | orchestrator | Sunday 08 February 2026 06:04:14 +0000 (0:00:00.150) 0:13:12.503 ******* 2026-02-08 06:04:34.121409 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/systemd.yml for testbed-node-2 2026-02-08 06:04:34.121417 | orchestrator | 2026-02-08 06:04:34.121426 | orchestrator | TASK [ceph-mgr : Generate systemd unit file] *********************************** 2026-02-08 06:04:34.121433 | orchestrator | Sunday 08 February 2026 06:04:14 +0000 (0:00:00.199) 0:13:12.702 ******* 2026-02-08 06:04:34.121441 | orchestrator | ok: [testbed-node-2] 2026-02-08 06:04:34.121449 | orchestrator | 2026-02-08 06:04:34.121458 | orchestrator | TASK [ceph-mgr : Generate systemd ceph-mgr target file] ************************ 2026-02-08 06:04:34.121465 | orchestrator | Sunday 08 February 2026 06:04:15 +0000 (0:00:01.332) 0:13:14.035 ******* 2026-02-08 06:04:34.121473 | orchestrator | ok: [testbed-node-2] 2026-02-08 06:04:34.121481 | orchestrator | 2026-02-08 06:04:34.121489 | orchestrator | TASK [ceph-mgr : Enable ceph-mgr.target] *************************************** 2026-02-08 06:04:34.121497 | orchestrator | Sunday 08 February 2026 06:04:16 +0000 (0:00:00.939) 0:13:14.975 ******* 2026-02-08 06:04:34.121505 | orchestrator | ok: [testbed-node-2] 2026-02-08 06:04:34.121513 | orchestrator | 2026-02-08 06:04:34.121521 | orchestrator | TASK [ceph-mgr : Systemd start mgr] ******************************************** 2026-02-08 06:04:34.121529 | orchestrator | Sunday 08 February 2026 06:04:18 +0000 (0:00:01.426) 0:13:16.401 ******* 2026-02-08 06:04:34.121537 | orchestrator | changed: [testbed-node-2] 2026-02-08 06:04:34.121545 | orchestrator | 2026-02-08 06:04:34.121553 | orchestrator | TASK [ceph-mgr : Include mgr_modules.yml] 
************************************** 2026-02-08 06:04:34.121561 | orchestrator | Sunday 08 February 2026 06:04:21 +0000 (0:00:02.967) 0:13:19.368 ******* 2026-02-08 06:04:34.121569 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/mgr_modules.yml for testbed-node-2 2026-02-08 06:04:34.121577 | orchestrator | 2026-02-08 06:04:34.121589 | orchestrator | TASK [ceph-mgr : Wait for all mgr to be up] ************************************ 2026-02-08 06:04:34.121597 | orchestrator | Sunday 08 February 2026 06:04:21 +0000 (0:00:00.624) 0:13:19.993 ******* 2026-02-08 06:04:34.121610 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] 2026-02-08 06:04:34.121618 | orchestrator | 2026-02-08 06:04:34.121626 | orchestrator | TASK [ceph-mgr : Get enabled modules from ceph-mgr] **************************** 2026-02-08 06:04:34.121634 | orchestrator | Sunday 08 February 2026 06:04:23 +0000 (0:00:01.470) 0:13:21.464 ******* 2026-02-08 06:04:34.121643 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] 2026-02-08 06:04:34.121651 | orchestrator | 2026-02-08 06:04:34.121658 | orchestrator | TASK [ceph-mgr : Set _ceph_mgr_modules fact (convert _ceph_mgr_modules.stdout to a dict)] *** 2026-02-08 06:04:34.121666 | orchestrator | Sunday 08 February 2026 06:04:24 +0000 (0:00:01.428) 0:13:22.893 ******* 2026-02-08 06:04:34.121674 | orchestrator | ok: [testbed-node-2] 2026-02-08 06:04:34.121682 | orchestrator | 2026-02-08 06:04:34.121690 | orchestrator | TASK [ceph-mgr : Set _disabled_ceph_mgr_modules fact] ************************** 2026-02-08 06:04:34.121698 | orchestrator | Sunday 08 February 2026 06:04:25 +0000 (0:00:00.296) 0:13:23.189 ******* 2026-02-08 06:04:34.121706 | orchestrator | ok: [testbed-node-2] 2026-02-08 06:04:34.121727 | orchestrator | 2026-02-08 06:04:34.121736 | orchestrator | TASK [ceph-mgr : Disable ceph mgr enabled modules] ***************************** 2026-02-08 06:04:34.121744 | orchestrator | Sunday 08 February 2026 
06:04:25 +0000 (0:00:00.164) 0:13:23.353 ******* 2026-02-08 06:04:34.121752 | orchestrator | skipping: [testbed-node-2] => (item=dashboard)  2026-02-08 06:04:34.121761 | orchestrator | skipping: [testbed-node-2] => (item=prometheus)  2026-02-08 06:04:34.121769 | orchestrator | skipping: [testbed-node-2] 2026-02-08 06:04:34.121777 | orchestrator | 2026-02-08 06:04:34.121785 | orchestrator | TASK [ceph-mgr : Add modules to ceph-mgr] ************************************** 2026-02-08 06:04:34.121793 | orchestrator | Sunday 08 February 2026 06:04:25 +0000 (0:00:00.346) 0:13:23.700 ******* 2026-02-08 06:04:34.121801 | orchestrator | skipping: [testbed-node-2] => (item=balancer)  2026-02-08 06:04:34.121809 | orchestrator | skipping: [testbed-node-2] => (item=dashboard)  2026-02-08 06:04:34.121817 | orchestrator | skipping: [testbed-node-2] => (item=prometheus)  2026-02-08 06:04:34.121843 | orchestrator | skipping: [testbed-node-2] => (item=status)  2026-02-08 06:04:34.121856 | orchestrator | skipping: [testbed-node-2] 2026-02-08 06:04:34.121887 | orchestrator | 2026-02-08 06:04:34.121902 | orchestrator | PLAY [Set osd flags] *********************************************************** 2026-02-08 06:04:34.121914 | orchestrator | 2026-02-08 06:04:34.121927 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-02-08 06:04:34.121939 | orchestrator | Sunday 08 February 2026 06:04:27 +0000 (0:00:01.832) 0:13:25.532 ******* 2026-02-08 06:04:34.121952 | orchestrator | ok: [testbed-node-3] 2026-02-08 06:04:34.121964 | orchestrator | ok: [testbed-node-4] 2026-02-08 06:04:34.121976 | orchestrator | ok: [testbed-node-5] 2026-02-08 06:04:34.121989 | orchestrator | 2026-02-08 06:04:34.122002 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-02-08 06:04:34.122052 | orchestrator | Sunday 08 February 2026 06:04:28 +0000 (0:00:00.645) 0:13:26.177 ******* 2026-02-08 06:04:34.122070 | 
orchestrator | ok: [testbed-node-3] 2026-02-08 06:04:34.122082 | orchestrator | ok: [testbed-node-4] 2026-02-08 06:04:34.122093 | orchestrator | ok: [testbed-node-5] 2026-02-08 06:04:34.122107 | orchestrator | 2026-02-08 06:04:34.122120 | orchestrator | TASK [Get pool list] *********************************************************** 2026-02-08 06:04:34.122133 | orchestrator | Sunday 08 February 2026 06:04:28 +0000 (0:00:00.595) 0:13:26.773 ******* 2026-02-08 06:04:34.122146 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-02-08 06:04:34.122160 | orchestrator | 2026-02-08 06:04:34.122174 | orchestrator | TASK [Get balancer module status] ********************************************** 2026-02-08 06:04:34.122189 | orchestrator | Sunday 08 February 2026 06:04:30 +0000 (0:00:02.080) 0:13:28.854 ******* 2026-02-08 06:04:34.122198 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-02-08 06:04:34.122206 | orchestrator | 2026-02-08 06:04:34.122217 | orchestrator | TASK [Set_fact pools_pgautoscaler_mode] **************************************** 2026-02-08 06:04:34.122240 | orchestrator | Sunday 08 February 2026 06:04:33 +0000 (0:00:02.724) 0:13:31.578 ******* 2026-02-08 06:04:34.122278 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'pool_id': 1, 'pool_name': '.mgr', 'create_time': '2026-02-08T03:52:25.021305+0000', 'flags': 1, 'flags_names': 'hashpspool', 'type': 1, 'size': 2, 'min_size': 1, 'crush_rule': 0, 'peering_crush_bucket_count': 0, 'peering_crush_bucket_target': 0, 'peering_crush_bucket_barrier': 0, 'peering_crush_bucket_mandatory_member': 2147483647, 'object_hash': 2, 'pg_autoscale_mode': 'on', 'pg_num': 1, 'pg_placement_num': 1, 'pg_placement_num_target': 1, 'pg_num_target': 1, 'pg_num_pending': 1, 'last_pg_merge_meta': {'source_pgid': '0.0', 'ready_epoch': 0, 'last_epoch_started': 0, 'last_epoch_clean': 0, 'source_version': "0'0", 'target_version': "0'0"}, 'last_change': '20', 
'last_force_op_resend': '0', 'last_force_op_resend_prenautilus': '0', 'last_force_op_resend_preluminous': '0', 'auid': 0, 'snap_mode': 'selfmanaged', 'snap_seq': 0, 'snap_epoch': 0, 'pool_snaps': [], 'removed_snaps': '[]', 'quota_max_bytes': 0, 'quota_max_objects': 0, 'tiers': [], 'tier_of': -1, 'read_tier': -1, 'write_tier': -1, 'cache_mode': 'none', 'target_max_bytes': 0, 'target_max_objects': 0, 'cache_target_dirty_ratio_micro': 400000, 'cache_target_dirty_high_ratio_micro': 600000, 'cache_target_full_ratio_micro': 800000, 'cache_min_flush_age': 0, 'cache_min_evict_age': 0, 'erasure_code_profile': '', 'hit_set_params': {'type': 'none'}, 'hit_set_period': 0, 'hit_set_count': 0, 'use_gmt_hitset': True, 'min_read_recency_for_promote': 0, 'min_write_recency_for_promote': 0, 'hit_set_grade_decay_rate': 0, 'hit_set_search_last_n': 0, 'grade_table': [], 'stripe_width': 0, 'expected_num_objects': 0, 'fast_read': False, 'options': {'pg_num_max': 32, 'pg_num_min': 1}, 'application_metadata': {'mgr': {}}, 'read_balance': {'score_acting': 6.059999942779541, 'score_stable': 6.059999942779541, 'optimal_score': 0.33000001311302185, 'raw_score_acting': 2, 'raw_score_stable': 2, 'primary_affinity_weighted': 1, 'average_primary_affinity': 1, 'average_primary_affinity_weighted': 1}}) 2026-02-08 06:04:34.590837 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'pool_id': 2, 'pool_name': 'cephfs_data', 'create_time': '2026-02-08T03:53:38.829779+0000', 'flags': 1, 'flags_names': 'hashpspool', 'type': 1, 'size': 3, 'min_size': 2, 'crush_rule': 0, 'peering_crush_bucket_count': 0, 'peering_crush_bucket_target': 0, 'peering_crush_bucket_barrier': 0, 'peering_crush_bucket_mandatory_member': 2147483647, 'object_hash': 2, 'pg_autoscale_mode': 'on', 'pg_num': 32, 'pg_placement_num': 32, 'pg_placement_num_target': 32, 'pg_num_target': 32, 'pg_num_pending': 32, 'last_pg_merge_meta': {'source_pgid': '0.0', 'ready_epoch': 0, 'last_epoch_started': 0, 
'last_epoch_clean': 0, 'source_version': "0'0", 'target_version': "0'0"}, 'last_change': '32', 'last_force_op_resend': '0', 'last_force_op_resend_prenautilus': '0', 'last_force_op_resend_preluminous': '30', 'auid': 0, 'snap_mode': 'selfmanaged', 'snap_seq': 0, 'snap_epoch': 0, 'pool_snaps': [], 'removed_snaps': '[]', 'quota_max_bytes': 0, 'quota_max_objects': 0, 'tiers': [], 'tier_of': -1, 'read_tier': -1, 'write_tier': -1, 'cache_mode': 'none', 'target_max_bytes': 0, 'target_max_objects': 0, 'cache_target_dirty_ratio_micro': 400000, 'cache_target_dirty_high_ratio_micro': 600000, 'cache_target_full_ratio_micro': 800000, 'cache_min_flush_age': 0, 'cache_min_evict_age': 0, 'erasure_code_profile': '', 'hit_set_params': {'type': 'none'}, 'hit_set_period': 0, 'hit_set_count': 0, 'use_gmt_hitset': True, 'min_read_recency_for_promote': 0, 'min_write_recency_for_promote': 0, 'hit_set_grade_decay_rate': 0, 'hit_set_search_last_n': 0, 'grade_table': [], 'stripe_width': 0, 'expected_num_objects': 0, 'fast_read': False, 'options': {}, 'application_metadata': {'cephfs': {'data': 'cephfs'}}, 'read_balance': {'score_acting': 1.5, 'score_stable': 1.5, 'optimal_score': 1, 'raw_score_acting': 1.5, 'raw_score_stable': 1.5, 'primary_affinity_weighted': 1, 'average_primary_affinity': 1, 'average_primary_affinity_weighted': 1}}) 2026-02-08 06:04:34.590975 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'pool_id': 3, 'pool_name': 'cephfs_metadata', 'create_time': '2026-02-08T03:53:42.961792+0000', 'flags': 1, 'flags_names': 'hashpspool', 'type': 1, 'size': 3, 'min_size': 2, 'crush_rule': 0, 'peering_crush_bucket_count': 0, 'peering_crush_bucket_target': 0, 'peering_crush_bucket_barrier': 0, 'peering_crush_bucket_mandatory_member': 2147483647, 'object_hash': 2, 'pg_autoscale_mode': 'on', 'pg_num': 16, 'pg_placement_num': 16, 'pg_placement_num_target': 16, 'pg_num_target': 16, 'pg_num_pending': 16, 'last_pg_merge_meta': {'source_pgid': '0.0', 'ready_epoch': 
0, 'last_epoch_started': 0, 'last_epoch_clean': 0, 'source_version': "0'0", 'target_version': "0'0"}, 'last_change': '89', 'last_force_op_resend': '0', 'last_force_op_resend_prenautilus': '0', 'last_force_op_resend_preluminous': '30', 'auid': 0, 'snap_mode': 'selfmanaged', 'snap_seq': 0, 'snap_epoch': 0, 'pool_snaps': [], 'removed_snaps': '[]', 'quota_max_bytes': 0, 'quota_max_objects': 0, 'tiers': [], 'tier_of': -1, 'read_tier': -1, 'write_tier': -1, 'cache_mode': 'none', 'target_max_bytes': 0, 'target_max_objects': 0, 'cache_target_dirty_ratio_micro': 400000, 'cache_target_dirty_high_ratio_micro': 600000, 'cache_target_full_ratio_micro': 800000, 'cache_min_flush_age': 0, 'cache_min_evict_age': 0, 'erasure_code_profile': '', 'hit_set_params': {'type': 'none'}, 'hit_set_period': 0, 'hit_set_count': 0, 'use_gmt_hitset': True, 'min_read_recency_for_promote': 0, 'min_write_recency_for_promote': 0, 'hit_set_grade_decay_rate': 0, 'hit_set_search_last_n': 0, 'grade_table': [], 'stripe_width': 0, 'expected_num_objects': 0, 'fast_read': False, 'options': {'pg_autoscale_bias': 4, 'pg_num_min': 16, 'recovery_priority': 5}, 'application_metadata': {'cephfs': {'metadata': 'cephfs'}}, 'read_balance': {'score_acting': 2.25, 'score_stable': 2.25, 'optimal_score': 1, 'raw_score_acting': 2.25, 'raw_score_stable': 2.25, 'primary_affinity_weighted': 1, 'average_primary_affinity': 1, 'average_primary_affinity_weighted': 1}}) 2026-02-08 06:04:34.591023 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'pool_id': 4, 'pool_name': 'default.rgw.buckets.data', 'create_time': '2026-02-08T03:54:42.165370+0000', 'flags': 1, 'flags_names': 'hashpspool', 'type': 1, 'size': 3, 'min_size': 2, 'crush_rule': 0, 'peering_crush_bucket_count': 0, 'peering_crush_bucket_target': 0, 'peering_crush_bucket_barrier': 0, 'peering_crush_bucket_mandatory_member': 2147483647, 'object_hash': 2, 'pg_autoscale_mode': 'on', 'pg_num': 32, 'pg_placement_num': 32, 
'pg_placement_num_target': 32, 'pg_num_target': 32, 'pg_num_pending': 32, 'last_pg_merge_meta': {'source_pgid': '0.0', 'ready_epoch': 0, 'last_epoch_started': 0, 'last_epoch_clean': 0, 'source_version': "0'0", 'target_version': "0'0"}, 'last_change': '75', 'last_force_op_resend': '0', 'last_force_op_resend_prenautilus': '0', 'last_force_op_resend_preluminous': '67', 'auid': 0, 'snap_mode': 'selfmanaged', 'snap_seq': 0, 'snap_epoch': 0, 'pool_snaps': [], 'removed_snaps': '[]', 'quota_max_bytes': 0, 'quota_max_objects': 0, 'tiers': [], 'tier_of': -1, 'read_tier': -1, 'write_tier': -1, 'cache_mode': 'none', 'target_max_bytes': 0, 'target_max_objects': 0, 'cache_target_dirty_ratio_micro': 400000, 'cache_target_dirty_high_ratio_micro': 600000, 'cache_target_full_ratio_micro': 800000, 'cache_min_flush_age': 0, 'cache_min_evict_age': 0, 'erasure_code_profile': '', 'hit_set_params': {'type': 'none'}, 'hit_set_period': 0, 'hit_set_count': 0, 'use_gmt_hitset': True, 'min_read_recency_for_promote': 0, 'min_write_recency_for_promote': 0, 'hit_set_grade_decay_rate': 0, 'hit_set_search_last_n': 0, 'grade_table': [], 'stripe_width': 0, 'expected_num_objects': 0, 'fast_read': False, 'options': {}, 'application_metadata': {'rgw': {}}, 'read_balance': {'score_acting': 1.309999942779541, 'score_stable': 1.309999942779541, 'optimal_score': 1, 'raw_score_acting': 1.309999942779541, 'raw_score_stable': 1.309999942779541, 'primary_affinity_weighted': 1, 'average_primary_affinity': 1, 'average_primary_affinity_weighted': 1}}) 2026-02-08 06:04:34.591063 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'pool_id': 5, 'pool_name': 'default.rgw.buckets.index', 'create_time': '2026-02-08T03:54:47.697267+0000', 'flags': 1, 'flags_names': 'hashpspool', 'type': 1, 'size': 3, 'min_size': 2, 'crush_rule': 0, 'peering_crush_bucket_count': 0, 'peering_crush_bucket_target': 0, 'peering_crush_bucket_barrier': 0, 'peering_crush_bucket_mandatory_member': 2147483647, 
'object_hash': 2, 'pg_autoscale_mode': 'on', 'pg_num': 32, 'pg_placement_num': 32, 'pg_placement_num_target': 32, 'pg_num_target': 32, 'pg_num_pending': 32, 'last_pg_merge_meta': {'source_pgid': '0.0', 'ready_epoch': 0, 'last_epoch_started': 0, 'last_epoch_clean': 0, 'source_version': "0'0", 'target_version': "0'0"}, 'last_change': '75', 'last_force_op_resend': '0', 'last_force_op_resend_prenautilus': '0', 'last_force_op_resend_preluminous': '69', 'auid': 0, 'snap_mode': 'selfmanaged', 'snap_seq': 0, 'snap_epoch': 0, 'pool_snaps': [], 'removed_snaps': '[]', 'quota_max_bytes': 0, 'quota_max_objects': 0, 'tiers': [], 'tier_of': -1, 'read_tier': -1, 'write_tier': -1, 'cache_mode': 'none', 'target_max_bytes': 0, 'target_max_objects': 0, 'cache_target_dirty_ratio_micro': 400000, 'cache_target_dirty_high_ratio_micro': 600000, 'cache_target_full_ratio_micro': 800000, 'cache_min_flush_age': 0, 'cache_min_evict_age': 0, 'erasure_code_profile': '', 'hit_set_params': {'type': 'none'}, 'hit_set_period': 0, 'hit_set_count': 0, 'use_gmt_hitset': True, 'min_read_recency_for_promote': 0, 'min_write_recency_for_promote': 0, 'hit_set_grade_decay_rate': 0, 'hit_set_search_last_n': 0, 'grade_table': [], 'stripe_width': 0, 'expected_num_objects': 0, 'fast_read': False, 'options': {}, 'application_metadata': {'rgw': {}}, 'read_balance': {'score_acting': 1.309999942779541, 'score_stable': 1.309999942779541, 'optimal_score': 1, 'raw_score_acting': 1.309999942779541, 'raw_score_stable': 1.309999942779541, 'primary_affinity_weighted': 1, 'average_primary_affinity': 1, 'average_primary_affinity_weighted': 1}}) 2026-02-08 06:04:34.591096 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'pool_id': 6, 'pool_name': 'default.rgw.control', 'create_time': '2026-02-08T03:54:53.869876+0000', 'flags': 1, 'flags_names': 'hashpspool', 'type': 1, 'size': 3, 'min_size': 2, 'crush_rule': 0, 'peering_crush_bucket_count': 0, 'peering_crush_bucket_target': 0, 
'peering_crush_bucket_barrier': 0, 'peering_crush_bucket_mandatory_member': 2147483647, 'object_hash': 2, 'pg_autoscale_mode': 'on', 'pg_num': 32, 'pg_placement_num': 32, 'pg_placement_num_target': 32, 'pg_num_target': 32, 'pg_num_pending': 32, 'last_pg_merge_meta': {'source_pgid': '0.0', 'ready_epoch': 0, 'last_epoch_started': 0, 'last_epoch_clean': 0, 'source_version': "0'0", 'target_version': "0'0"}, 'last_change': '75', 'last_force_op_resend': '0', 'last_force_op_resend_prenautilus': '0', 'last_force_op_resend_preluminous': '69', 'auid': 0, 'snap_mode': 'selfmanaged', 'snap_seq': 0, 'snap_epoch': 0, 'pool_snaps': [], 'removed_snaps': '[]', 'quota_max_bytes': 0, 'quota_max_objects': 0, 'tiers': [], 'tier_of': -1, 'read_tier': -1, 'write_tier': -1, 'cache_mode': 'none', 'target_max_bytes': 0, 'target_max_objects': 0, 'cache_target_dirty_ratio_micro': 400000, 'cache_target_dirty_high_ratio_micro': 600000, 'cache_target_full_ratio_micro': 800000, 'cache_min_flush_age': 0, 'cache_min_evict_age': 0, 'erasure_code_profile': '', 'hit_set_params': {'type': 'none'}, 'hit_set_period': 0, 'hit_set_count': 0, 'use_gmt_hitset': True, 'min_read_recency_for_promote': 0, 'min_write_recency_for_promote': 0, 'hit_set_grade_decay_rate': 0, 'hit_set_search_last_n': 0, 'grade_table': [], 'stripe_width': 0, 'expected_num_objects': 0, 'fast_read': False, 'options': {}, 'application_metadata': {'rgw': {}}, 'read_balance': {'score_acting': 1.8799999952316284, 'score_stable': 1.8799999952316284, 'optimal_score': 1, 'raw_score_acting': 1.8799999952316284, 'raw_score_stable': 1.8799999952316284, 'primary_affinity_weighted': 1, 'average_primary_affinity': 1, 'average_primary_affinity_weighted': 1}}) 2026-02-08 06:04:35.443148 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'pool_id': 7, 'pool_name': 'default.rgw.log', 'create_time': '2026-02-08T03:54:59.909121+0000', 'flags': 1, 'flags_names': 'hashpspool', 'type': 1, 'size': 3, 'min_size': 2, 'crush_rule': 
0, 'peering_crush_bucket_count': 0, 'peering_crush_bucket_target': 0, 'peering_crush_bucket_barrier': 0, 'peering_crush_bucket_mandatory_member': 2147483647, 'object_hash': 2, 'pg_autoscale_mode': 'on', 'pg_num': 32, 'pg_placement_num': 32, 'pg_placement_num_target': 32, 'pg_num_target': 32, 'pg_num_pending': 32, 'last_pg_merge_meta': {'source_pgid': '0.0', 'ready_epoch': 0, 'last_epoch_started': 0, 'last_epoch_clean': 0, 'source_version': "0'0", 'target_version': "0'0"}, 'last_change': '190', 'last_force_op_resend': '0', 'last_force_op_resend_prenautilus': '0', 'last_force_op_resend_preluminous': '71', 'auid': 0, 'snap_mode': 'selfmanaged', 'snap_seq': 0, 'snap_epoch': 0, 'pool_snaps': [], 'removed_snaps': '[]', 'quota_max_bytes': 0, 'quota_max_objects': 0, 'tiers': [], 'tier_of': -1, 'read_tier': -1, 'write_tier': -1, 'cache_mode': 'none', 'target_max_bytes': 0, 'target_max_objects': 0, 'cache_target_dirty_ratio_micro': 400000, 'cache_target_dirty_high_ratio_micro': 600000, 'cache_target_full_ratio_micro': 800000, 'cache_min_flush_age': 0, 'cache_min_evict_age': 0, 'erasure_code_profile': '', 'hit_set_params': {'type': 'none'}, 'hit_set_period': 0, 'hit_set_count': 0, 'use_gmt_hitset': True, 'min_read_recency_for_promote': 0, 'min_write_recency_for_promote': 0, 'hit_set_grade_decay_rate': 0, 'hit_set_search_last_n': 0, 'grade_table': [], 'stripe_width': 0, 'expected_num_objects': 0, 'fast_read': False, 'options': {}, 'application_metadata': {'rgw': {}}, 'read_balance': {'score_acting': 1.309999942779541, 'score_stable': 1.309999942779541, 'optimal_score': 1, 'raw_score_acting': 1.309999942779541, 'raw_score_stable': 1.309999942779541, 'primary_affinity_weighted': 1, 'average_primary_affinity': 1, 'average_primary_affinity_weighted': 1}}) 2026-02-08 06:04:35.443248 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'pool_id': 8, 'pool_name': 'default.rgw.meta', 'create_time': '2026-02-08T03:55:05.313024+0000', 'flags': 1, 
'flags_names': 'hashpspool', 'type': 1, 'size': 3, 'min_size': 2, 'crush_rule': 0, 'peering_crush_bucket_count': 0, 'peering_crush_bucket_target': 0, 'peering_crush_bucket_barrier': 0, 'peering_crush_bucket_mandatory_member': 2147483647, 'object_hash': 2, 'pg_autoscale_mode': 'on', 'pg_num': 32, 'pg_placement_num': 32, 'pg_placement_num_target': 32, 'pg_num_target': 32, 'pg_num_pending': 32, 'last_pg_merge_meta': {'source_pgid': '0.0', 'ready_epoch': 0, 'last_epoch_started': 0, 'last_epoch_clean': 0, 'source_version': "0'0", 'target_version': "0'0"}, 'last_change': '75', 'last_force_op_resend': '0', 'last_force_op_resend_prenautilus': '0', 'last_force_op_resend_preluminous': '71', 'auid': 0, 'snap_mode': 'selfmanaged', 'snap_seq': 0, 'snap_epoch': 0, 'pool_snaps': [], 'removed_snaps': '[]', 'quota_max_bytes': 0, 'quota_max_objects': 0, 'tiers': [], 'tier_of': -1, 'read_tier': -1, 'write_tier': -1, 'cache_mode': 'none', 'target_max_bytes': 0, 'target_max_objects': 0, 'cache_target_dirty_ratio_micro': 400000, 'cache_target_dirty_high_ratio_micro': 600000, 'cache_target_full_ratio_micro': 800000, 'cache_min_flush_age': 0, 'cache_min_evict_age': 0, 'erasure_code_profile': '', 'hit_set_params': {'type': 'none'}, 'hit_set_period': 0, 'hit_set_count': 0, 'use_gmt_hitset': True, 'min_read_recency_for_promote': 0, 'min_write_recency_for_promote': 0, 'hit_set_grade_decay_rate': 0, 'hit_set_search_last_n': 0, 'grade_table': [], 'stripe_width': 0, 'expected_num_objects': 0, 'fast_read': False, 'options': {}, 'application_metadata': {'rgw': {}}, 'read_balance': {'score_acting': 1.8799999952316284, 'score_stable': 1.8799999952316284, 'optimal_score': 1, 'raw_score_acting': 1.8799999952316284, 'raw_score_stable': 1.8799999952316284, 'primary_affinity_weighted': 1, 'average_primary_affinity': 1, 'average_primary_affinity_weighted': 1}}) 2026-02-08 06:04:35.443346 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'pool_id': 9, 'pool_name': 
'.rgw.root', 'create_time': '2026-02-08T03:55:17.265817+0000', 'flags': 1, 'flags_names': 'hashpspool', 'type': 1, 'size': 2, 'min_size': 1, 'crush_rule': 0, 'peering_crush_bucket_count': 0, 'peering_crush_bucket_target': 0, 'peering_crush_bucket_barrier': 0, 'peering_crush_bucket_mandatory_member': 2147483647, 'object_hash': 2, 'pg_autoscale_mode': 'on', 'pg_num': 32, 'pg_placement_num': 32, 'pg_placement_num_target': 32, 'pg_num_target': 32, 'pg_num_pending': 32, 'last_pg_merge_meta': {'source_pgid': '0.0', 'ready_epoch': 0, 'last_epoch_started': 0, 'last_epoch_clean': 0, 'source_version': "0'0", 'target_version': "0'0"}, 'last_change': '75', 'last_force_op_resend': '0', 'last_force_op_resend_prenautilus': '0', 'last_force_op_resend_preluminous': '73', 'auid': 0, 'snap_mode': 'selfmanaged', 'snap_seq': 0, 'snap_epoch': 0, 'pool_snaps': [], 'removed_snaps': '[]', 'quota_max_bytes': 0, 'quota_max_objects': 0, 'tiers': [], 'tier_of': -1, 'read_tier': -1, 'write_tier': -1, 'cache_mode': 'none', 'target_max_bytes': 0, 'target_max_objects': 0, 'cache_target_dirty_ratio_micro': 400000, 'cache_target_dirty_high_ratio_micro': 600000, 'cache_target_full_ratio_micro': 800000, 'cache_min_flush_age': 0, 'cache_min_evict_age': 0, 'erasure_code_profile': '', 'hit_set_params': {'type': 'none'}, 'hit_set_period': 0, 'hit_set_count': 0, 'use_gmt_hitset': True, 'min_read_recency_for_promote': 0, 'min_write_recency_for_promote': 0, 'hit_set_grade_decay_rate': 0, 'hit_set_search_last_n': 0, 'grade_table': [], 'stripe_width': 0, 'expected_num_objects': 0, 'fast_read': False, 'options': {}, 'application_metadata': {'rgw': {}}, 'read_balance': {'score_acting': 1.690000057220459, 'score_stable': 1.690000057220459, 'optimal_score': 1, 'raw_score_acting': 1.690000057220459, 'raw_score_stable': 1.690000057220459, 'primary_affinity_weighted': 1, 'average_primary_affinity': 1, 'average_primary_affinity_weighted': 1}}) 2026-02-08 06:04:35.443363 | orchestrator | ok: [testbed-node-3 -> 
testbed-node-0(192.168.16.10)] => (item={'pool_id': 10, 'pool_name': 'backups', 'create_time': '2026-02-08T03:56:04.882654+0000', 'flags': 8193, 'flags_names': 'hashpspool,selfmanaged_snaps', 'type': 1, 'size': 3, 'min_size': 2, 'crush_rule': 0, 'peering_crush_bucket_count': 0, 'peering_crush_bucket_target': 0, 'peering_crush_bucket_barrier': 0, 'peering_crush_bucket_mandatory_member': 2147483647, 'object_hash': 2, 'pg_autoscale_mode': 'off', 'pg_num': 32, 'pg_placement_num': 32, 'pg_placement_num_target': 32, 'pg_num_target': 32, 'pg_num_pending': 32, 'last_pg_merge_meta': {'source_pgid': '0.0', 'ready_epoch': 0, 'last_epoch_started': 0, 'last_epoch_clean': 0, 'source_version': "0'0", 'target_version': "0'0"}, 'last_change': '103', 'last_force_op_resend': '0', 'last_force_op_resend_prenautilus': '0', 'last_force_op_resend_preluminous': '0', 'auid': 0, 'snap_mode': 'selfmanaged', 'snap_seq': 3, 'snap_epoch': 103, 'pool_snaps': [], 'removed_snaps': '[]', 'quota_max_bytes': 0, 'quota_max_objects': 0, 'tiers': [], 'tier_of': -1, 'read_tier': -1, 'write_tier': -1, 'cache_mode': 'none', 'target_max_bytes': 0, 'target_max_objects': 0, 'cache_target_dirty_ratio_micro': 400000, 'cache_target_dirty_high_ratio_micro': 600000, 'cache_target_full_ratio_micro': 800000, 'cache_min_flush_age': 0, 'cache_min_evict_age': 0, 'erasure_code_profile': '', 'hit_set_params': {'type': 'none'}, 'hit_set_period': 0, 'hit_set_count': 0, 'use_gmt_hitset': True, 'min_read_recency_for_promote': 0, 'min_write_recency_for_promote': 0, 'hit_set_grade_decay_rate': 0, 'hit_set_search_last_n': 0, 'grade_table': [], 'stripe_width': 0, 'expected_num_objects': 0, 'fast_read': False, 'options': {}, 'application_metadata': {'rbd': {}}, 'read_balance': {'score_acting': 1.690000057220459, 'score_stable': 1.690000057220459, 'optimal_score': 1, 'raw_score_acting': 1.690000057220459, 'raw_score_stable': 1.690000057220459, 'primary_affinity_weighted': 1, 'average_primary_affinity': 1, 
'average_primary_affinity_weighted': 1}}) 2026-02-08 06:04:35.443393 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'pool_id': 11, 'pool_name': 'volumes', 'create_time': '2026-02-08T03:56:13.598964+0000', 'flags': 8193, 'flags_names': 'hashpspool,selfmanaged_snaps', 'type': 1, 'size': 3, 'min_size': 2, 'crush_rule': 0, 'peering_crush_bucket_count': 0, 'peering_crush_bucket_target': 0, 'peering_crush_bucket_barrier': 0, 'peering_crush_bucket_mandatory_member': 2147483647, 'object_hash': 2, 'pg_autoscale_mode': 'off', 'pg_num': 32, 'pg_placement_num': 32, 'pg_placement_num_target': 32, 'pg_num_target': 32, 'pg_num_pending': 32, 'last_pg_merge_meta': {'source_pgid': '0.0', 'ready_epoch': 0, 'last_epoch_started': 0, 'last_epoch_clean': 0, 'source_version': "0'0", 'target_version': "0'0"}, 'last_change': '110', 'last_force_op_resend': '0', 'last_force_op_resend_prenautilus': '0', 'last_force_op_resend_preluminous': '0', 'auid': 0, 'snap_mode': 'selfmanaged', 'snap_seq': 3, 'snap_epoch': 110, 'pool_snaps': [], 'removed_snaps': '[]', 'quota_max_bytes': 0, 'quota_max_objects': 0, 'tiers': [], 'tier_of': -1, 'read_tier': -1, 'write_tier': -1, 'cache_mode': 'none', 'target_max_bytes': 0, 'target_max_objects': 0, 'cache_target_dirty_ratio_micro': 400000, 'cache_target_dirty_high_ratio_micro': 600000, 'cache_target_full_ratio_micro': 800000, 'cache_min_flush_age': 0, 'cache_min_evict_age': 0, 'erasure_code_profile': '', 'hit_set_params': {'type': 'none'}, 'hit_set_period': 0, 'hit_set_count': 0, 'use_gmt_hitset': True, 'min_read_recency_for_promote': 0, 'min_write_recency_for_promote': 0, 'hit_set_grade_decay_rate': 0, 'hit_set_search_last_n': 0, 'grade_table': [], 'stripe_width': 0, 'expected_num_objects': 0, 'fast_read': False, 'options': {}, 'application_metadata': {'rbd': {}}, 'read_balance': {'score_acting': 1.309999942779541, 'score_stable': 1.309999942779541, 'optimal_score': 1, 'raw_score_acting': 1.309999942779541, 'raw_score_stable': 
1.309999942779541, 'primary_affinity_weighted': 1, 'average_primary_affinity': 1, 'average_primary_affinity_weighted': 1}}) 2026-02-08 06:05:57.108438 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'pool_id': 12, 'pool_name': 'images', 'create_time': '2026-02-08T03:56:22.808835+0000', 'flags': 8193, 'flags_names': 'hashpspool,selfmanaged_snaps', 'type': 1, 'size': 3, 'min_size': 2, 'crush_rule': 0, 'peering_crush_bucket_count': 0, 'peering_crush_bucket_target': 0, 'peering_crush_bucket_barrier': 0, 'peering_crush_bucket_mandatory_member': 2147483647, 'object_hash': 2, 'pg_autoscale_mode': 'off', 'pg_num': 32, 'pg_placement_num': 32, 'pg_placement_num_target': 32, 'pg_num_target': 32, 'pg_num_pending': 32, 'last_pg_merge_meta': {'source_pgid': '0.0', 'ready_epoch': 0, 'last_epoch_started': 0, 'last_epoch_clean': 0, 'source_version': "0'0", 'target_version': "0'0"}, 'last_change': '202', 'last_force_op_resend': '0', 'last_force_op_resend_prenautilus': '0', 'last_force_op_resend_preluminous': '0', 'auid': 0, 'snap_mode': 'selfmanaged', 'snap_seq': 6, 'snap_epoch': 202, 'pool_snaps': [], 'removed_snaps': '[]', 'quota_max_bytes': 0, 'quota_max_objects': 0, 'tiers': [], 'tier_of': -1, 'read_tier': -1, 'write_tier': -1, 'cache_mode': 'none', 'target_max_bytes': 0, 'target_max_objects': 0, 'cache_target_dirty_ratio_micro': 400000, 'cache_target_dirty_high_ratio_micro': 600000, 'cache_target_full_ratio_micro': 800000, 'cache_min_flush_age': 0, 'cache_min_evict_age': 0, 'erasure_code_profile': '', 'hit_set_params': {'type': 'none'}, 'hit_set_period': 0, 'hit_set_count': 0, 'use_gmt_hitset': True, 'min_read_recency_for_promote': 0, 'min_write_recency_for_promote': 0, 'hit_set_grade_decay_rate': 0, 'hit_set_search_last_n': 0, 'grade_table': [], 'stripe_width': 0, 'expected_num_objects': 0, 'fast_read': False, 'options': {}, 'application_metadata': {'rbd': {}}, 'read_balance': {'score_acting': 1.309999942779541, 'score_stable': 
1.309999942779541, 'optimal_score': 1, 'raw_score_acting': 1.309999942779541, 'raw_score_stable': 1.309999942779541, 'primary_affinity_weighted': 1, 'average_primary_affinity': 1, 'average_primary_affinity_weighted': 1}}) 2026-02-08 06:05:57.108649 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'pool_id': 13, 'pool_name': 'metrics', 'create_time': '2026-02-08T03:56:31.207697+0000', 'flags': 8193, 'flags_names': 'hashpspool,selfmanaged_snaps', 'type': 1, 'size': 3, 'min_size': 2, 'crush_rule': 0, 'peering_crush_bucket_count': 0, 'peering_crush_bucket_target': 0, 'peering_crush_bucket_barrier': 0, 'peering_crush_bucket_mandatory_member': 2147483647, 'object_hash': 2, 'pg_autoscale_mode': 'off', 'pg_num': 32, 'pg_placement_num': 32, 'pg_placement_num_target': 32, 'pg_num_target': 32, 'pg_num_pending': 32, 'last_pg_merge_meta': {'source_pgid': '0.0', 'ready_epoch': 0, 'last_epoch_started': 0, 'last_epoch_clean': 0, 'source_version': "0'0", 'target_version': "0'0"}, 'last_change': '125', 'last_force_op_resend': '0', 'last_force_op_resend_prenautilus': '0', 'last_force_op_resend_preluminous': '0', 'auid': 0, 'snap_mode': 'selfmanaged', 'snap_seq': 3, 'snap_epoch': 125, 'pool_snaps': [], 'removed_snaps': '[]', 'quota_max_bytes': 0, 'quota_max_objects': 0, 'tiers': [], 'tier_of': -1, 'read_tier': -1, 'write_tier': -1, 'cache_mode': 'none', 'target_max_bytes': 0, 'target_max_objects': 0, 'cache_target_dirty_ratio_micro': 400000, 'cache_target_dirty_high_ratio_micro': 600000, 'cache_target_full_ratio_micro': 800000, 'cache_min_flush_age': 0, 'cache_min_evict_age': 0, 'erasure_code_profile': '', 'hit_set_params': {'type': 'none'}, 'hit_set_period': 0, 'hit_set_count': 0, 'use_gmt_hitset': True, 'min_read_recency_for_promote': 0, 'min_write_recency_for_promote': 0, 'hit_set_grade_decay_rate': 0, 'hit_set_search_last_n': 0, 'grade_table': [], 'stripe_width': 0, 'expected_num_objects': 0, 'fast_read': False, 'options': {}, 
'application_metadata': {'rbd': {}}, 'read_balance': {'score_acting': 1.8799999952316284, 'score_stable': 1.8799999952316284, 'optimal_score': 1, 'raw_score_acting': 1.8799999952316284, 'raw_score_stable': 1.8799999952316284, 'primary_affinity_weighted': 1, 'average_primary_affinity': 1, 'average_primary_affinity_weighted': 1}}) 2026-02-08 06:05:57.108687 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'pool_id': 14, 'pool_name': 'vms', 'create_time': '2026-02-08T03:56:39.428492+0000', 'flags': 8193, 'flags_names': 'hashpspool,selfmanaged_snaps', 'type': 1, 'size': 3, 'min_size': 2, 'crush_rule': 0, 'peering_crush_bucket_count': 0, 'peering_crush_bucket_target': 0, 'peering_crush_bucket_barrier': 0, 'peering_crush_bucket_mandatory_member': 2147483647, 'object_hash': 2, 'pg_autoscale_mode': 'off', 'pg_num': 32, 'pg_placement_num': 32, 'pg_placement_num_target': 32, 'pg_num_target': 32, 'pg_num_pending': 32, 'last_pg_merge_meta': {'source_pgid': '0.0', 'ready_epoch': 0, 'last_epoch_started': 0, 'last_epoch_clean': 0, 'source_version': "0'0", 'target_version': "0'0"}, 'last_change': '133', 'last_force_op_resend': '0', 'last_force_op_resend_prenautilus': '0', 'last_force_op_resend_preluminous': '0', 'auid': 0, 'snap_mode': 'selfmanaged', 'snap_seq': 3, 'snap_epoch': 133, 'pool_snaps': [], 'removed_snaps': '[]', 'quota_max_bytes': 0, 'quota_max_objects': 0, 'tiers': [], 'tier_of': -1, 'read_tier': -1, 'write_tier': -1, 'cache_mode': 'none', 'target_max_bytes': 0, 'target_max_objects': 0, 'cache_target_dirty_ratio_micro': 400000, 'cache_target_dirty_high_ratio_micro': 600000, 'cache_target_full_ratio_micro': 800000, 'cache_min_flush_age': 0, 'cache_min_evict_age': 0, 'erasure_code_profile': '', 'hit_set_params': {'type': 'none'}, 'hit_set_period': 0, 'hit_set_count': 0, 'use_gmt_hitset': True, 'min_read_recency_for_promote': 0, 'min_write_recency_for_promote': 0, 'hit_set_grade_decay_rate': 0, 'hit_set_search_last_n': 0, 'grade_table': 
[], 'stripe_width': 0, 'expected_num_objects': 0, 'fast_read': False, 'options': {}, 'application_metadata': {'rbd': {}}, 'read_balance': {'score_acting': 1.5, 'score_stable': 1.5, 'optimal_score': 1, 'raw_score_acting': 1.5, 'raw_score_stable': 1.5, 'primary_affinity_weighted': 1, 'average_primary_affinity': 1, 'average_primary_affinity_weighted': 1}})
2026-02-08 06:05:57.108722 | orchestrator |
2026-02-08 06:05:57.108800 | orchestrator | TASK [Disable balancer] ********************************************************
2026-02-08 06:05:57.108823 | orchestrator | Sunday 08 February 2026 06:04:35 +0000 (0:00:01.911) 0:13:33.490 *******
2026-02-08 06:05:57.108844 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-02-08 06:05:57.108862 | orchestrator |
2026-02-08 06:05:57.108880 | orchestrator | TASK [Disable pg autoscale on pools] *******************************************
2026-02-08 06:05:57.108899 | orchestrator | Sunday 08 February 2026 06:04:37 +0000 (0:00:02.013) 0:13:35.503 *******
2026-02-08 06:05:57.108918 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': '.mgr', 'mode': 'on'})
2026-02-08 06:05:57.108939 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': 'cephfs_data', 'mode': 'on'})
2026-02-08 06:05:57.108958 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': 'cephfs_metadata', 'mode': 'on'})
2026-02-08 06:05:57.108976 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': 'default.rgw.buckets.data', 'mode': 'on'})
2026-02-08 06:05:57.108996 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': 'default.rgw.buckets.index', 'mode': 'on'})
2026-02-08 06:05:57.109015 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': 'default.rgw.control', 'mode': 'on'})
2026-02-08 06:05:57.109034 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': 'default.rgw.log', 'mode': 'on'})
2026-02-08 06:05:57.109054 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': 'default.rgw.meta', 'mode': 'on'})
2026-02-08 06:05:57.109084 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': '.rgw.root', 'mode': 'on'})
2026-02-08 06:05:57.109104 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'backups', 'mode': 'off'}) 
2026-02-08 06:05:57.109123 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'volumes', 'mode': 'off'}) 
2026-02-08 06:05:57.109142 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'images', 'mode': 'off'}) 
2026-02-08 06:05:57.109160 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'metrics', 'mode': 'off'}) 
2026-02-08 06:05:57.109179 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'vms', 'mode': 'off'}) 
2026-02-08 06:05:57.109198 | orchestrator |
2026-02-08 06:05:57.109218 | orchestrator | TASK [Set osd flags] ***********************************************************
2026-02-08 06:05:57.109237 | orchestrator | Sunday 08 February 2026 06:05:52 +0000 (0:01:14.676) 0:14:50.180 *******
2026-02-08 06:05:57.109269 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=noout)
2026-02-08 06:06:03.864308 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=nodeep-scrub)
2026-02-08 06:06:03.864485 | orchestrator |
2026-02-08 06:06:03.864506 | orchestrator | PLAY [Upgrade ceph osds cluster] ***********************************************
2026-02-08 06:06:03.864519 | orchestrator |
2026-02-08 06:06:03.864530 | orchestrator | TASK [ceph-facts : Include facts.yml] ******************************************
2026-02-08 06:06:03.864542 | orchestrator | Sunday 08 February 2026 06:05:57 +0000 (0:00:04.963) 0:14:55.144 *******
2026-02-08 06:06:03.864553 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3
2026-02-08 06:06:03.864564 | orchestrator |
2026-02-08 06:06:03.864575 | orchestrator | TASK [ceph-facts : Check if it is atomic host] *********************************
2026-02-08 06:06:03.864585 | orchestrator | Sunday 08 February 2026 06:05:57 +0000 (0:00:00.249) 0:14:55.394 *******
2026-02-08 06:06:03.864620 | orchestrator | ok: [testbed-node-3]
2026-02-08 06:06:03.864633 | orchestrator |
2026-02-08 06:06:03.864644 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] *****************************************
2026-02-08 06:06:03.864656 | orchestrator | Sunday 08 February 2026 06:05:57 +0000 (0:00:00.449) 0:14:55.843 *******
2026-02-08 06:06:03.864666 | orchestrator | ok: [testbed-node-3]
2026-02-08 06:06:03.864677 | orchestrator |
2026-02-08 06:06:03.864688 | orchestrator | TASK [ceph-facts : Check if podman binary is present] **************************
2026-02-08 06:06:03.864699 | orchestrator | Sunday 08 February 2026 06:05:57 +0000 (0:00:00.154) 0:14:55.998 *******
2026-02-08 06:06:03.864710 | orchestrator | ok: [testbed-node-3]
2026-02-08 06:06:03.864721 | orchestrator |
2026-02-08 06:06:03.864733 | orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
2026-02-08 06:06:03.864775 | orchestrator | Sunday 08 February 2026 06:05:58 +0000 (0:00:00.433) 0:14:56.431 *******
2026-02-08 06:06:03.864787 | orchestrator | ok: [testbed-node-3]
2026-02-08 06:06:03.864798 | orchestrator |
2026-02-08 06:06:03.864809 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ******************************************
2026-02-08 06:06:03.864820 | orchestrator | Sunday 08 February 2026 06:05:58 +0000 (0:00:00.174) 0:14:56.605 *******
2026-02-08 06:06:03.864831 | orchestrator | ok: [testbed-node-3]
2026-02-08 06:06:03.864842 | orchestrator |
2026-02-08 06:06:03.864853 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] *********************
2026-02-08 06:06:03.864864 | orchestrator | Sunday 08 February 2026 06:05:58 +0000 (0:00:00.167) 0:14:56.773 *******
2026-02-08 06:06:03.864874 | orchestrator | ok: [testbed-node-3]
2026-02-08 06:06:03.864889 | orchestrator |
2026-02-08 06:06:03.864910 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] ***
2026-02-08 06:06:03.864923 | orchestrator | Sunday 08 February 2026 06:05:59 +0000 (0:00:00.491) 0:14:57.264 *******
2026-02-08 06:06:03.864934 | orchestrator | skipping: [testbed-node-3]
2026-02-08 06:06:03.864946 | orchestrator |
2026-02-08 06:06:03.864958 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ******************
2026-02-08 06:06:03.864969 | orchestrator | Sunday 08 February 2026 06:05:59 +0000 (0:00:00.160) 0:14:57.425 *******
2026-02-08 06:06:03.864980 | orchestrator | ok: [testbed-node-3]
2026-02-08 06:06:03.864991 | orchestrator |
2026-02-08 06:06:03.865010 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************
2026-02-08 06:06:03.865022 | orchestrator | Sunday 08 February 2026 06:05:59 +0000 (0:00:00.133) 0:14:57.558 *******
2026-02-08 06:06:03.865084 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-02-08 06:06:03.865095 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-08 06:06:03.865106 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-08 06:06:03.865117 | orchestrator |
2026-02-08 06:06:03.865128 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ********************************
2026-02-08 06:06:03.865138 | orchestrator | Sunday 08 February 2026 06:06:00 +0000 (0:00:00.700) 0:14:58.259 *******
2026-02-08 06:06:03.865154 | orchestrator | ok: [testbed-node-3]
2026-02-08 06:06:03.865170 | orchestrator | 
2026-02-08 06:06:03.865181 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-02-08 06:06:03.865192 | orchestrator | Sunday 08 February 2026 06:06:00 +0000 (0:00:00.262) 0:14:58.521 ******* 2026-02-08 06:06:03.865202 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-08 06:06:03.865213 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-08 06:06:03.865224 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-08 06:06:03.865264 | orchestrator | 2026-02-08 06:06:03.865277 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-02-08 06:06:03.865288 | orchestrator | Sunday 08 February 2026 06:06:02 +0000 (0:00:01.910) 0:15:00.432 ******* 2026-02-08 06:06:03.865299 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2026-02-08 06:06:03.865320 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2026-02-08 06:06:03.865346 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2026-02-08 06:06:03.865358 | orchestrator | skipping: [testbed-node-3] 2026-02-08 06:06:03.865369 | orchestrator | 2026-02-08 06:06:03.865380 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-02-08 06:06:03.865391 | orchestrator | Sunday 08 February 2026 06:06:02 +0000 (0:00:00.453) 0:15:00.886 ******* 2026-02-08 06:06:03.865403 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-02-08 06:06:03.865417 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 
'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-02-08 06:06:03.865449 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-02-08 06:06:03.865462 | orchestrator | skipping: [testbed-node-3] 2026-02-08 06:06:03.865473 | orchestrator | 2026-02-08 06:06:03.865484 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-02-08 06:06:03.865495 | orchestrator | Sunday 08 February 2026 06:06:03 +0000 (0:00:00.618) 0:15:01.504 ******* 2026-02-08 06:06:03.865508 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-08 06:06:03.865522 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-08 06:06:03.865534 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not 
containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-08 06:06:03.865545 | orchestrator | skipping: [testbed-node-3] 2026-02-08 06:06:03.865556 | orchestrator | 2026-02-08 06:06:03.865568 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-02-08 06:06:03.865578 | orchestrator | Sunday 08 February 2026 06:06:03 +0000 (0:00:00.161) 0:15:01.666 ******* 2026-02-08 06:06:03.865591 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': 'd0204e5b0336', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-02-08 06:06:01.052523', 'end': '2026-02-08 06:06:01.109733', 'delta': '0:00:00.057210', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['d0204e5b0336'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2026-02-08 06:06:03.865619 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': 'c9ff2bec9773', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-02-08 06:06:01.641579', 'end': '2026-02-08 06:06:01.688666', 'delta': '0:00:00.047087', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['c9ff2bec9773'], 'stderr_lines': [], 'failed': False, 'failed_when_result': 
False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2026-02-08 06:06:03.865641 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '26deff989c40', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-02-08 06:06:02.194385', 'end': '2026-02-08 06:06:02.246419', 'delta': '0:00:00.052034', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['26deff989c40'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2026-02-08 06:06:08.207465 | orchestrator | 2026-02-08 06:06:08.207575 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-02-08 06:06:08.207593 | orchestrator | Sunday 08 February 2026 06:06:03 +0000 (0:00:00.236) 0:15:01.903 ******* 2026-02-08 06:06:08.207605 | orchestrator | ok: [testbed-node-3] 2026-02-08 06:06:08.207617 | orchestrator | 2026-02-08 06:06:08.207629 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-02-08 06:06:08.207640 | orchestrator | Sunday 08 February 2026 06:06:04 +0000 (0:00:00.294) 0:15:02.197 ******* 2026-02-08 06:06:08.207651 | orchestrator | skipping: [testbed-node-3] 2026-02-08 06:06:08.207664 | orchestrator | 2026-02-08 06:06:08.207675 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2026-02-08 06:06:08.207686 | orchestrator | Sunday 08 February 2026 06:06:04 +0000 (0:00:00.263) 0:15:02.461 ******* 2026-02-08 06:06:08.207697 | orchestrator | ok: [testbed-node-3] 2026-02-08 06:06:08.207708 | orchestrator | 2026-02-08 06:06:08.207719 | orchestrator | 
TASK [ceph-facts : Get current fsid] ******************************************* 2026-02-08 06:06:08.207729 | orchestrator | Sunday 08 February 2026 06:06:04 +0000 (0:00:00.159) 0:15:02.620 ******* 2026-02-08 06:06:08.207802 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-02-08 06:06:08.207828 | orchestrator | 2026-02-08 06:06:08.207841 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-02-08 06:06:08.207852 | orchestrator | Sunday 08 February 2026 06:06:06 +0000 (0:00:01.697) 0:15:04.317 ******* 2026-02-08 06:06:08.207862 | orchestrator | ok: [testbed-node-3] 2026-02-08 06:06:08.207873 | orchestrator | 2026-02-08 06:06:08.207884 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-02-08 06:06:08.207896 | orchestrator | Sunday 08 February 2026 06:06:06 +0000 (0:00:00.163) 0:15:04.480 ******* 2026-02-08 06:06:08.207907 | orchestrator | skipping: [testbed-node-3] 2026-02-08 06:06:08.207918 | orchestrator | 2026-02-08 06:06:08.207929 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-02-08 06:06:08.207939 | orchestrator | Sunday 08 February 2026 06:06:06 +0000 (0:00:00.135) 0:15:04.616 ******* 2026-02-08 06:06:08.207950 | orchestrator | skipping: [testbed-node-3] 2026-02-08 06:06:08.207961 | orchestrator | 2026-02-08 06:06:08.207972 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-02-08 06:06:08.207985 | orchestrator | Sunday 08 February 2026 06:06:06 +0000 (0:00:00.255) 0:15:04.872 ******* 2026-02-08 06:06:08.208024 | orchestrator | skipping: [testbed-node-3] 2026-02-08 06:06:08.208040 | orchestrator | 2026-02-08 06:06:08.208052 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-02-08 06:06:08.208066 | orchestrator | Sunday 08 February 2026 06:06:06 +0000 (0:00:00.132) 0:15:05.004 
******* 2026-02-08 06:06:08.208078 | orchestrator | skipping: [testbed-node-3] 2026-02-08 06:06:08.208092 | orchestrator | 2026-02-08 06:06:08.208103 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-02-08 06:06:08.208114 | orchestrator | Sunday 08 February 2026 06:06:07 +0000 (0:00:00.133) 0:15:05.138 ******* 2026-02-08 06:06:08.208125 | orchestrator | ok: [testbed-node-3] 2026-02-08 06:06:08.208136 | orchestrator | 2026-02-08 06:06:08.208146 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-02-08 06:06:08.208157 | orchestrator | Sunday 08 February 2026 06:06:07 +0000 (0:00:00.180) 0:15:05.318 ******* 2026-02-08 06:06:08.208168 | orchestrator | skipping: [testbed-node-3] 2026-02-08 06:06:08.208183 | orchestrator | 2026-02-08 06:06:08.208200 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-02-08 06:06:08.208211 | orchestrator | Sunday 08 February 2026 06:06:07 +0000 (0:00:00.144) 0:15:05.463 ******* 2026-02-08 06:06:08.208222 | orchestrator | ok: [testbed-node-3] 2026-02-08 06:06:08.208233 | orchestrator | 2026-02-08 06:06:08.208244 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-02-08 06:06:08.208255 | orchestrator | Sunday 08 February 2026 06:06:07 +0000 (0:00:00.225) 0:15:05.689 ******* 2026-02-08 06:06:08.208266 | orchestrator | skipping: [testbed-node-3] 2026-02-08 06:06:08.208277 | orchestrator | 2026-02-08 06:06:08.208288 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-02-08 06:06:08.208300 | orchestrator | Sunday 08 February 2026 06:06:07 +0000 (0:00:00.133) 0:15:05.823 ******* 2026-02-08 06:06:08.208311 | orchestrator | ok: [testbed-node-3] 2026-02-08 06:06:08.208322 | orchestrator | 2026-02-08 06:06:08.208333 | orchestrator | TASK [ceph-facts : Collect existed devices] 
************************************ 2026-02-08 06:06:08.208344 | orchestrator | Sunday 08 February 2026 06:06:07 +0000 (0:00:00.179) 0:15:06.002 ******* 2026-02-08 06:06:08.208372 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-08 06:06:08.208409 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--edf9913e--48af--595a--836b--515c584cb757-osd--block--edf9913e--48af--595a--836b--515c584cb757', 'dm-uuid-LVM-L5RzS25dNAwEfaxQtT2dCZejWp1FAHTXCd6gt7yIXasPp3uR45a3LLQBdDhLIQVH'], 'uuids': ['f792a4bc-313c-4001-af18-36958a398c99'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'f64e84f9', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['Cd6gt7-yIXa-sPp3-uR45-a3LL-QBdD-hLIQVH']}})  2026-02-08 06:06:08.208424 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1b3b2ead-9b22-4b4d-a30d-f81b3b57c055', 'scsi-SQEMU_QEMU_HARDDISK_1b3b2ead-9b22-4b4d-a30d-f81b3b57c055'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '1b3b2ead', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: 
Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-02-08 06:06:08.208437 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-a30P2b-igV6-wMzf-UnqL-xNhF-mfyy-hPRy0q', 'scsi-0QEMU_QEMU_HARDDISK_f936cccd-0c4c-4cd7-b507-1bacbfb024c1', 'scsi-SQEMU_QEMU_HARDDISK_f936cccd-0c4c-4cd7-b507-1bacbfb024c1'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'f936cccd', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--658e9559--2696--538a--a0a4--811fe95d0be4-osd--block--658e9559--2696--538a--a0a4--811fe95d0be4']}})  2026-02-08 06:06:08.208456 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-08 06:06:08.208469 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-08 06:06:08.208481 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': 
['2026-02-08-02-32-43-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-02-08 06:06:08.208499 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-08 06:06:08.208511 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-cwGJpv-XAPY-9den-pvHR-scKi-cxVH-CDOUqc', 'dm-uuid-CRYPT-LUKS2-b54d8ff93c624789820c72a73c143a76-cwGJpv-XAPY-9den-pvHR-scKi-cxVH-CDOUqc'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-02-08 06:06:08.208532 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-08 06:06:08.572903 | orchestrator | 
skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--658e9559--2696--538a--a0a4--811fe95d0be4-osd--block--658e9559--2696--538a--a0a4--811fe95d0be4', 'dm-uuid-LVM-0Xks5ejJzKHIGgn8kHKR683njlf26z09cwGJpvXAPY9denpvHRscKicxVHCDOUqc'], 'uuids': ['b54d8ff9-3c62-4789-820c-72a73c143a76'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'f936cccd', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['cwGJpv-XAPY-9den-pvHR-scKi-cxVH-CDOUqc']}})  2026-02-08 06:06:08.573044 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-loHc10-mddm-V6c9-PmX5-I7TN-7pE7-Bu9UYi', 'scsi-0QEMU_QEMU_HARDDISK_f64e84f9-05a0-4abf-b38a-86e604a2541e', 'scsi-SQEMU_QEMU_HARDDISK_f64e84f9-05a0-4abf-b38a-86e604a2541e'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'f64e84f9', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--edf9913e--48af--595a--836b--515c584cb757-osd--block--edf9913e--48af--595a--836b--515c584cb757']}})  2026-02-08 06:06:08.573062 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-08 06:06:08.573103 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8eb95c7e-79eb-481c-a9c3-b8351915337f', 'scsi-SQEMU_QEMU_HARDDISK_8eb95c7e-79eb-481c-a9c3-b8351915337f'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '8eb95c7e', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8eb95c7e-79eb-481c-a9c3-b8351915337f-part16', 'scsi-SQEMU_QEMU_HARDDISK_8eb95c7e-79eb-481c-a9c3-b8351915337f-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8eb95c7e-79eb-481c-a9c3-b8351915337f-part14', 'scsi-SQEMU_QEMU_HARDDISK_8eb95c7e-79eb-481c-a9c3-b8351915337f-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8eb95c7e-79eb-481c-a9c3-b8351915337f-part15', 'scsi-SQEMU_QEMU_HARDDISK_8eb95c7e-79eb-481c-a9c3-b8351915337f-part15'], 'uuids': ['5C78-612A'], 
'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8eb95c7e-79eb-481c-a9c3-b8351915337f-part1', 'scsi-SQEMU_QEMU_HARDDISK_8eb95c7e-79eb-481c-a9c3-b8351915337f-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-02-08 06:06:08.573154 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-08 06:06:08.573185 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-08 06:06:08.573202 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-Cd6gt7-yIXa-sPp3-uR45-a3LL-QBdD-hLIQVH', 'dm-uuid-CRYPT-LUKS2-f792a4bc313c4001af1836958a398c99-Cd6gt7-yIXa-sPp3-uR45-a3LL-QBdD-hLIQVH'], 'uuids': [], 'labels': [], 'masters': 
[]}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-02-08 06:06:08.573221 | orchestrator | skipping: [testbed-node-3] 2026-02-08 06:06:08.573239 | orchestrator | 2026-02-08 06:06:08.573257 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-02-08 06:06:08.573274 | orchestrator | Sunday 08 February 2026 06:06:08 +0000 (0:00:00.387) 0:15:06.390 ******* 2026-02-08 06:06:08.573294 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-08 06:06:08.573321 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--edf9913e--48af--595a--836b--515c584cb757-osd--block--edf9913e--48af--595a--836b--515c584cb757', 'dm-uuid-LVM-L5RzS25dNAwEfaxQtT2dCZejWp1FAHTXCd6gt7yIXasPp3uR45a3LLQBdDhLIQVH'], 'uuids': ['f792a4bc-313c-4001-af18-36958a398c99'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'f64e84f9', 'removable': '0', 'support_discard': 
'4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['Cd6gt7-yIXa-sPp3-uR45-a3LL-QBdD-hLIQVH']}}, 'ansible_loop_var': 'item'})  2026-02-08 06:06:08.573334 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1b3b2ead-9b22-4b4d-a30d-f81b3b57c055', 'scsi-SQEMU_QEMU_HARDDISK_1b3b2ead-9b22-4b4d-a30d-f81b3b57c055'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '1b3b2ead', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-08 06:06:08.573365 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-a30P2b-igV6-wMzf-UnqL-xNhF-mfyy-hPRy0q', 'scsi-0QEMU_QEMU_HARDDISK_f936cccd-0c4c-4cd7-b507-1bacbfb024c1', 'scsi-SQEMU_QEMU_HARDDISK_f936cccd-0c4c-4cd7-b507-1bacbfb024c1'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'f936cccd', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--658e9559--2696--538a--a0a4--811fe95d0be4-osd--block--658e9559--2696--538a--a0a4--811fe95d0be4']}}, 'ansible_loop_var': 'item'})  2026-02-08 06:06:08.762372 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-08 06:06:08.762469 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-08 06:06:08.762485 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-08-02-32-43-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 
'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-08 06:06:08.762515 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-08 06:06:08.762528 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-cwGJpv-XAPY-9den-pvHR-scKi-cxVH-CDOUqc', 'dm-uuid-CRYPT-LUKS2-b54d8ff93c624789820c72a73c143a76-cwGJpv-XAPY-9den-pvHR-scKi-cxVH-CDOUqc'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-08 06:06:08.762560 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 
'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-08 06:06:08.762593 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--658e9559--2696--538a--a0a4--811fe95d0be4-osd--block--658e9559--2696--538a--a0a4--811fe95d0be4', 'dm-uuid-LVM-0Xks5ejJzKHIGgn8kHKR683njlf26z09cwGJpvXAPY9denpvHRscKicxVHCDOUqc'], 'uuids': ['b54d8ff9-3c62-4789-820c-72a73c143a76'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'f936cccd', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['cwGJpv-XAPY-9den-pvHR-scKi-cxVH-CDOUqc']}}, 'ansible_loop_var': 'item'})  2026-02-08 06:06:08.762606 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-loHc10-mddm-V6c9-PmX5-I7TN-7pE7-Bu9UYi', 'scsi-0QEMU_QEMU_HARDDISK_f64e84f9-05a0-4abf-b38a-86e604a2541e', 'scsi-SQEMU_QEMU_HARDDISK_f64e84f9-05a0-4abf-b38a-86e604a2541e'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'f64e84f9', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 
'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--edf9913e--48af--595a--836b--515c584cb757-osd--block--edf9913e--48af--595a--836b--515c584cb757']}}, 'ansible_loop_var': 'item'})  2026-02-08 06:06:08.762626 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-08 06:06:08.762647 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8eb95c7e-79eb-481c-a9c3-b8351915337f', 'scsi-SQEMU_QEMU_HARDDISK_8eb95c7e-79eb-481c-a9c3-b8351915337f'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '8eb95c7e', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8eb95c7e-79eb-481c-a9c3-b8351915337f-part16', 'scsi-SQEMU_QEMU_HARDDISK_8eb95c7e-79eb-481c-a9c3-b8351915337f-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_8eb95c7e-79eb-481c-a9c3-b8351915337f-part14', 'scsi-SQEMU_QEMU_HARDDISK_8eb95c7e-79eb-481c-a9c3-b8351915337f-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8eb95c7e-79eb-481c-a9c3-b8351915337f-part15', 'scsi-SQEMU_QEMU_HARDDISK_8eb95c7e-79eb-481c-a9c3-b8351915337f-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8eb95c7e-79eb-481c-a9c3-b8351915337f-part1', 'scsi-SQEMU_QEMU_HARDDISK_8eb95c7e-79eb-481c-a9c3-b8351915337f-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-08 06:06:17.195088 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-08 06:06:17.195175 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-08 06:06:17.195196 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-Cd6gt7-yIXa-sPp3-uR45-a3LL-QBdD-hLIQVH', 'dm-uuid-CRYPT-LUKS2-f792a4bc313c4001af1836958a398c99-Cd6gt7-yIXa-sPp3-uR45-a3LL-QBdD-hLIQVH'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 
'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-08 06:06:17.195204 | orchestrator | skipping: [testbed-node-3] 2026-02-08 06:06:17.195215 | orchestrator | 2026-02-08 06:06:17.195224 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-02-08 06:06:17.195250 | orchestrator | Sunday 08 February 2026 06:06:08 +0000 (0:00:00.414) 0:15:06.804 ******* 2026-02-08 06:06:17.195259 | orchestrator | ok: [testbed-node-3] 2026-02-08 06:06:17.195267 | orchestrator | 2026-02-08 06:06:17.195275 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2026-02-08 06:06:17.195283 | orchestrator | Sunday 08 February 2026 06:06:09 +0000 (0:00:00.784) 0:15:07.589 ******* 2026-02-08 06:06:17.195291 | orchestrator | ok: [testbed-node-3] 2026-02-08 06:06:17.195300 | orchestrator | 2026-02-08 06:06:17.195308 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-02-08 06:06:17.195317 | orchestrator | Sunday 08 February 2026 06:06:09 +0000 (0:00:00.134) 0:15:07.723 ******* 2026-02-08 06:06:17.195324 | orchestrator | ok: [testbed-node-3] 2026-02-08 06:06:17.195329 | orchestrator | 2026-02-08 06:06:17.195334 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-02-08 06:06:17.195339 | orchestrator | Sunday 08 February 2026 06:06:10 +0000 (0:00:00.477) 0:15:08.201 ******* 2026-02-08 06:06:17.195344 | orchestrator | skipping: [testbed-node-3] 2026-02-08 06:06:17.195349 | orchestrator | 2026-02-08 06:06:17.195353 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-02-08 06:06:17.195358 | orchestrator | Sunday 08 February 2026 06:06:10 +0000 (0:00:00.150) 0:15:08.351 ******* 2026-02-08 06:06:17.195363 | orchestrator | skipping: [testbed-node-3] 2026-02-08 
06:06:17.195368 | orchestrator | 2026-02-08 06:06:17.195373 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-02-08 06:06:17.195378 | orchestrator | Sunday 08 February 2026 06:06:10 +0000 (0:00:00.218) 0:15:08.569 ******* 2026-02-08 06:06:17.195383 | orchestrator | skipping: [testbed-node-3] 2026-02-08 06:06:17.195388 | orchestrator | 2026-02-08 06:06:17.195393 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-02-08 06:06:17.195397 | orchestrator | Sunday 08 February 2026 06:06:10 +0000 (0:00:00.161) 0:15:08.731 ******* 2026-02-08 06:06:17.195403 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0) 2026-02-08 06:06:17.195408 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1) 2026-02-08 06:06:17.195413 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2) 2026-02-08 06:06:17.195418 | orchestrator | 2026-02-08 06:06:17.195423 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-02-08 06:06:17.195428 | orchestrator | Sunday 08 February 2026 06:06:11 +0000 (0:00:00.722) 0:15:09.453 ******* 2026-02-08 06:06:17.195433 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2026-02-08 06:06:17.195438 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2026-02-08 06:06:17.195443 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2026-02-08 06:06:17.195448 | orchestrator | skipping: [testbed-node-3] 2026-02-08 06:06:17.195453 | orchestrator | 2026-02-08 06:06:17.195458 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-02-08 06:06:17.195463 | orchestrator | Sunday 08 February 2026 06:06:11 +0000 (0:00:00.176) 0:15:09.630 ******* 2026-02-08 06:06:17.195480 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3 2026-02-08 06:06:17.195486 | 
orchestrator | 2026-02-08 06:06:17.195492 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-02-08 06:06:17.195498 | orchestrator | Sunday 08 February 2026 06:06:11 +0000 (0:00:00.242) 0:15:09.872 ******* 2026-02-08 06:06:17.195503 | orchestrator | skipping: [testbed-node-3] 2026-02-08 06:06:17.195508 | orchestrator | 2026-02-08 06:06:17.195513 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-02-08 06:06:17.195518 | orchestrator | Sunday 08 February 2026 06:06:11 +0000 (0:00:00.126) 0:15:09.999 ******* 2026-02-08 06:06:17.195523 | orchestrator | skipping: [testbed-node-3] 2026-02-08 06:06:17.195527 | orchestrator | 2026-02-08 06:06:17.195532 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-02-08 06:06:17.195542 | orchestrator | Sunday 08 February 2026 06:06:12 +0000 (0:00:00.146) 0:15:10.145 ******* 2026-02-08 06:06:17.195547 | orchestrator | skipping: [testbed-node-3] 2026-02-08 06:06:17.195552 | orchestrator | 2026-02-08 06:06:17.195557 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-02-08 06:06:17.195562 | orchestrator | Sunday 08 February 2026 06:06:12 +0000 (0:00:00.139) 0:15:10.285 ******* 2026-02-08 06:06:17.195568 | orchestrator | ok: [testbed-node-3] 2026-02-08 06:06:17.195576 | orchestrator | 2026-02-08 06:06:17.195584 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-02-08 06:06:17.195591 | orchestrator | Sunday 08 February 2026 06:06:12 +0000 (0:00:00.540) 0:15:10.825 ******* 2026-02-08 06:06:17.195599 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-02-08 06:06:17.195607 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-02-08 06:06:17.195614 | orchestrator | skipping: [testbed-node-3] 
=> (item=testbed-node-5)  2026-02-08 06:06:17.195623 | orchestrator | skipping: [testbed-node-3] 2026-02-08 06:06:17.195631 | orchestrator | 2026-02-08 06:06:17.195639 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-02-08 06:06:17.195647 | orchestrator | Sunday 08 February 2026 06:06:13 +0000 (0:00:00.397) 0:15:11.222 ******* 2026-02-08 06:06:17.195656 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-02-08 06:06:17.195669 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-02-08 06:06:17.195677 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-02-08 06:06:17.195685 | orchestrator | skipping: [testbed-node-3] 2026-02-08 06:06:17.195693 | orchestrator | 2026-02-08 06:06:17.195702 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-02-08 06:06:17.195710 | orchestrator | Sunday 08 February 2026 06:06:13 +0000 (0:00:00.402) 0:15:11.625 ******* 2026-02-08 06:06:17.195719 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-02-08 06:06:17.195728 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-02-08 06:06:17.195736 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-02-08 06:06:17.195764 | orchestrator | skipping: [testbed-node-3] 2026-02-08 06:06:17.195771 | orchestrator | 2026-02-08 06:06:17.195777 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-02-08 06:06:17.195785 | orchestrator | Sunday 08 February 2026 06:06:13 +0000 (0:00:00.401) 0:15:12.027 ******* 2026-02-08 06:06:17.195793 | orchestrator | ok: [testbed-node-3] 2026-02-08 06:06:17.195801 | orchestrator | 2026-02-08 06:06:17.195809 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-02-08 06:06:17.195817 | orchestrator | Sunday 08 February 2026 06:06:14 +0000 
(0:00:00.169) 0:15:12.196 ******* 2026-02-08 06:06:17.195825 | orchestrator | ok: [testbed-node-3] => (item=0) 2026-02-08 06:06:17.195833 | orchestrator | 2026-02-08 06:06:17.195841 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-02-08 06:06:17.195848 | orchestrator | Sunday 08 February 2026 06:06:14 +0000 (0:00:00.391) 0:15:12.588 ******* 2026-02-08 06:06:17.195857 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-08 06:06:17.195866 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-08 06:06:17.195875 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-08 06:06:17.195884 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2026-02-08 06:06:17.195892 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-02-08 06:06:17.195900 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-02-08 06:06:17.195908 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-02-08 06:06:17.195917 | orchestrator | 2026-02-08 06:06:17.195925 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-02-08 06:06:17.195942 | orchestrator | Sunday 08 February 2026 06:06:15 +0000 (0:00:00.905) 0:15:13.493 ******* 2026-02-08 06:06:17.195951 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-08 06:06:17.195958 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-08 06:06:17.195967 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-08 06:06:17.195976 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2026-02-08 06:06:17.195985 | 
orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-02-08 06:06:17.195994 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-02-08 06:06:17.196002 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-02-08 06:06:17.196010 | orchestrator | 2026-02-08 06:06:17.196028 | orchestrator | TASK [Get osd numbers - non container] ***************************************** 2026-02-08 06:06:32.508325 | orchestrator | Sunday 08 February 2026 06:06:17 +0000 (0:00:01.739) 0:15:15.232 ******* 2026-02-08 06:06:32.508458 | orchestrator | ok: [testbed-node-3] 2026-02-08 06:06:32.508478 | orchestrator | 2026-02-08 06:06:32.508492 | orchestrator | TASK [Set num_osds] ************************************************************ 2026-02-08 06:06:32.508504 | orchestrator | Sunday 08 February 2026 06:06:17 +0000 (0:00:00.476) 0:15:15.709 ******* 2026-02-08 06:06:32.508515 | orchestrator | ok: [testbed-node-3] 2026-02-08 06:06:32.508526 | orchestrator | 2026-02-08 06:06:32.508537 | orchestrator | TASK [Set_fact container_exec_cmd_osd] ***************************************** 2026-02-08 06:06:32.508548 | orchestrator | Sunday 08 February 2026 06:06:17 +0000 (0:00:00.159) 0:15:15.868 ******* 2026-02-08 06:06:32.508559 | orchestrator | ok: [testbed-node-3] 2026-02-08 06:06:32.508570 | orchestrator | 2026-02-08 06:06:32.508581 | orchestrator | TASK [Stop ceph osd] *********************************************************** 2026-02-08 06:06:32.508592 | orchestrator | Sunday 08 February 2026 06:06:18 +0000 (0:00:00.272) 0:15:16.141 ******* 2026-02-08 06:06:32.508603 | orchestrator | changed: [testbed-node-3] => (item=0) 2026-02-08 06:06:32.508616 | orchestrator | changed: [testbed-node-3] => (item=3) 2026-02-08 06:06:32.508627 | orchestrator | 2026-02-08 06:06:32.508638 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] 
************************ 2026-02-08 06:06:32.508649 | orchestrator | Sunday 08 February 2026 06:06:21 +0000 (0:00:03.051) 0:15:19.192 ******* 2026-02-08 06:06:32.508659 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3 2026-02-08 06:06:32.508670 | orchestrator | 2026-02-08 06:06:32.508682 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-02-08 06:06:32.508693 | orchestrator | Sunday 08 February 2026 06:06:21 +0000 (0:00:00.534) 0:15:19.727 ******* 2026-02-08 06:06:32.508704 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3 2026-02-08 06:06:32.508715 | orchestrator | 2026-02-08 06:06:32.508725 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-02-08 06:06:32.508736 | orchestrator | Sunday 08 February 2026 06:06:21 +0000 (0:00:00.231) 0:15:19.959 ******* 2026-02-08 06:06:32.508747 | orchestrator | skipping: [testbed-node-3] 2026-02-08 06:06:32.508798 | orchestrator | 2026-02-08 06:06:32.508810 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-02-08 06:06:32.508839 | orchestrator | Sunday 08 February 2026 06:06:22 +0000 (0:00:00.145) 0:15:20.104 ******* 2026-02-08 06:06:32.508852 | orchestrator | ok: [testbed-node-3] 2026-02-08 06:06:32.508866 | orchestrator | 2026-02-08 06:06:32.508881 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-02-08 06:06:32.508894 | orchestrator | Sunday 08 February 2026 06:06:22 +0000 (0:00:00.517) 0:15:20.622 ******* 2026-02-08 06:06:32.508905 | orchestrator | ok: [testbed-node-3] 2026-02-08 06:06:32.508916 | orchestrator | 2026-02-08 06:06:32.508927 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-02-08 06:06:32.508961 | orchestrator | Sunday 08 February 2026 
06:06:23 +0000 (0:00:00.568) 0:15:21.191 ******* 2026-02-08 06:06:32.508972 | orchestrator | ok: [testbed-node-3] 2026-02-08 06:06:32.508983 | orchestrator | 2026-02-08 06:06:32.508994 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-02-08 06:06:32.509005 | orchestrator | Sunday 08 February 2026 06:06:23 +0000 (0:00:00.552) 0:15:21.743 ******* 2026-02-08 06:06:32.509016 | orchestrator | skipping: [testbed-node-3] 2026-02-08 06:06:32.509027 | orchestrator | 2026-02-08 06:06:32.509039 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-02-08 06:06:32.509058 | orchestrator | Sunday 08 February 2026 06:06:23 +0000 (0:00:00.126) 0:15:21.869 ******* 2026-02-08 06:06:32.509075 | orchestrator | skipping: [testbed-node-3] 2026-02-08 06:06:32.509093 | orchestrator | 2026-02-08 06:06:32.509111 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-02-08 06:06:32.509128 | orchestrator | Sunday 08 February 2026 06:06:23 +0000 (0:00:00.134) 0:15:22.004 ******* 2026-02-08 06:06:32.509146 | orchestrator | skipping: [testbed-node-3] 2026-02-08 06:06:32.509163 | orchestrator | 2026-02-08 06:06:32.509181 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-02-08 06:06:32.509199 | orchestrator | Sunday 08 February 2026 06:06:24 +0000 (0:00:00.162) 0:15:22.166 ******* 2026-02-08 06:06:32.509218 | orchestrator | ok: [testbed-node-3] 2026-02-08 06:06:32.509236 | orchestrator | 2026-02-08 06:06:32.509256 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-02-08 06:06:32.509273 | orchestrator | Sunday 08 February 2026 06:06:24 +0000 (0:00:00.554) 0:15:22.721 ******* 2026-02-08 06:06:32.509292 | orchestrator | ok: [testbed-node-3] 2026-02-08 06:06:32.509310 | orchestrator | 2026-02-08 06:06:32.509328 | orchestrator | TASK [ceph-handler : 
Include check_socket_non_container.yml] ******************* 2026-02-08 06:06:32.509340 | orchestrator | Sunday 08 February 2026 06:06:25 +0000 (0:00:00.557) 0:15:23.278 ******* 2026-02-08 06:06:32.509350 | orchestrator | skipping: [testbed-node-3] 2026-02-08 06:06:32.509361 | orchestrator | 2026-02-08 06:06:32.509372 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-02-08 06:06:32.509383 | orchestrator | Sunday 08 February 2026 06:06:25 +0000 (0:00:00.476) 0:15:23.754 ******* 2026-02-08 06:06:32.509393 | orchestrator | skipping: [testbed-node-3] 2026-02-08 06:06:32.509404 | orchestrator | 2026-02-08 06:06:32.509415 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-02-08 06:06:32.509426 | orchestrator | Sunday 08 February 2026 06:06:25 +0000 (0:00:00.145) 0:15:23.900 ******* 2026-02-08 06:06:32.509437 | orchestrator | ok: [testbed-node-3] 2026-02-08 06:06:32.509448 | orchestrator | 2026-02-08 06:06:32.509458 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-02-08 06:06:32.509469 | orchestrator | Sunday 08 February 2026 06:06:26 +0000 (0:00:00.173) 0:15:24.073 ******* 2026-02-08 06:06:32.509480 | orchestrator | ok: [testbed-node-3] 2026-02-08 06:06:32.509490 | orchestrator | 2026-02-08 06:06:32.509501 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-02-08 06:06:32.509512 | orchestrator | Sunday 08 February 2026 06:06:26 +0000 (0:00:00.175) 0:15:24.249 ******* 2026-02-08 06:06:32.509523 | orchestrator | ok: [testbed-node-3] 2026-02-08 06:06:32.509533 | orchestrator | 2026-02-08 06:06:32.509565 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-02-08 06:06:32.509577 | orchestrator | Sunday 08 February 2026 06:06:26 +0000 (0:00:00.170) 0:15:24.419 ******* 2026-02-08 06:06:32.509587 | orchestrator | skipping: 
[testbed-node-3] 2026-02-08 06:06:32.509598 | orchestrator | 2026-02-08 06:06:32.509609 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-02-08 06:06:32.509620 | orchestrator | Sunday 08 February 2026 06:06:26 +0000 (0:00:00.178) 0:15:24.598 ******* 2026-02-08 06:06:32.509631 | orchestrator | skipping: [testbed-node-3] 2026-02-08 06:06:32.509642 | orchestrator | 2026-02-08 06:06:32.509652 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-02-08 06:06:32.509674 | orchestrator | Sunday 08 February 2026 06:06:26 +0000 (0:00:00.160) 0:15:24.758 ******* 2026-02-08 06:06:32.509685 | orchestrator | skipping: [testbed-node-3] 2026-02-08 06:06:32.509696 | orchestrator | 2026-02-08 06:06:32.509706 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-02-08 06:06:32.509717 | orchestrator | Sunday 08 February 2026 06:06:26 +0000 (0:00:00.163) 0:15:24.921 ******* 2026-02-08 06:06:32.509728 | orchestrator | ok: [testbed-node-3] 2026-02-08 06:06:32.509739 | orchestrator | 2026-02-08 06:06:32.509807 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-02-08 06:06:32.509829 | orchestrator | Sunday 08 February 2026 06:06:27 +0000 (0:00:00.179) 0:15:25.101 ******* 2026-02-08 06:06:32.509842 | orchestrator | ok: [testbed-node-3] 2026-02-08 06:06:32.509853 | orchestrator | 2026-02-08 06:06:32.509863 | orchestrator | TASK [ceph-common : Include configure_repository.yml] ************************** 2026-02-08 06:06:32.509874 | orchestrator | Sunday 08 February 2026 06:06:27 +0000 (0:00:00.236) 0:15:25.337 ******* 2026-02-08 06:06:32.509885 | orchestrator | skipping: [testbed-node-3] 2026-02-08 06:06:32.509896 | orchestrator | 2026-02-08 06:06:32.509907 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] ************** 2026-02-08 06:06:32.509917 | 
orchestrator | Sunday 08 February 2026 06:06:27 +0000 (0:00:00.134) 0:15:25.472 ******* 2026-02-08 06:06:32.509928 | orchestrator | skipping: [testbed-node-3] 2026-02-08 06:06:32.509939 | orchestrator | 2026-02-08 06:06:32.509950 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] **************** 2026-02-08 06:06:32.509961 | orchestrator | Sunday 08 February 2026 06:06:27 +0000 (0:00:00.126) 0:15:25.598 ******* 2026-02-08 06:06:32.509971 | orchestrator | skipping: [testbed-node-3] 2026-02-08 06:06:32.509982 | orchestrator | 2026-02-08 06:06:32.510000 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ******************** 2026-02-08 06:06:32.510011 | orchestrator | Sunday 08 February 2026 06:06:27 +0000 (0:00:00.170) 0:15:25.769 ******* 2026-02-08 06:06:32.510076 | orchestrator | skipping: [testbed-node-3] 2026-02-08 06:06:32.510087 | orchestrator | 2026-02-08 06:06:32.510098 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] *************** 2026-02-08 06:06:32.510109 | orchestrator | Sunday 08 February 2026 06:06:28 +0000 (0:00:00.681) 0:15:26.451 ******* 2026-02-08 06:06:32.510120 | orchestrator | skipping: [testbed-node-3] 2026-02-08 06:06:32.510131 | orchestrator | 2026-02-08 06:06:32.510142 | orchestrator | TASK [ceph-common : Get ceph version] ****************************************** 2026-02-08 06:06:32.510153 | orchestrator | Sunday 08 February 2026 06:06:28 +0000 (0:00:00.138) 0:15:26.589 ******* 2026-02-08 06:06:32.510164 | orchestrator | skipping: [testbed-node-3] 2026-02-08 06:06:32.510175 | orchestrator | 2026-02-08 06:06:32.510195 | orchestrator | TASK [ceph-common : Set_fact ceph_version] ************************************* 2026-02-08 06:06:32.510206 | orchestrator | Sunday 08 February 2026 06:06:28 +0000 (0:00:00.145) 0:15:26.735 ******* 2026-02-08 06:06:32.510220 | orchestrator | skipping: [testbed-node-3] 2026-02-08 06:06:32.510239 | orchestrator | 2026-02-08 
06:06:32.510257 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] *** 2026-02-08 06:06:32.510276 | orchestrator | Sunday 08 February 2026 06:06:28 +0000 (0:00:00.161) 0:15:26.896 ******* 2026-02-08 06:06:32.510294 | orchestrator | skipping: [testbed-node-3] 2026-02-08 06:06:32.510311 | orchestrator | 2026-02-08 06:06:32.510329 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] ************************* 2026-02-08 06:06:32.510346 | orchestrator | Sunday 08 February 2026 06:06:28 +0000 (0:00:00.138) 0:15:27.035 ******* 2026-02-08 06:06:32.510362 | orchestrator | skipping: [testbed-node-3] 2026-02-08 06:06:32.510380 | orchestrator | 2026-02-08 06:06:32.510396 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************ 2026-02-08 06:06:32.510413 | orchestrator | Sunday 08 February 2026 06:06:29 +0000 (0:00:00.175) 0:15:27.210 ******* 2026-02-08 06:06:32.510430 | orchestrator | skipping: [testbed-node-3] 2026-02-08 06:06:32.510446 | orchestrator | 2026-02-08 06:06:32.510462 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ******************** 2026-02-08 06:06:32.510491 | orchestrator | Sunday 08 February 2026 06:06:29 +0000 (0:00:00.186) 0:15:27.396 ******* 2026-02-08 06:06:32.510509 | orchestrator | skipping: [testbed-node-3] 2026-02-08 06:06:32.510525 | orchestrator | 2026-02-08 06:06:32.510542 | orchestrator | TASK [ceph-common : Include selinux.yml] *************************************** 2026-02-08 06:06:32.510559 | orchestrator | Sunday 08 February 2026 06:06:29 +0000 (0:00:00.165) 0:15:27.562 ******* 2026-02-08 06:06:32.510576 | orchestrator | skipping: [testbed-node-3] 2026-02-08 06:06:32.510594 | orchestrator | 2026-02-08 06:06:32.510609 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] *************** 2026-02-08 06:06:32.510625 | orchestrator | Sunday 08 February 2026 06:06:29 +0000 
(0:00:00.209) 0:15:27.772 ******* 2026-02-08 06:06:32.510642 | orchestrator | ok: [testbed-node-3] 2026-02-08 06:06:32.510659 | orchestrator | 2026-02-08 06:06:32.510677 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ****************************** 2026-02-08 06:06:32.510695 | orchestrator | Sunday 08 February 2026 06:06:30 +0000 (0:00:00.965) 0:15:28.738 ******* 2026-02-08 06:06:32.510712 | orchestrator | ok: [testbed-node-3] 2026-02-08 06:06:32.510730 | orchestrator | 2026-02-08 06:06:32.510770 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] *********************** 2026-02-08 06:06:32.510788 | orchestrator | Sunday 08 February 2026 06:06:31 +0000 (0:00:01.260) 0:15:29.999 ******* 2026-02-08 06:06:32.510804 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-3 2026-02-08 06:06:32.510823 | orchestrator | 2026-02-08 06:06:32.510858 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************ 2026-02-08 06:06:48.282122 | orchestrator | Sunday 08 February 2026 06:06:32 +0000 (0:00:00.544) 0:15:30.543 ******* 2026-02-08 06:06:48.282220 | orchestrator | skipping: [testbed-node-3] 2026-02-08 06:06:48.282231 | orchestrator | 2026-02-08 06:06:48.282240 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] **************** 2026-02-08 06:06:48.282247 | orchestrator | Sunday 08 February 2026 06:06:32 +0000 (0:00:00.161) 0:15:30.705 ******* 2026-02-08 06:06:48.282253 | orchestrator | skipping: [testbed-node-3] 2026-02-08 06:06:48.282259 | orchestrator | 2026-02-08 06:06:48.282265 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] ************************** 2026-02-08 06:06:48.282272 | orchestrator | Sunday 08 February 2026 06:06:32 +0000 (0:00:00.146) 0:15:30.852 ******* 2026-02-08 06:06:48.282278 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-02-08 
06:06:48.282284 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-02-08 06:06:48.282291 | orchestrator | 2026-02-08 06:06:48.282297 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ******************** 2026-02-08 06:06:48.282303 | orchestrator | Sunday 08 February 2026 06:06:33 +0000 (0:00:00.799) 0:15:31.651 ******* 2026-02-08 06:06:48.282308 | orchestrator | ok: [testbed-node-3] 2026-02-08 06:06:48.282315 | orchestrator | 2026-02-08 06:06:48.282320 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************ 2026-02-08 06:06:48.282326 | orchestrator | Sunday 08 February 2026 06:06:34 +0000 (0:00:00.492) 0:15:32.144 ******* 2026-02-08 06:06:48.282332 | orchestrator | skipping: [testbed-node-3] 2026-02-08 06:06:48.282338 | orchestrator | 2026-02-08 06:06:48.282344 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ******************** 2026-02-08 06:06:48.282350 | orchestrator | Sunday 08 February 2026 06:06:34 +0000 (0:00:00.161) 0:15:32.305 ******* 2026-02-08 06:06:48.282356 | orchestrator | skipping: [testbed-node-3] 2026-02-08 06:06:48.282362 | orchestrator | 2026-02-08 06:06:48.282368 | orchestrator | TASK [ceph-container-common : Include registry.yml] **************************** 2026-02-08 06:06:48.282374 | orchestrator | Sunday 08 February 2026 06:06:34 +0000 (0:00:00.166) 0:15:32.472 ******* 2026-02-08 06:06:48.282381 | orchestrator | skipping: [testbed-node-3] 2026-02-08 06:06:48.282386 | orchestrator | 2026-02-08 06:06:48.282392 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] ************************* 2026-02-08 06:06:48.282412 | orchestrator | Sunday 08 February 2026 06:06:34 +0000 (0:00:00.135) 0:15:32.607 ******* 2026-02-08 06:06:48.282435 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-3 2026-02-08 06:06:48.282442 | orchestrator | 
2026-02-08 06:06:48.282447 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ******************** 2026-02-08 06:06:48.282453 | orchestrator | Sunday 08 February 2026 06:06:34 +0000 (0:00:00.252) 0:15:32.859 ******* 2026-02-08 06:06:48.282458 | orchestrator | ok: [testbed-node-3] 2026-02-08 06:06:48.282464 | orchestrator | 2026-02-08 06:06:48.282469 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] *** 2026-02-08 06:06:48.282475 | orchestrator | Sunday 08 February 2026 06:06:35 +0000 (0:00:00.732) 0:15:33.592 ******* 2026-02-08 06:06:48.282481 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-02-08 06:06:48.282487 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/prometheus:v2.7.2)  2026-02-08 06:06:48.282492 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/grafana/grafana:6.7.4)  2026-02-08 06:06:48.282498 | orchestrator | skipping: [testbed-node-3] 2026-02-08 06:06:48.282504 | orchestrator | 2026-02-08 06:06:48.282510 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] *********** 2026-02-08 06:06:48.282516 | orchestrator | Sunday 08 February 2026 06:06:35 +0000 (0:00:00.161) 0:15:33.753 ******* 2026-02-08 06:06:48.282522 | orchestrator | skipping: [testbed-node-3] 2026-02-08 06:06:48.282527 | orchestrator | 2026-02-08 06:06:48.282533 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] ********************* 2026-02-08 06:06:48.282539 | orchestrator | Sunday 08 February 2026 06:06:35 +0000 (0:00:00.152) 0:15:33.906 ******* 2026-02-08 06:06:48.282545 | orchestrator | skipping: [testbed-node-3] 2026-02-08 06:06:48.282551 | orchestrator | 2026-02-08 06:06:48.282557 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************ 2026-02-08 06:06:48.282563 | orchestrator | Sunday 08 February 2026 06:06:36 +0000 
(0:00:00.502) 0:15:34.408 ******* 2026-02-08 06:06:48.282568 | orchestrator | skipping: [testbed-node-3] 2026-02-08 06:06:48.282574 | orchestrator | 2026-02-08 06:06:48.282579 | orchestrator | TASK [ceph-container-common : Load ceph dev image] ***************************** 2026-02-08 06:06:48.282585 | orchestrator | Sunday 08 February 2026 06:06:36 +0000 (0:00:00.173) 0:15:34.581 ******* 2026-02-08 06:06:48.282590 | orchestrator | skipping: [testbed-node-3] 2026-02-08 06:06:48.282596 | orchestrator | 2026-02-08 06:06:48.282601 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ****************** 2026-02-08 06:06:48.282607 | orchestrator | Sunday 08 February 2026 06:06:36 +0000 (0:00:00.172) 0:15:34.754 ******* 2026-02-08 06:06:48.282612 | orchestrator | skipping: [testbed-node-3] 2026-02-08 06:06:48.282618 | orchestrator | 2026-02-08 06:06:48.282624 | orchestrator | TASK [ceph-container-common : Get ceph version] ******************************** 2026-02-08 06:06:48.282630 | orchestrator | Sunday 08 February 2026 06:06:36 +0000 (0:00:00.204) 0:15:34.959 ******* 2026-02-08 06:06:48.282636 | orchestrator | ok: [testbed-node-3] 2026-02-08 06:06:48.282642 | orchestrator | 2026-02-08 06:06:48.282648 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] *** 2026-02-08 06:06:48.282654 | orchestrator | Sunday 08 February 2026 06:06:38 +0000 (0:00:01.503) 0:15:36.462 ******* 2026-02-08 06:06:48.282661 | orchestrator | ok: [testbed-node-3] 2026-02-08 06:06:48.282667 | orchestrator | 2026-02-08 06:06:48.282674 | orchestrator | TASK [ceph-container-common : Include release.yml] ***************************** 2026-02-08 06:06:48.282679 | orchestrator | Sunday 08 February 2026 06:06:38 +0000 (0:00:00.188) 0:15:36.650 ******* 2026-02-08 06:06:48.282685 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-3 2026-02-08 06:06:48.282691 | orchestrator | 2026-02-08 
06:06:48.282713 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] ********************* 2026-02-08 06:06:48.282719 | orchestrator | Sunday 08 February 2026 06:06:38 +0000 (0:00:00.258) 0:15:36.908 ******* 2026-02-08 06:06:48.282725 | orchestrator | skipping: [testbed-node-3] 2026-02-08 06:06:48.282739 | orchestrator | 2026-02-08 06:06:48.282745 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ******************** 2026-02-08 06:06:48.282788 | orchestrator | Sunday 08 February 2026 06:06:39 +0000 (0:00:00.175) 0:15:37.084 ******* 2026-02-08 06:06:48.282796 | orchestrator | skipping: [testbed-node-3] 2026-02-08 06:06:48.282803 | orchestrator | 2026-02-08 06:06:48.282809 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ****************** 2026-02-08 06:06:48.282815 | orchestrator | Sunday 08 February 2026 06:06:39 +0000 (0:00:00.137) 0:15:37.222 ******* 2026-02-08 06:06:48.282821 | orchestrator | skipping: [testbed-node-3] 2026-02-08 06:06:48.282827 | orchestrator | 2026-02-08 06:06:48.282833 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] ********************* 2026-02-08 06:06:48.282839 | orchestrator | Sunday 08 February 2026 06:06:39 +0000 (0:00:00.163) 0:15:37.386 ******* 2026-02-08 06:06:48.282845 | orchestrator | skipping: [testbed-node-3] 2026-02-08 06:06:48.282851 | orchestrator | 2026-02-08 06:06:48.282857 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ****************** 2026-02-08 06:06:48.282863 | orchestrator | Sunday 08 February 2026 06:06:39 +0000 (0:00:00.143) 0:15:37.530 ******* 2026-02-08 06:06:48.282869 | orchestrator | skipping: [testbed-node-3] 2026-02-08 06:06:48.282875 | orchestrator | 2026-02-08 06:06:48.282880 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] ******************* 2026-02-08 06:06:48.282886 | orchestrator | Sunday 08 February 2026 06:06:39 +0000 (0:00:00.152) 
0:15:37.682 ******* 2026-02-08 06:06:48.282892 | orchestrator | skipping: [testbed-node-3] 2026-02-08 06:06:48.282898 | orchestrator | 2026-02-08 06:06:48.282904 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] ******************* 2026-02-08 06:06:48.282910 | orchestrator | Sunday 08 February 2026 06:06:40 +0000 (0:00:00.479) 0:15:38.161 ******* 2026-02-08 06:06:48.282917 | orchestrator | skipping: [testbed-node-3] 2026-02-08 06:06:48.282922 | orchestrator | 2026-02-08 06:06:48.282928 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ******************** 2026-02-08 06:06:48.282934 | orchestrator | Sunday 08 February 2026 06:06:40 +0000 (0:00:00.140) 0:15:38.302 ******* 2026-02-08 06:06:48.282940 | orchestrator | skipping: [testbed-node-3] 2026-02-08 06:06:48.282946 | orchestrator | 2026-02-08 06:06:48.282958 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] ********************** 2026-02-08 06:06:48.282964 | orchestrator | Sunday 08 February 2026 06:06:40 +0000 (0:00:00.221) 0:15:38.523 ******* 2026-02-08 06:06:48.282970 | orchestrator | ok: [testbed-node-3] 2026-02-08 06:06:48.282977 | orchestrator | 2026-02-08 06:06:48.282983 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] ********************** 2026-02-08 06:06:48.282989 | orchestrator | Sunday 08 February 2026 06:06:40 +0000 (0:00:00.260) 0:15:38.784 ******* 2026-02-08 06:06:48.282996 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-3 2026-02-08 06:06:48.283002 | orchestrator | 2026-02-08 06:06:48.283008 | orchestrator | TASK [ceph-config : Create ceph initial directories] *************************** 2026-02-08 06:06:48.283014 | orchestrator | Sunday 08 February 2026 06:06:40 +0000 (0:00:00.207) 0:15:38.991 ******* 2026-02-08 06:06:48.283020 | orchestrator | ok: [testbed-node-3] => (item=/etc/ceph) 2026-02-08 06:06:48.283027 | orchestrator | ok: 
[testbed-node-3] => (item=/var/lib/ceph/) 2026-02-08 06:06:48.283033 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/mon) 2026-02-08 06:06:48.283039 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/osd) 2026-02-08 06:06:48.283045 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/mds) 2026-02-08 06:06:48.283051 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/tmp) 2026-02-08 06:06:48.283056 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/crash) 2026-02-08 06:06:48.283062 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/radosgw) 2026-02-08 06:06:48.283069 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rgw) 2026-02-08 06:06:48.283074 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mgr) 2026-02-08 06:06:48.283080 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds) 2026-02-08 06:06:48.283091 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd) 2026-02-08 06:06:48.283097 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd) 2026-02-08 06:06:48.283102 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-02-08 06:06:48.283108 | orchestrator | ok: [testbed-node-3] => (item=/var/run/ceph) 2026-02-08 06:06:48.283114 | orchestrator | ok: [testbed-node-3] => (item=/var/log/ceph) 2026-02-08 06:06:48.283119 | orchestrator | 2026-02-08 06:06:48.283125 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************ 2026-02-08 06:06:48.283132 | orchestrator | Sunday 08 February 2026 06:06:46 +0000 (0:00:05.404) 0:15:44.395 ******* 2026-02-08 06:06:48.283138 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-3 2026-02-08 06:06:48.283144 | orchestrator | 2026-02-08 06:06:48.283150 | orchestrator | TASK [ceph-config : Create rados gateway instance directories] 
***************** 2026-02-08 06:06:48.283156 | orchestrator | Sunday 08 February 2026 06:06:46 +0000 (0:00:00.557) 0:15:44.953 ******* 2026-02-08 06:06:48.283161 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-02-08 06:06:48.283168 | orchestrator | 2026-02-08 06:06:48.283174 | orchestrator | TASK [ceph-config : Generate environment file] ********************************* 2026-02-08 06:06:48.283180 | orchestrator | Sunday 08 February 2026 06:06:47 +0000 (0:00:00.483) 0:15:45.437 ******* 2026-02-08 06:06:48.283187 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-02-08 06:06:48.283192 | orchestrator | 2026-02-08 06:06:48.283206 | orchestrator | TASK [ceph-config : Reset num_osds] ******************************************** 2026-02-08 06:07:07.849995 | orchestrator | Sunday 08 February 2026 06:06:48 +0000 (0:00:00.883) 0:15:46.320 ******* 2026-02-08 06:07:07.850208 | orchestrator | skipping: [testbed-node-3] 2026-02-08 06:07:07.850237 | orchestrator | 2026-02-08 06:07:07.850258 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] ********************* 2026-02-08 06:07:07.850278 | orchestrator | Sunday 08 February 2026 06:06:48 +0000 (0:00:00.121) 0:15:46.441 ******* 2026-02-08 06:07:07.850297 | orchestrator | skipping: [testbed-node-3] 2026-02-08 06:07:07.850315 | orchestrator | 2026-02-08 06:07:07.850335 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ****************** 2026-02-08 06:07:07.850354 | orchestrator | Sunday 08 February 2026 06:06:48 +0000 (0:00:00.122) 0:15:46.564 ******* 2026-02-08 06:07:07.850373 | orchestrator | skipping: [testbed-node-3] 2026-02-08 06:07:07.850392 | orchestrator | 2026-02-08 06:07:07.850404 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] ********************************* 
2026-02-08 06:07:07.850416 | orchestrator | Sunday 08 February 2026 06:06:48 +0000 (0:00:00.439) 0:15:47.003 ******* 2026-02-08 06:07:07.850427 | orchestrator | skipping: [testbed-node-3] 2026-02-08 06:07:07.850438 | orchestrator | 2026-02-08 06:07:07.850449 | orchestrator | TASK [ceph-config : Set_fact _devices] ***************************************** 2026-02-08 06:07:07.850461 | orchestrator | Sunday 08 February 2026 06:06:49 +0000 (0:00:00.160) 0:15:47.164 ******* 2026-02-08 06:07:07.850472 | orchestrator | skipping: [testbed-node-3] 2026-02-08 06:07:07.850484 | orchestrator | 2026-02-08 06:07:07.850495 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2026-02-08 06:07:07.850508 | orchestrator | Sunday 08 February 2026 06:06:49 +0000 (0:00:00.139) 0:15:47.303 ******* 2026-02-08 06:07:07.850521 | orchestrator | skipping: [testbed-node-3] 2026-02-08 06:07:07.850536 | orchestrator | 2026-02-08 06:07:07.850550 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2026-02-08 06:07:07.850563 | orchestrator | Sunday 08 February 2026 06:06:49 +0000 (0:00:00.134) 0:15:47.438 ******* 2026-02-08 06:07:07.850576 | orchestrator | skipping: [testbed-node-3] 2026-02-08 06:07:07.850590 | orchestrator | 2026-02-08 06:07:07.850601 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2026-02-08 06:07:07.850638 | orchestrator | Sunday 08 February 2026 06:06:49 +0000 (0:00:00.140) 0:15:47.579 ******* 2026-02-08 06:07:07.850650 | orchestrator | skipping: [testbed-node-3] 2026-02-08 06:07:07.850661 | orchestrator | 2026-02-08 06:07:07.850671 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] *** 2026-02-08 06:07:07.850682 | orchestrator | Sunday 08 February 2026 06:06:49 +0000 (0:00:00.138) 0:15:47.717 ******* 
2026-02-08 06:07:07.850693 | orchestrator | skipping: [testbed-node-3] 2026-02-08 06:07:07.850704 | orchestrator | 2026-02-08 06:07:07.850715 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] ********************* 2026-02-08 06:07:07.850725 | orchestrator | Sunday 08 February 2026 06:06:49 +0000 (0:00:00.141) 0:15:47.859 ******* 2026-02-08 06:07:07.850736 | orchestrator | skipping: [testbed-node-3] 2026-02-08 06:07:07.850747 | orchestrator | 2026-02-08 06:07:07.850803 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] ******************************* 2026-02-08 06:07:07.850818 | orchestrator | Sunday 08 February 2026 06:06:50 +0000 (0:00:00.200) 0:15:48.060 ******* 2026-02-08 06:07:07.850829 | orchestrator | ok: [testbed-node-3] 2026-02-08 06:07:07.850840 | orchestrator | 2026-02-08 06:07:07.850851 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] ************** 2026-02-08 06:07:07.850862 | orchestrator | Sunday 08 February 2026 06:06:50 +0000 (0:00:00.229) 0:15:48.290 ******* 2026-02-08 06:07:07.850873 | orchestrator | changed: [testbed-node-3 -> testbed-node-2(192.168.16.12)] 2026-02-08 06:07:07.850884 | orchestrator | 2026-02-08 06:07:07.850895 | orchestrator | TASK [ceph-config : Render rgw configs] **************************************** 2026-02-08 06:07:07.850906 | orchestrator | Sunday 08 February 2026 06:06:53 +0000 (0:00:03.546) 0:15:51.836 ******* 2026-02-08 06:07:07.850917 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-02-08 06:07:07.850929 | orchestrator | 2026-02-08 06:07:07.850940 | orchestrator | TASK [ceph-config : Set config to cluster] ************************************* 2026-02-08 06:07:07.850951 | orchestrator | Sunday 08 February 2026 06:06:53 +0000 (0:00:00.191) 0:15:52.028 ******* 2026-02-08 06:07:07.850965 | orchestrator | changed: [testbed-node-3 -> 
testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log'}]) 2026-02-08 06:07:07.850979 | orchestrator | changed: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.13:8081'}]) 2026-02-08 06:07:07.850992 | orchestrator | 2026-02-08 06:07:07.851003 | orchestrator | TASK [ceph-config : Set rgw configs to file] *********************************** 2026-02-08 06:07:07.851014 | orchestrator | Sunday 08 February 2026 06:07:00 +0000 (0:00:06.822) 0:15:58.850 ******* 2026-02-08 06:07:07.851074 | orchestrator | skipping: [testbed-node-3] 2026-02-08 06:07:07.851088 | orchestrator | 2026-02-08 06:07:07.851099 | orchestrator | TASK [ceph-config : Create ceph conf directory] ******************************** 2026-02-08 06:07:07.851110 | orchestrator | Sunday 08 February 2026 06:07:00 +0000 (0:00:00.154) 0:15:59.004 ******* 2026-02-08 06:07:07.851120 | orchestrator | skipping: [testbed-node-3] 2026-02-08 06:07:07.851140 | orchestrator | 2026-02-08 06:07:07.851173 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-02-08 06:07:07.851185 | orchestrator | Sunday 08 February 2026 06:07:01 +0000 (0:00:00.476) 0:15:59.481 ******* 2026-02-08 06:07:07.851196 | orchestrator | skipping: [testbed-node-3] 2026-02-08 06:07:07.851207 | orchestrator | 2026-02-08 06:07:07.851219 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-02-08 
06:07:07.851244 | orchestrator | Sunday 08 February 2026 06:07:01 +0000 (0:00:00.178) 0:15:59.660 ******* 2026-02-08 06:07:07.851263 | orchestrator | skipping: [testbed-node-3] 2026-02-08 06:07:07.851281 | orchestrator | 2026-02-08 06:07:07.851298 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-02-08 06:07:07.851316 | orchestrator | Sunday 08 February 2026 06:07:01 +0000 (0:00:00.212) 0:15:59.873 ******* 2026-02-08 06:07:07.851334 | orchestrator | skipping: [testbed-node-3] 2026-02-08 06:07:07.851352 | orchestrator | 2026-02-08 06:07:07.851369 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-02-08 06:07:07.851387 | orchestrator | Sunday 08 February 2026 06:07:01 +0000 (0:00:00.161) 0:16:00.034 ******* 2026-02-08 06:07:07.851404 | orchestrator | ok: [testbed-node-3] 2026-02-08 06:07:07.851423 | orchestrator | 2026-02-08 06:07:07.851442 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-02-08 06:07:07.851458 | orchestrator | Sunday 08 February 2026 06:07:02 +0000 (0:00:00.276) 0:16:00.311 ******* 2026-02-08 06:07:07.851475 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-02-08 06:07:07.851493 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-02-08 06:07:07.851509 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-02-08 06:07:07.851527 | orchestrator | skipping: [testbed-node-3] 2026-02-08 06:07:07.851545 | orchestrator | 2026-02-08 06:07:07.851561 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-02-08 06:07:07.851579 | orchestrator | Sunday 08 February 2026 06:07:02 +0000 (0:00:00.463) 0:16:00.775 ******* 2026-02-08 06:07:07.851597 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-02-08 06:07:07.851614 | orchestrator | skipping: [testbed-node-3] => 
(item=testbed-node-4)  2026-02-08 06:07:07.851632 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-02-08 06:07:07.851681 | orchestrator | skipping: [testbed-node-3] 2026-02-08 06:07:07.851703 | orchestrator | 2026-02-08 06:07:07.851721 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-02-08 06:07:07.851739 | orchestrator | Sunday 08 February 2026 06:07:03 +0000 (0:00:00.415) 0:16:01.190 ******* 2026-02-08 06:07:07.851757 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-02-08 06:07:07.851808 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-02-08 06:07:07.851827 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-02-08 06:07:07.851845 | orchestrator | skipping: [testbed-node-3] 2026-02-08 06:07:07.851864 | orchestrator | 2026-02-08 06:07:07.851884 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-02-08 06:07:07.851901 | orchestrator | Sunday 08 February 2026 06:07:03 +0000 (0:00:00.468) 0:16:01.659 ******* 2026-02-08 06:07:07.851919 | orchestrator | ok: [testbed-node-3] 2026-02-08 06:07:07.851937 | orchestrator | 2026-02-08 06:07:07.851957 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-02-08 06:07:07.851975 | orchestrator | Sunday 08 February 2026 06:07:03 +0000 (0:00:00.174) 0:16:01.833 ******* 2026-02-08 06:07:07.851993 | orchestrator | ok: [testbed-node-3] => (item=0) 2026-02-08 06:07:07.852010 | orchestrator | 2026-02-08 06:07:07.852029 | orchestrator | TASK [ceph-config : Generate Ceph file] **************************************** 2026-02-08 06:07:07.852048 | orchestrator | Sunday 08 February 2026 06:07:04 +0000 (0:00:00.425) 0:16:02.258 ******* 2026-02-08 06:07:07.852065 | orchestrator | changed: [testbed-node-3] 2026-02-08 06:07:07.852083 | orchestrator | 2026-02-08 06:07:07.852100 | orchestrator | 
TASK [ceph-osd : Set_fact add_osd] ********************************************* 2026-02-08 06:07:07.852117 | orchestrator | Sunday 08 February 2026 06:07:05 +0000 (0:00:00.851) 0:16:03.109 ******* 2026-02-08 06:07:07.852135 | orchestrator | ok: [testbed-node-3] 2026-02-08 06:07:07.852152 | orchestrator | 2026-02-08 06:07:07.852169 | orchestrator | TASK [ceph-osd : Set_fact container_exec_cmd] ********************************** 2026-02-08 06:07:07.852188 | orchestrator | Sunday 08 February 2026 06:07:05 +0000 (0:00:00.675) 0:16:03.785 ******* 2026-02-08 06:07:07.852224 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-08 06:07:07.852244 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-08 06:07:07.852264 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-08 06:07:07.852282 | orchestrator | 2026-02-08 06:07:07.852300 | orchestrator | TASK [ceph-osd : Include_tasks system_tuning.yml] ****************************** 2026-02-08 06:07:07.852318 | orchestrator | Sunday 08 February 2026 06:07:06 +0000 (0:00:00.714) 0:16:04.499 ******* 2026-02-08 06:07:07.852336 | orchestrator | included: /ansible/roles/ceph-osd/tasks/system_tuning.yml for testbed-node-3 2026-02-08 06:07:07.852356 | orchestrator | 2026-02-08 06:07:07.852370 | orchestrator | TASK [ceph-osd : Create tmpfiles.d directory] ********************************** 2026-02-08 06:07:07.852389 | orchestrator | Sunday 08 February 2026 06:07:07 +0000 (0:00:00.588) 0:16:05.088 ******* 2026-02-08 06:07:07.852406 | orchestrator | skipping: [testbed-node-3] 2026-02-08 06:07:07.852425 | orchestrator | 2026-02-08 06:07:07.852445 | orchestrator | TASK [ceph-osd : Disable transparent hugepage] ********************************* 2026-02-08 06:07:07.852464 | orchestrator | Sunday 08 February 2026 06:07:07 +0000 (0:00:00.160) 0:16:05.249 ******* 2026-02-08 06:07:07.852483 | 
orchestrator | skipping: [testbed-node-3] 2026-02-08 06:07:07.852501 | orchestrator | 2026-02-08 06:07:07.852520 | orchestrator | TASK [ceph-osd : Get default vm.min_free_kbytes] ******************************* 2026-02-08 06:07:07.852538 | orchestrator | Sunday 08 February 2026 06:07:07 +0000 (0:00:00.138) 0:16:05.387 ******* 2026-02-08 06:07:07.852552 | orchestrator | ok: [testbed-node-3] 2026-02-08 06:07:07.852562 | orchestrator | 2026-02-08 06:07:07.852595 | orchestrator | TASK [ceph-osd : Set_fact vm_min_free_kbytes] ********************************** 2026-02-08 06:07:50.081376 | orchestrator | Sunday 08 February 2026 06:07:07 +0000 (0:00:00.500) 0:16:05.888 ******* 2026-02-08 06:07:50.081505 | orchestrator | ok: [testbed-node-3] 2026-02-08 06:07:50.081524 | orchestrator | 2026-02-08 06:07:50.081537 | orchestrator | TASK [ceph-osd : Apply operating system tuning] ******************************** 2026-02-08 06:07:50.081549 | orchestrator | Sunday 08 February 2026 06:07:08 +0000 (0:00:00.192) 0:16:06.080 ******* 2026-02-08 06:07:50.081561 | orchestrator | ok: [testbed-node-3] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2026-02-08 06:07:50.081573 | orchestrator | ok: [testbed-node-3] => (item={'name': 'fs.file-max', 'value': 26234859}) 2026-02-08 06:07:50.081585 | orchestrator | ok: [testbed-node-3] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2026-02-08 06:07:50.081597 | orchestrator | ok: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 10}) 2026-02-08 06:07:50.081609 | orchestrator | ok: [testbed-node-3] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2026-02-08 06:07:50.081620 | orchestrator | 2026-02-08 06:07:50.081631 | orchestrator | TASK [ceph-osd : Install dependencies] ***************************************** 2026-02-08 06:07:50.081642 | orchestrator | Sunday 08 February 2026 06:07:11 +0000 (0:00:03.018) 0:16:09.099 ******* 2026-02-08 06:07:50.081653 | orchestrator | skipping: [testbed-node-3] 
2026-02-08 06:07:50.081665 | orchestrator | 2026-02-08 06:07:50.081676 | orchestrator | TASK [ceph-osd : Include_tasks common.yml] ************************************* 2026-02-08 06:07:50.081687 | orchestrator | Sunday 08 February 2026 06:07:11 +0000 (0:00:00.133) 0:16:09.233 ******* 2026-02-08 06:07:50.081698 | orchestrator | included: /ansible/roles/ceph-osd/tasks/common.yml for testbed-node-3 2026-02-08 06:07:50.081709 | orchestrator | 2026-02-08 06:07:50.081720 | orchestrator | TASK [ceph-osd : Create bootstrap-osd and osd directories] ********************* 2026-02-08 06:07:50.081731 | orchestrator | Sunday 08 February 2026 06:07:11 +0000 (0:00:00.634) 0:16:09.867 ******* 2026-02-08 06:07:50.081742 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd/) 2026-02-08 06:07:50.081753 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/osd/) 2026-02-08 06:07:50.081764 | orchestrator | 2026-02-08 06:07:50.081857 | orchestrator | TASK [ceph-osd : Get keys from monitors] *************************************** 2026-02-08 06:07:50.081893 | orchestrator | Sunday 08 February 2026 06:07:12 +0000 (0:00:00.814) 0:16:10.682 ******* 2026-02-08 06:07:50.081905 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-08 06:07:50.081916 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-02-08 06:07:50.081930 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2026-02-08 06:07:50.081944 | orchestrator | 2026-02-08 06:07:50.081957 | orchestrator | TASK [ceph-osd : Copy ceph key(s) if needed] *********************************** 2026-02-08 06:07:50.081971 | orchestrator | Sunday 08 February 2026 06:07:15 +0000 (0:00:03.022) 0:16:13.704 ******* 2026-02-08 06:07:50.081984 | orchestrator | ok: [testbed-node-3] => (item=None) 2026-02-08 06:07:50.081997 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-02-08 06:07:50.082010 | orchestrator | ok: [testbed-node-3] 
2026-02-08 06:07:50.082077 | orchestrator | 2026-02-08 06:07:50.082091 | orchestrator | TASK [ceph-osd : Set noup flag] ************************************************ 2026-02-08 06:07:50.082105 | orchestrator | Sunday 08 February 2026 06:07:16 +0000 (0:00:00.988) 0:16:14.692 ******* 2026-02-08 06:07:50.082118 | orchestrator | skipping: [testbed-node-3] 2026-02-08 06:07:50.082131 | orchestrator | 2026-02-08 06:07:50.082142 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm.yml] ****************************** 2026-02-08 06:07:50.082153 | orchestrator | Sunday 08 February 2026 06:07:16 +0000 (0:00:00.223) 0:16:14.916 ******* 2026-02-08 06:07:50.082164 | orchestrator | skipping: [testbed-node-3] 2026-02-08 06:07:50.082176 | orchestrator | 2026-02-08 06:07:50.082187 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm-batch.yml] ************************ 2026-02-08 06:07:50.082198 | orchestrator | Sunday 08 February 2026 06:07:17 +0000 (0:00:00.141) 0:16:15.057 ******* 2026-02-08 06:07:50.082209 | orchestrator | skipping: [testbed-node-3] 2026-02-08 06:07:50.082219 | orchestrator | 2026-02-08 06:07:50.082230 | orchestrator | TASK [ceph-osd : Include_tasks start_osds.yml] ********************************* 2026-02-08 06:07:50.082241 | orchestrator | Sunday 08 February 2026 06:07:17 +0000 (0:00:00.138) 0:16:15.196 ******* 2026-02-08 06:07:50.082252 | orchestrator | included: /ansible/roles/ceph-osd/tasks/start_osds.yml for testbed-node-3 2026-02-08 06:07:50.082263 | orchestrator | 2026-02-08 06:07:50.082274 | orchestrator | TASK [ceph-osd : Get osd ids] ************************************************** 2026-02-08 06:07:50.082285 | orchestrator | Sunday 08 February 2026 06:07:17 +0000 (0:00:00.617) 0:16:15.813 ******* 2026-02-08 06:07:50.082296 | orchestrator | ok: [testbed-node-3] 2026-02-08 06:07:50.082307 | orchestrator | 2026-02-08 06:07:50.082318 | orchestrator | TASK [ceph-osd : Collect osd ids] ********************************************** 
2026-02-08 06:07:50.082329 | orchestrator | Sunday 08 February 2026 06:07:18 +0000 (0:00:00.476) 0:16:16.289 ******* 2026-02-08 06:07:50.082340 | orchestrator | ok: [testbed-node-3] 2026-02-08 06:07:50.082351 | orchestrator | 2026-02-08 06:07:50.082361 | orchestrator | TASK [ceph-osd : Include_tasks systemd.yml] ************************************ 2026-02-08 06:07:50.082372 | orchestrator | Sunday 08 February 2026 06:07:20 +0000 (0:00:02.452) 0:16:18.742 ******* 2026-02-08 06:07:50.082383 | orchestrator | included: /ansible/roles/ceph-osd/tasks/systemd.yml for testbed-node-3 2026-02-08 06:07:50.082394 | orchestrator | 2026-02-08 06:07:50.082405 | orchestrator | TASK [ceph-osd : Generate systemd unit file] *********************************** 2026-02-08 06:07:50.082416 | orchestrator | Sunday 08 February 2026 06:07:21 +0000 (0:00:00.603) 0:16:19.345 ******* 2026-02-08 06:07:50.082427 | orchestrator | ok: [testbed-node-3] 2026-02-08 06:07:50.082438 | orchestrator | 2026-02-08 06:07:50.082449 | orchestrator | TASK [ceph-osd : Generate systemd ceph-osd target file] ************************ 2026-02-08 06:07:50.082460 | orchestrator | Sunday 08 February 2026 06:07:22 +0000 (0:00:01.040) 0:16:20.385 ******* 2026-02-08 06:07:50.082471 | orchestrator | ok: [testbed-node-3] 2026-02-08 06:07:50.082482 | orchestrator | 2026-02-08 06:07:50.082493 | orchestrator | TASK [ceph-osd : Enable ceph-osd.target] *************************************** 2026-02-08 06:07:50.082522 | orchestrator | Sunday 08 February 2026 06:07:23 +0000 (0:00:00.938) 0:16:21.324 ******* 2026-02-08 06:07:50.082542 | orchestrator | ok: [testbed-node-3] 2026-02-08 06:07:50.082553 | orchestrator | 2026-02-08 06:07:50.082564 | orchestrator | TASK [ceph-osd : Ensure systemd service override directory exists] ************* 2026-02-08 06:07:50.082575 | orchestrator | Sunday 08 February 2026 06:07:24 +0000 (0:00:01.533) 0:16:22.858 ******* 2026-02-08 06:07:50.082586 | orchestrator | skipping: [testbed-node-3] 
2026-02-08 06:07:50.082597 | orchestrator |
2026-02-08 06:07:50.082608 | orchestrator | TASK [ceph-osd : Add ceph-osd systemd service overrides] ***********************
2026-02-08 06:07:50.082619 | orchestrator | Sunday 08 February 2026 06:07:24 +0000 (0:00:00.183) 0:16:23.042 *******
2026-02-08 06:07:50.082630 | orchestrator | skipping: [testbed-node-3]
2026-02-08 06:07:50.082642 | orchestrator |
2026-02-08 06:07:50.082652 | orchestrator | TASK [ceph-osd : Ensure /var/lib/ceph/osd/- is present] *********
2026-02-08 06:07:50.082663 | orchestrator | Sunday 08 February 2026 06:07:25 +0000 (0:00:00.158) 0:16:23.200 *******
2026-02-08 06:07:50.082674 | orchestrator | ok: [testbed-node-3] => (item=3)
2026-02-08 06:07:50.082686 | orchestrator | ok: [testbed-node-3] => (item=0)
2026-02-08 06:07:50.082697 | orchestrator |
2026-02-08 06:07:50.082708 | orchestrator | TASK [ceph-osd : Write run file in /var/lib/ceph/osd/xxxx/run] *****************
2026-02-08 06:07:50.082719 | orchestrator | Sunday 08 February 2026 06:07:26 +0000 (0:00:00.850) 0:16:24.051 *******
2026-02-08 06:07:50.082729 | orchestrator | ok: [testbed-node-3] => (item=3)
2026-02-08 06:07:50.082741 | orchestrator | ok: [testbed-node-3] => (item=0)
2026-02-08 06:07:50.082751 | orchestrator |
2026-02-08 06:07:50.082763 | orchestrator | TASK [ceph-osd : Systemd start osd] ********************************************
2026-02-08 06:07:50.082795 | orchestrator | Sunday 08 February 2026 06:07:27 +0000 (0:00:01.916) 0:16:25.967 *******
2026-02-08 06:07:50.082807 | orchestrator | changed: [testbed-node-3] => (item=3)
2026-02-08 06:07:50.082819 | orchestrator | changed: [testbed-node-3] => (item=0)
2026-02-08 06:07:50.082830 | orchestrator |
2026-02-08 06:07:50.082841 | orchestrator | TASK [ceph-osd : Unset noup flag] **********************************************
2026-02-08 06:07:50.082851 | orchestrator | Sunday 08 February 2026 06:07:31 +0000 (0:00:03.462) 0:16:29.429 *******
2026-02-08 06:07:50.082862 | orchestrator | skipping: [testbed-node-3]
2026-02-08 06:07:50.082873 | orchestrator |
2026-02-08 06:07:50.082889 | orchestrator | TASK [ceph-osd : Wait for all osd to be up] ************************************
2026-02-08 06:07:50.082901 | orchestrator | Sunday 08 February 2026 06:07:31 +0000 (0:00:00.289) 0:16:29.719 *******
2026-02-08 06:07:50.082911 | orchestrator | skipping: [testbed-node-3]
2026-02-08 06:07:50.082922 | orchestrator |
2026-02-08 06:07:50.082933 | orchestrator | TASK [ceph-osd : Include crush_rules.yml] **************************************
2026-02-08 06:07:50.082944 | orchestrator | Sunday 08 February 2026 06:07:31 +0000 (0:00:00.302) 0:16:30.021 *******
2026-02-08 06:07:50.082955 | orchestrator | skipping: [testbed-node-3]
2026-02-08 06:07:50.082966 | orchestrator |
2026-02-08 06:07:50.082977 | orchestrator | TASK [Scan ceph-disk osds with ceph-volume if deploying nautilus] **************
2026-02-08 06:07:50.082988 | orchestrator | Sunday 08 February 2026 06:07:32 +0000 (0:00:00.133) 0:16:30.369 *******
2026-02-08 06:07:50.082998 | orchestrator | skipping: [testbed-node-3]
2026-02-08 06:07:50.083009 | orchestrator |
2026-02-08 06:07:50.083020 | orchestrator | TASK [Activate scanned ceph-disk osds and migrate to ceph-volume if deploying nautilus] ***
2026-02-08 06:07:50.083031 | orchestrator | Sunday 08 February 2026 06:07:32 +0000 (0:00:00.143) 0:16:30.502 *******
2026-02-08 06:07:50.083042 | orchestrator | skipping: [testbed-node-3]
2026-02-08 06:07:50.083053 | orchestrator |
2026-02-08 06:07:50.083064 | orchestrator | TASK [Waiting for clean pgs...] ************************************************
2026-02-08 06:07:50.083075 | orchestrator | Sunday 08 February 2026 06:07:32 +0000 (0:00:00.145) 0:16:30.645 *******
2026-02-08 06:07:50.083086 | orchestrator | FAILED - RETRYING: [testbed-node-3 -> testbed-node-0]: Waiting for clean pgs... (600 retries left).
2026-02-08 06:07:50.083097 | orchestrator | FAILED - RETRYING: [testbed-node-3 -> testbed-node-0]: Waiting for clean pgs... (599 retries left).
2026-02-08 06:07:50.083109 | orchestrator | FAILED - RETRYING: [testbed-node-3 -> testbed-node-0]: Waiting for clean pgs... (598 retries left).
2026-02-08 06:07:50.083127 | orchestrator | FAILED - RETRYING: [testbed-node-3 -> testbed-node-0]: Waiting for clean pgs... (597 retries left).
2026-02-08 06:07:50.083138 | orchestrator | FAILED - RETRYING: [testbed-node-3 -> testbed-node-0]: Waiting for clean pgs... (596 retries left).
2026-02-08 06:07:50.083150 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-02-08 06:07:50.083161 | orchestrator |
2026-02-08 06:07:50.083172 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2026-02-08 06:07:50.083183 | orchestrator | Sunday 08 February 2026 06:07:48 +0000 (0:00:16.382) 0:16:47.028 *******
2026-02-08 06:07:50.083194 | orchestrator | skipping: [testbed-node-3]
2026-02-08 06:07:50.083205 | orchestrator |
2026-02-08 06:07:50.083216 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] **********************************
2026-02-08 06:07:50.083226 | orchestrator | Sunday 08 February 2026 06:07:49 +0000 (0:00:00.529) 0:16:47.557 *******
2026-02-08 06:07:50.083237 | orchestrator | skipping: [testbed-node-3]
2026-02-08 06:07:50.083248 | orchestrator |
2026-02-08 06:07:50.083259 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] **********************************
2026-02-08 06:07:50.083270 | orchestrator | Sunday 08 February 2026 06:07:49 +0000 (0:00:00.140) 0:16:47.698 *******
2026-02-08 06:07:50.083281 | orchestrator | skipping: [testbed-node-3]
2026-02-08 06:07:50.083292 | orchestrator |
2026-02-08 06:07:50.083303 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] **********************************
2026-02-08 06:07:50.083314 | orchestrator | Sunday 08 February 2026 06:07:49 +0000 (0:00:00.135) 0:16:47.833 *******
2026-02-08 06:07:50.083324 | orchestrator | skipping: [testbed-node-3]
2026-02-08 06:07:50.083335 | orchestrator |
2026-02-08 06:07:50.083346 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] **********************************
2026-02-08 06:07:50.083357 | orchestrator | Sunday 08 February 2026 06:07:49 +0000 (0:00:00.137) 0:16:47.971 *******
2026-02-08 06:07:50.083368 | orchestrator | skipping: [testbed-node-3]
2026-02-08 06:07:50.083380 | orchestrator |
2026-02-08 06:07:50.083397 | orchestrator | RUNNING HANDLER [ceph-handler : Rbdmirrors handler] ****************************
2026-02-08 06:07:57.992872 | orchestrator | Sunday 08 February 2026 06:07:50 +0000 (0:00:00.141) 0:16:48.112 *******
2026-02-08 06:07:57.992969 | orchestrator | skipping: [testbed-node-3]
2026-02-08 06:07:57.992984 | orchestrator |
2026-02-08 06:07:57.992996 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] **********************************
2026-02-08 06:07:57.993007 | orchestrator | Sunday 08 February 2026 06:07:50 +0000 (0:00:00.123) 0:16:48.236 *******
2026-02-08 06:07:57.993017 | orchestrator | skipping: [testbed-node-3]
2026-02-08 06:07:57.993027 | orchestrator |
2026-02-08 06:07:57.993038 | orchestrator | PLAY [Upgrade ceph osds cluster] ***********************************************
2026-02-08 06:07:57.993048 | orchestrator |
2026-02-08 06:07:57.993058 | orchestrator | TASK [ceph-facts : Include facts.yml] ******************************************
2026-02-08 06:07:57.993068 | orchestrator | Sunday 08 February 2026 06:07:50 +0000 (0:00:00.600) 0:16:48.837 *******
2026-02-08 06:07:57.993078 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-4
2026-02-08 06:07:57.993088 | orchestrator |
2026-02-08 06:07:57.993098 | orchestrator | TASK [ceph-facts : Check if it is atomic host] *********************************
2026-02-08 06:07:57.993108 | orchestrator | Sunday 08 February 2026 06:07:51 +0000 (0:00:00.277) 0:16:49.114 *******
2026-02-08 06:07:57.993118 | orchestrator | ok: [testbed-node-4]
2026-02-08 06:07:57.993128 | orchestrator |
2026-02-08 06:07:57.993138 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] *****************************************
2026-02-08 06:07:57.993148 | orchestrator | Sunday 08 February 2026 06:07:51 +0000 (0:00:00.429) 0:16:49.544 *******
2026-02-08 06:07:57.993158 | orchestrator | ok: [testbed-node-4]
2026-02-08 06:07:57.993168 | orchestrator |
2026-02-08 06:07:57.993177 | orchestrator | TASK [ceph-facts : Check if podman binary is present] **************************
2026-02-08 06:07:57.993187 | orchestrator | Sunday 08 February 2026 06:07:51 +0000 (0:00:00.149) 0:16:49.693 *******
2026-02-08 06:07:57.993197 | orchestrator | ok: [testbed-node-4]
2026-02-08 06:07:57.993207 | orchestrator |
2026-02-08 06:07:57.993239 | orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
2026-02-08 06:07:57.993249 | orchestrator | Sunday 08 February 2026 06:07:52 +0000 (0:00:00.469) 0:16:50.163 *******
2026-02-08 06:07:57.993285 | orchestrator | ok: [testbed-node-4]
2026-02-08 06:07:57.993296 | orchestrator |
2026-02-08 06:07:57.993319 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ******************************************
2026-02-08 06:07:57.993330 | orchestrator | Sunday 08 February 2026 06:07:52 +0000 (0:00:00.462) 0:16:50.625 *******
2026-02-08 06:07:57.993339 | orchestrator | ok: [testbed-node-4]
2026-02-08 06:07:57.993349 | orchestrator |
2026-02-08 06:07:57.993358 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] *********************
2026-02-08 06:07:57.993368 | orchestrator | Sunday 08 February 2026 06:07:52 +0000 (0:00:00.166) 0:16:50.792 *******
2026-02-08 06:07:57.993378 | orchestrator | ok: [testbed-node-4]
2026-02-08 06:07:57.993388 | orchestrator |
2026-02-08 06:07:57.993400 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] ***
2026-02-08 06:07:57.993412 | orchestrator | Sunday 08 February 2026 06:07:52 +0000 (0:00:00.184) 0:16:50.977 *******
2026-02-08 06:07:57.993425 | orchestrator | skipping: [testbed-node-4]
2026-02-08 06:07:57.993437 | orchestrator |
2026-02-08 06:07:57.993448 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ******************
2026-02-08 06:07:57.993459 | orchestrator | Sunday 08 February 2026 06:07:53 +0000 (0:00:00.186) 0:16:51.163 *******
2026-02-08 06:07:57.993471 | orchestrator | ok: [testbed-node-4]
2026-02-08 06:07:57.993481 | orchestrator |
2026-02-08 06:07:57.993493 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************
2026-02-08 06:07:57.993504 | orchestrator | Sunday 08 February 2026 06:07:53 +0000 (0:00:00.160) 0:16:51.324 *******
2026-02-08 06:07:57.993516 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-02-08 06:07:57.993527 | orchestrator | ok: [testbed-node-4 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-08 06:07:57.993537 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-08 06:07:57.993546 | orchestrator |
2026-02-08 06:07:57.993556 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ********************************
2026-02-08 06:07:57.993565 | orchestrator | Sunday 08 February 2026 06:07:53 +0000 (0:00:00.718) 0:16:52.043 *******
2026-02-08 06:07:57.993575 | orchestrator | ok: [testbed-node-4]
2026-02-08 06:07:57.993585 | orchestrator |
2026-02-08 06:07:57.993594 | orchestrator | TASK [ceph-facts : Find a running mon container] *******************************
2026-02-08 06:07:57.993604 | orchestrator | Sunday 08 February 2026 06:07:54 +0000 (0:00:00.283) 0:16:52.326 *******
2026-02-08 06:07:57.993613 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-02-08 06:07:57.993623 | orchestrator | ok: [testbed-node-4 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-08 06:07:57.993633 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-08 06:07:57.993643 | orchestrator |
2026-02-08 06:07:57.993653 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ********************************
2026-02-08 06:07:57.993663 | orchestrator | Sunday 08 February 2026 06:07:56 +0000 (0:00:01.878) 0:16:54.204 *******
2026-02-08 06:07:57.993672 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2026-02-08 06:07:57.993682 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2026-02-08 06:07:57.993692 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2026-02-08 06:07:57.993702 | orchestrator | skipping: [testbed-node-4]
2026-02-08 06:07:57.993712 | orchestrator |
2026-02-08 06:07:57.993721 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] *********************
2026-02-08 06:07:57.993731 | orchestrator | Sunday 08 February 2026 06:07:56 +0000 (0:00:00.444) 0:16:54.649 *******
2026-02-08 06:07:57.993742 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-02-08 06:07:57.993800 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-02-08 06:07:57.993812 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-02-08 06:07:57.993822 | orchestrator | skipping: [testbed-node-4]
2026-02-08 06:07:57.993832 | orchestrator |
2026-02-08 06:07:57.993842 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] ***********************
2026-02-08 06:07:57.993852 | orchestrator | Sunday 08 February 2026 06:07:57 +0000 (0:00:00.970) 0:16:55.620 *******
2026-02-08 06:07:57.993864 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-02-08 06:07:57.993881 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-02-08 06:07:57.993892 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-02-08 06:07:57.993902 | orchestrator | skipping: [testbed-node-4]
2026-02-08 06:07:57.993911 | orchestrator |
2026-02-08 06:07:57.993921 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] ***************************
2026-02-08 06:07:57.993931 | orchestrator | Sunday 08 February 2026 06:07:57 +0000 (0:00:00.182) 0:16:55.803 *******
2026-02-08 06:07:57.993943 | orchestrator | ok: [testbed-node-4] => (item={'changed': False, 'stdout': 'd0204e5b0336', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-02-08 06:07:54.815885', 'end': '2026-02-08 06:07:54.864305', 'delta': '0:00:00.048420', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['d0204e5b0336'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-02-08 06:07:57.993957 | orchestrator | ok: [testbed-node-4] => (item={'changed': False, 'stdout': 'c9ff2bec9773', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-02-08 06:07:55.400200', 'end': '2026-02-08 06:07:55.453976', 'delta': '0:00:00.053776', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['c9ff2bec9773'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-02-08 06:07:57.993981 | orchestrator | ok: [testbed-node-4] => (item={'changed': False, 'stdout': '26deff989c40', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-02-08 06:07:55.949559', 'end': '2026-02-08 06:07:55.997165', 'delta': '0:00:00.047606', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['26deff989c40'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-02-08 06:08:02.447433 | orchestrator |
2026-02-08 06:08:02.447543 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] *******************************
2026-02-08 06:08:02.447559 | orchestrator | Sunday 08 February 2026 06:07:57 +0000 (0:00:00.227) 0:16:56.031 *******
2026-02-08 06:08:02.447571 | orchestrator | ok: [testbed-node-4]
2026-02-08 06:08:02.447583 | orchestrator |
2026-02-08 06:08:02.447595 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] *************
2026-02-08 06:08:02.447606 | orchestrator | Sunday 08 February 2026 06:07:58 +0000 (0:00:00.299) 0:16:56.330 *******
2026-02-08 06:08:02.447617 | orchestrator | skipping: [testbed-node-4]
2026-02-08 06:08:02.447629 | orchestrator |
2026-02-08 06:08:02.447640 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] *********************************
2026-02-08 06:08:02.447651 | orchestrator | Sunday 08 February 2026 06:07:59 +0000 (0:00:01.014) 0:16:57.345 *******
2026-02-08 06:08:02.447662 | orchestrator | ok: [testbed-node-4]
2026-02-08 06:08:02.447673 | orchestrator |
2026-02-08 06:08:02.447684 | orchestrator | TASK [ceph-facts : Get current fsid] *******************************************
2026-02-08 06:08:02.447695 | orchestrator | Sunday 08 February 2026 06:07:59 +0000 (0:00:00.180) 0:16:57.525 *******
2026-02-08 06:08:02.447706 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)]
2026-02-08 06:08:02.447717 | orchestrator |
2026-02-08 06:08:02.447728 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-02-08 06:08:02.447739 | orchestrator | Sunday 08 February 2026 06:08:00 +0000 (0:00:01.022) 0:16:58.548 *******
2026-02-08 06:08:02.447749 | orchestrator | ok: [testbed-node-4]
2026-02-08 06:08:02.447760 | orchestrator |
2026-02-08 06:08:02.447771 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] ****************************
2026-02-08 06:08:02.447849 | orchestrator | Sunday 08 February 2026 06:08:00 +0000 (0:00:00.184) 0:16:58.733 *******
2026-02-08 06:08:02.447861 | orchestrator | skipping: [testbed-node-4]
2026-02-08 06:08:02.447873 | orchestrator |
2026-02-08 06:08:02.447884 | orchestrator | TASK [ceph-facts : Generate cluster fsid] **************************************
2026-02-08 06:08:02.447895 | orchestrator | Sunday 08 February 2026 06:08:00 +0000 (0:00:00.150) 0:16:58.883 *******
2026-02-08 06:08:02.447906 | orchestrator | skipping: [testbed-node-4]
2026-02-08 06:08:02.447917 | orchestrator |
2026-02-08 06:08:02.447928 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-02-08 06:08:02.447939 | orchestrator | Sunday 08 February 2026 06:08:01 +0000 (0:00:00.230) 0:16:59.114 *******
2026-02-08 06:08:02.447950 | orchestrator | skipping: [testbed-node-4]
2026-02-08 06:08:02.447961 | orchestrator |
2026-02-08 06:08:02.447975 | orchestrator | TASK [ceph-facts : Resolve device link(s)] *************************************
2026-02-08 06:08:02.447989 | orchestrator | Sunday 08 February 2026 06:08:01 +0000 (0:00:00.124) 0:16:59.239 *******
2026-02-08 06:08:02.448001 | orchestrator | skipping: [testbed-node-4]
2026-02-08 06:08:02.448014 | orchestrator |
2026-02-08 06:08:02.448028 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] **************
2026-02-08 06:08:02.448041 | orchestrator | Sunday 08 February 2026 06:08:01 +0000 (0:00:00.128) 0:16:59.367 *******
2026-02-08 06:08:02.448054 | orchestrator | ok: [testbed-node-4]
2026-02-08 06:08:02.448068 | orchestrator |
2026-02-08 06:08:02.448103 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] ***************************
2026-02-08 06:08:02.448117 | orchestrator | Sunday 08 February 2026 06:08:01 +0000 (0:00:00.189) 0:16:59.557 *******
2026-02-08 06:08:02.448130 | orchestrator | skipping: [testbed-node-4]
2026-02-08 06:08:02.448143 | orchestrator |
2026-02-08 06:08:02.448156 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] ****
2026-02-08 06:08:02.448168 | orchestrator | Sunday 08 February 2026 06:08:01 +0000 (0:00:00.136) 0:16:59.693 *******
2026-02-08 06:08:02.448181 | orchestrator | ok: [testbed-node-4]
2026-02-08 06:08:02.448195 | orchestrator |
2026-02-08 06:08:02.448208 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] ***********************
2026-02-08 06:08:02.448220 | orchestrator | Sunday 08 February 2026 06:08:01 +0000 (0:00:00.151) 0:16:59.889 *******
2026-02-08 06:08:02.448233 | orchestrator | skipping: [testbed-node-4]
2026-02-08 06:08:02.448246 | orchestrator |
2026-02-08 06:08:02.448260 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] ***
2026-02-08 06:08:02.448274 | orchestrator | Sunday 08 February 2026 06:08:01 +0000 (0:00:00.193) 0:17:00.041 *******
2026-02-08 06:08:02.448287 | orchestrator | ok: [testbed-node-4]
2026-02-08 06:08:02.448301 | orchestrator |
2026-02-08 06:08:02.448315 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************
2026-02-08 06:08:02.448326 | orchestrator | Sunday 08 February 2026 06:08:02 +0000 (0:00:00.194) 0:17:00.235 *******
2026-02-08 06:08:02.448338 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-08 06:08:02.448371 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--98a4cb59--dd7a--5ec9--b94d--174a40339046-osd--block--98a4cb59--dd7a--5ec9--b94d--174a40339046', 'dm-uuid-LVM-yHxwYWjXM5rCMy03I7W1d35MN3jq2cawRH7KXMBBm3D5PVB6eMaUeZw4tW6dlYUm'], 'uuids': ['4e6e3c31-d8d4-42a4-8244-dd9f1ae0f37a'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'e630d271', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['RH7KXM-BBm3-D5PV-B6eM-aUeZ-w4tW-6dlYUm']}})
2026-02-08 06:08:02.448386 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_33bf36ec-77e2-4563-8915-2d028f665133', 'scsi-SQEMU_QEMU_HARDDISK_33bf36ec-77e2-4563-8915-2d028f665133'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '33bf36ec', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})
2026-02-08 06:08:02.448405 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-98ZUfd-U8I1-l7ve-H02s-xtrR-VP1N-Q4DCna', 'scsi-0QEMU_QEMU_HARDDISK_2c937877-c8d8-449b-a5f6-0239aca924e2', 'scsi-SQEMU_QEMU_HARDDISK_2c937877-c8d8-449b-a5f6-0239aca924e2'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '2c937877', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--1f36c880--548c--5a66--856f--2c4e799d94fc-osd--block--1f36c880--548c--5a66--856f--2c4e799d94fc']}})
2026-02-08 06:08:02.448426 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-08 06:08:02.448438 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-08 06:08:02.448450 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-08-02-32-46-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})
2026-02-08 06:08:02.448463 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-08 06:08:02.448492 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-4yMbT2-42rf-LMHw-z72r-UJ4I-0Vm8-FBkphH', 'dm-uuid-CRYPT-LUKS2-d4dc753534094cc38b724c68e8894d37-4yMbT2-42rf-LMHw-z72r-UJ4I-0Vm8-FBkphH'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})
2026-02-08 06:08:02.448524 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-08 06:08:03.154952 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--1f36c880--548c--5a66--856f--2c4e799d94fc-osd--block--1f36c880--548c--5a66--856f--2c4e799d94fc', 'dm-uuid-LVM-EUBg1b33eC5p7VqB18wXpct56RBS9v7i4yMbT242rfLMHwz72rUJ4I0Vm8FBkphH'], 'uuids': ['d4dc7535-3409-4cc3-8b72-4c68e8894d37'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '2c937877', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['4yMbT2-42rf-LMHw-z72r-UJ4I-0Vm8-FBkphH']}})
2026-02-08 06:08:03.155082 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-HUiqZb-DVo1-dI2F-bvgQ-kR6m-4ift-nZXZfU', 'scsi-0QEMU_QEMU_HARDDISK_e630d271-3aac-4ce5-a41f-fdcd87f60fea', 'scsi-SQEMU_QEMU_HARDDISK_e630d271-3aac-4ce5-a41f-fdcd87f60fea'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'e630d271', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--98a4cb59--dd7a--5ec9--b94d--174a40339046-osd--block--98a4cb59--dd7a--5ec9--b94d--174a40339046']}})
2026-02-08 06:08:03.155131 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-08 06:08:03.155161 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6152c601-f22c-4ab1-825c-0b7a8c2f9bf8', 'scsi-SQEMU_QEMU_HARDDISK_6152c601-f22c-4ab1-825c-0b7a8c2f9bf8'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '6152c601', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6152c601-f22c-4ab1-825c-0b7a8c2f9bf8-part16', 'scsi-SQEMU_QEMU_HARDDISK_6152c601-f22c-4ab1-825c-0b7a8c2f9bf8-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6152c601-f22c-4ab1-825c-0b7a8c2f9bf8-part14', 'scsi-SQEMU_QEMU_HARDDISK_6152c601-f22c-4ab1-825c-0b7a8c2f9bf8-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6152c601-f22c-4ab1-825c-0b7a8c2f9bf8-part15', 'scsi-SQEMU_QEMU_HARDDISK_6152c601-f22c-4ab1-825c-0b7a8c2f9bf8-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6152c601-f22c-4ab1-825c-0b7a8c2f9bf8-part1', 'scsi-SQEMU_QEMU_HARDDISK_6152c601-f22c-4ab1-825c-0b7a8c2f9bf8-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})
2026-02-08 06:08:03.155207 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-08 06:08:03.155223 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-08 06:08:03.155249 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-RH7KXM-BBm3-D5PV-B6eM-aUeZ-w4tW-6dlYUm', 'dm-uuid-CRYPT-LUKS2-4e6e3c31d8d442a48244dd9f1ae0f37a-RH7KXM-BBm3-D5PV-B6eM-aUeZ-w4tW-6dlYUm'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})
2026-02-08 06:08:03.155263 | orchestrator | skipping: [testbed-node-4]
2026-02-08 06:08:03.155277 | orchestrator |
2026-02-08 06:08:03.155289 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] ***
2026-02-08 06:08:03.155301 | orchestrator | Sunday 08 February 2026 06:08:02 +0000 (0:00:00.738) 0:17:00.973 *******
2026-02-08 06:08:03.155314 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-08 06:08:03.155327 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--98a4cb59--dd7a--5ec9--b94d--174a40339046-osd--block--98a4cb59--dd7a--5ec9--b94d--174a40339046', 'dm-uuid-LVM-yHxwYWjXM5rCMy03I7W1d35MN3jq2cawRH7KXMBBm3D5PVB6eMaUeZw4tW6dlYUm'], 'uuids': ['4e6e3c31-d8d4-42a4-8244-dd9f1ae0f37a'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'e630d271', 'removable': '0', 'support_discard': 
'4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['RH7KXM-BBm3-D5PV-B6eM-aUeZ-w4tW-6dlYUm']}}, 'ansible_loop_var': 'item'})  2026-02-08 06:08:03.155340 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_33bf36ec-77e2-4563-8915-2d028f665133', 'scsi-SQEMU_QEMU_HARDDISK_33bf36ec-77e2-4563-8915-2d028f665133'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '33bf36ec', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-08 06:08:03.155361 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-98ZUfd-U8I1-l7ve-H02s-xtrR-VP1N-Q4DCna', 'scsi-0QEMU_QEMU_HARDDISK_2c937877-c8d8-449b-a5f6-0239aca924e2', 'scsi-SQEMU_QEMU_HARDDISK_2c937877-c8d8-449b-a5f6-0239aca924e2'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '2c937877', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--1f36c880--548c--5a66--856f--2c4e799d94fc-osd--block--1f36c880--548c--5a66--856f--2c4e799d94fc']}}, 'ansible_loop_var': 'item'})  2026-02-08 06:08:03.346594 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-08 06:08:03.346695 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-08 06:08:03.346712 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-08-02-32-46-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 
'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-08 06:08:03.346725 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-08 06:08:03.346737 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-4yMbT2-42rf-LMHw-z72r-UJ4I-0Vm8-FBkphH', 'dm-uuid-CRYPT-LUKS2-d4dc753534094cc38b724c68e8894d37-4yMbT2-42rf-LMHw-z72r-UJ4I-0Vm8-FBkphH'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-08 06:08:03.346749 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 
'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-08 06:08:03.346872 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--1f36c880--548c--5a66--856f--2c4e799d94fc-osd--block--1f36c880--548c--5a66--856f--2c4e799d94fc', 'dm-uuid-LVM-EUBg1b33eC5p7VqB18wXpct56RBS9v7i4yMbT242rfLMHwz72rUJ4I0Vm8FBkphH'], 'uuids': ['d4dc7535-3409-4cc3-8b72-4c68e8894d37'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '2c937877', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['4yMbT2-42rf-LMHw-z72r-UJ4I-0Vm8-FBkphH']}}, 'ansible_loop_var': 'item'})  2026-02-08 06:08:03.346889 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-HUiqZb-DVo1-dI2F-bvgQ-kR6m-4ift-nZXZfU', 'scsi-0QEMU_QEMU_HARDDISK_e630d271-3aac-4ce5-a41f-fdcd87f60fea', 'scsi-SQEMU_QEMU_HARDDISK_e630d271-3aac-4ce5-a41f-fdcd87f60fea'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'e630d271', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 
'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--98a4cb59--dd7a--5ec9--b94d--174a40339046-osd--block--98a4cb59--dd7a--5ec9--b94d--174a40339046']}}, 'ansible_loop_var': 'item'})  2026-02-08 06:08:03.346905 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-08 06:08:03.346932 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6152c601-f22c-4ab1-825c-0b7a8c2f9bf8', 'scsi-SQEMU_QEMU_HARDDISK_6152c601-f22c-4ab1-825c-0b7a8c2f9bf8'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '6152c601', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6152c601-f22c-4ab1-825c-0b7a8c2f9bf8-part16', 'scsi-SQEMU_QEMU_HARDDISK_6152c601-f22c-4ab1-825c-0b7a8c2f9bf8-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_6152c601-f22c-4ab1-825c-0b7a8c2f9bf8-part14', 'scsi-SQEMU_QEMU_HARDDISK_6152c601-f22c-4ab1-825c-0b7a8c2f9bf8-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6152c601-f22c-4ab1-825c-0b7a8c2f9bf8-part15', 'scsi-SQEMU_QEMU_HARDDISK_6152c601-f22c-4ab1-825c-0b7a8c2f9bf8-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6152c601-f22c-4ab1-825c-0b7a8c2f9bf8-part1', 'scsi-SQEMU_QEMU_HARDDISK_6152c601-f22c-4ab1-825c-0b7a8c2f9bf8-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-08 06:08:12.383158 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-08 06:08:12.383239 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-08 06:08:12.383247 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-RH7KXM-BBm3-D5PV-B6eM-aUeZ-w4tW-6dlYUm', 'dm-uuid-CRYPT-LUKS2-4e6e3c31d8d442a48244dd9f1ae0f37a-RH7KXM-BBm3-D5PV-B6eM-aUeZ-w4tW-6dlYUm'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 
'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-08 06:08:12.383253 | orchestrator | skipping: [testbed-node-4]
2026-02-08 06:08:12.383259 | orchestrator |
2026-02-08 06:08:12.383265 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ******************************
2026-02-08 06:08:12.383270 | orchestrator | Sunday 08 February 2026 06:08:03 +0000 (0:00:00.489) 0:17:01.387 *******
2026-02-08 06:08:12.383275 | orchestrator | ok: [testbed-node-4]
2026-02-08 06:08:12.383281 | orchestrator |
2026-02-08 06:08:12.383285 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] ***************
2026-02-08 06:08:12.383290 | orchestrator | Sunday 08 February 2026 06:08:03 +0000 (0:00:00.138) 0:17:01.877 *******
2026-02-08 06:08:12.383294 | orchestrator | ok: [testbed-node-4]
2026-02-08 06:08:12.383298 | orchestrator |
2026-02-08 06:08:12.383303 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-02-08 06:08:12.383307 | orchestrator | Sunday 08 February 2026 06:08:03 +0000 (0:00:00.486) 0:17:02.015 *******
2026-02-08 06:08:12.383329 | orchestrator | ok: [testbed-node-4]
2026-02-08 06:08:12.383334 | orchestrator |
2026-02-08 06:08:12.383338 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-02-08 06:08:12.383343 | orchestrator | Sunday 08 February 2026 06:08:04 +0000 (0:00:00.486) 0:17:02.502 *******
2026-02-08 06:08:12.383347 | orchestrator | skipping: [testbed-node-4]
2026-02-08 06:08:12.383351 | orchestrator |
2026-02-08 06:08:12.383356 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-02-08 06:08:12.383360 | orchestrator | Sunday 08 February 2026 06:08:04 +0000 (0:00:00.285) 0:17:02.640 *******
2026-02-08 06:08:12.383365 | orchestrator | skipping: [testbed-node-4]
2026-02-08 06:08:12.383379 | orchestrator |
2026-02-08 06:08:12.383383 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-02-08 06:08:12.383388 | orchestrator | Sunday 08 February 2026 06:08:04 +0000 (0:00:00.162) 0:17:02.926 *******
2026-02-08 06:08:12.383392 | orchestrator | skipping: [testbed-node-4]
2026-02-08 06:08:12.383396 | orchestrator |
2026-02-08 06:08:12.383401 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] *************************
2026-02-08 06:08:12.383405 | orchestrator | Sunday 08 February 2026 06:08:05 +0000 (0:00:00.162) 0:17:03.089 *******
2026-02-08 06:08:12.383409 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0)
2026-02-08 06:08:12.383414 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1)
2026-02-08 06:08:12.383419 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2)
2026-02-08 06:08:12.383423 | orchestrator |
2026-02-08 06:08:12.383437 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] *************************
2026-02-08 06:08:12.383442 | orchestrator | Sunday 08 February 2026 06:08:06 +0000 (0:00:01.015) 0:17:04.105 *******
2026-02-08 06:08:12.383446 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2026-02-08 06:08:12.383451 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2026-02-08 06:08:12.383456 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2026-02-08 06:08:12.383460 | orchestrator | skipping: [testbed-node-4]
2026-02-08 06:08:12.383464 | orchestrator |
2026-02-08 06:08:12.383469 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] ***********************
2026-02-08 06:08:12.383473 | orchestrator | Sunday 08 February 2026 06:08:06 +0000 (0:00:00.232) 0:17:04.262 *******
2026-02-08 06:08:12.383487 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-4
2026-02-08 06:08:12.383492 | orchestrator |
2026-02-08 06:08:12.383498 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-02-08 06:08:12.383504 | orchestrator | Sunday 08 February 2026 06:08:06 +0000 (0:00:00.232) 0:17:04.495 *******
2026-02-08 06:08:12.383508 | orchestrator | skipping: [testbed-node-4]
2026-02-08 06:08:12.383512 | orchestrator |
2026-02-08 06:08:12.383517 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-02-08 06:08:12.383521 | orchestrator | Sunday 08 February 2026 06:08:06 +0000 (0:00:00.143) 0:17:04.638 *******
2026-02-08 06:08:12.383525 | orchestrator | skipping: [testbed-node-4]
2026-02-08 06:08:12.383530 | orchestrator |
2026-02-08 06:08:12.383534 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-02-08 06:08:12.383538 | orchestrator | Sunday 08 February 2026 06:08:07 +0000 (0:00:00.510) 0:17:05.149 *******
2026-02-08 06:08:12.383542 | orchestrator | skipping: [testbed-node-4]
2026-02-08 06:08:12.383547 | orchestrator |
2026-02-08 06:08:12.383551 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-02-08 06:08:12.383555 | orchestrator | Sunday 08 February 2026 06:08:07 +0000 (0:00:00.269) 0:17:05.313 *******
2026-02-08 06:08:12.383560 | orchestrator | ok: [testbed-node-4]
2026-02-08 06:08:12.383564 | orchestrator |
2026-02-08 06:08:12.383568 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-02-08 06:08:12.383573 | orchestrator | Sunday 08 February 2026 06:08:07 +0000 (0:00:00.269) 0:17:05.583 *******
2026-02-08 06:08:12.383582 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)
2026-02-08 06:08:12.383586 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)
2026-02-08 06:08:12.383591 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)
2026-02-08 06:08:12.383595 | orchestrator | skipping: [testbed-node-4]
2026-02-08 06:08:12.383600 | orchestrator |
2026-02-08 06:08:12.383604 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-02-08 06:08:12.383608 | orchestrator | Sunday 08 February 2026 06:08:08 +0000 (0:00:00.488) 0:17:06.071 *******
2026-02-08 06:08:12.383613 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)
2026-02-08 06:08:12.383617 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)
2026-02-08 06:08:12.383621 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)
2026-02-08 06:08:12.383626 | orchestrator | skipping: [testbed-node-4]
2026-02-08 06:08:12.383630 | orchestrator |
2026-02-08 06:08:12.383634 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-02-08 06:08:12.383639 | orchestrator | Sunday 08 February 2026 06:08:08 +0000 (0:00:00.445) 0:17:06.516 *******
2026-02-08 06:08:12.383643 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)
2026-02-08 06:08:12.383647 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)
2026-02-08 06:08:12.383651 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)
2026-02-08 06:08:12.383656 | orchestrator | skipping: [testbed-node-4]
2026-02-08 06:08:12.383660 | orchestrator |
2026-02-08 06:08:12.383664 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-02-08 06:08:12.383669 | orchestrator | Sunday 08 February 2026 06:08:08 +0000 (0:00:00.431) 0:17:06.948 *******
2026-02-08 06:08:12.383673 | orchestrator | ok: [testbed-node-4]
2026-02-08 06:08:12.383677 | orchestrator |
2026-02-08 06:08:12.383682 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-02-08 06:08:12.383686 | orchestrator | Sunday 08 February 2026 06:08:09 +0000 (0:00:00.243) 0:17:07.192 *******
2026-02-08 06:08:12.383691 | orchestrator | ok: [testbed-node-4] => (item=0)
2026-02-08 06:08:12.383695 | orchestrator |
2026-02-08 06:08:12.383700 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] **************************************
2026-02-08 06:08:12.383705 | orchestrator | Sunday 08 February 2026 06:08:09 +0000 (0:00:00.370) 0:17:07.562 *******
2026-02-08 06:08:12.383710 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-02-08 06:08:12.383715 | orchestrator | ok: [testbed-node-4 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-08 06:08:12.383721 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-08 06:08:12.383725 | orchestrator | ok: [testbed-node-4 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
2026-02-08 06:08:12.383730 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-4)
2026-02-08 06:08:12.383736 | orchestrator | ok: [testbed-node-4 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-02-08 06:08:12.383742 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-02-08 06:08:12.383747 | orchestrator |
2026-02-08 06:08:12.383752 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ********************************
2026-02-08 06:08:12.383757 | orchestrator | Sunday 08 February 2026 06:08:10 +0000 (0:00:01.168) 0:17:08.730 *******
2026-02-08 06:08:12.383762 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-02-08 06:08:12.383807 | orchestrator | ok: [testbed-node-4 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-08 06:08:12.383814 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-08 06:08:12.383818 | orchestrator | ok: [testbed-node-4 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
2026-02-08 06:08:12.383823 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-4)
2026-02-08 06:08:12.383834 | orchestrator | ok: [testbed-node-4 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-02-08 06:08:12.383839 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-02-08 06:08:12.383845 | orchestrator |
2026-02-08 06:08:12.383853 | orchestrator | TASK [Get osd numbers - non container] *****************************************
2026-02-08 06:08:27.841956 | orchestrator | Sunday 08 February 2026 06:08:12 +0000 (0:00:01.687) 0:17:10.418 *******
2026-02-08 06:08:27.842114 | orchestrator | ok: [testbed-node-4]
2026-02-08 06:08:27.842174 | orchestrator |
2026-02-08 06:08:27.842188 | orchestrator | TASK [Set num_osds] ************************************************************
2026-02-08 06:08:27.842199 | orchestrator | Sunday 08 February 2026 06:08:12 +0000 (0:00:00.470) 0:17:10.889 *******
2026-02-08 06:08:27.842210 | orchestrator | ok: [testbed-node-4]
2026-02-08 06:08:27.842220 | orchestrator |
2026-02-08 06:08:27.842230 | orchestrator | TASK [Set_fact container_exec_cmd_osd] *****************************************
2026-02-08 06:08:27.842241 | orchestrator | Sunday 08 February 2026 06:08:12 +0000 (0:00:00.154) 0:17:11.043 *******
2026-02-08 06:08:27.842251 | orchestrator | ok: [testbed-node-4]
2026-02-08 06:08:27.842260 | orchestrator |
2026-02-08 06:08:27.842270 | orchestrator | TASK [Stop ceph osd] ***********************************************************
2026-02-08 06:08:27.842281 | orchestrator | Sunday 08 February 2026 06:08:13 +0000 (0:00:00.951) 0:17:11.994 *******
2026-02-08 06:08:27.842292 | orchestrator | changed: [testbed-node-4] => (item=1)
2026-02-08 06:08:27.842305 | orchestrator | changed: [testbed-node-4] => (item=5)
2026-02-08 06:08:27.842315 | orchestrator |
2026-02-08 06:08:27.842325 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-02-08 06:08:27.842335 | orchestrator | Sunday 08 February 2026 06:08:16 +0000 (0:00:03.001) 0:17:14.996 *******
2026-02-08 06:08:27.842347 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-4
2026-02-08 06:08:27.842359 | orchestrator |
2026-02-08 06:08:27.842369 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-02-08 06:08:27.842380 | orchestrator | Sunday 08 February 2026 06:08:17 +0000 (0:00:00.206) 0:17:15.202 *******
2026-02-08 06:08:27.842392 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-4
2026-02-08 06:08:27.842401 | orchestrator |
2026-02-08 06:08:27.842412 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-02-08 06:08:27.842422 | orchestrator | Sunday 08 February 2026 06:08:17 +0000 (0:00:00.226) 0:17:15.429 *******
2026-02-08 06:08:27.842433 | orchestrator | skipping: [testbed-node-4]
2026-02-08 06:08:27.842443 | orchestrator |
2026-02-08 06:08:27.842454 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-02-08 06:08:27.842463 | orchestrator | Sunday 08 February 2026 06:08:17 +0000 (0:00:00.154) 0:17:15.584 *******
2026-02-08 06:08:27.842474 | orchestrator | ok: [testbed-node-4]
2026-02-08 06:08:27.842485 | orchestrator |
2026-02-08 06:08:27.842496 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-02-08 06:08:27.842507 | orchestrator | Sunday 08 February 2026 06:08:18 +0000 (0:00:00.508) 0:17:16.092 *******
2026-02-08 06:08:27.842518 | orchestrator | ok: [testbed-node-4]
2026-02-08 06:08:27.842529 | orchestrator |
2026-02-08 06:08:27.842539 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-02-08 06:08:27.842550 | orchestrator | Sunday 08 February 2026 06:08:18 +0000 (0:00:00.544) 0:17:16.698 *******
2026-02-08 06:08:27.842560 | orchestrator | ok: [testbed-node-4]
2026-02-08 06:08:27.842572 | orchestrator |
2026-02-08 06:08:27.842584 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-02-08 06:08:27.842596 | orchestrator | Sunday 08 February 2026 06:08:19 +0000 (0:00:00.544) 0:17:17.242 *******
2026-02-08 06:08:27.842608 | orchestrator | skipping: [testbed-node-4]
2026-02-08 06:08:27.842619 | orchestrator |
2026-02-08 06:08:27.842631 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-02-08 06:08:27.842643 | orchestrator | Sunday 08 February 2026 06:08:19 +0000 (0:00:00.141) 0:17:17.383 *******
2026-02-08 06:08:27.842682 | orchestrator | skipping: [testbed-node-4]
2026-02-08 06:08:27.842695 | orchestrator |
2026-02-08 06:08:27.842708 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-02-08 06:08:27.842721 | orchestrator | Sunday 08 February 2026 06:08:19 +0000 (0:00:00.146) 0:17:17.529 *******
2026-02-08 06:08:27.842734 | orchestrator | skipping: [testbed-node-4]
2026-02-08 06:08:27.842747 | orchestrator |
2026-02-08 06:08:27.842761 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-02-08 06:08:27.842774 | orchestrator | Sunday 08 February 2026 06:08:19 +0000 (0:00:00.160) 0:17:17.690 *******
2026-02-08 06:08:27.842829 | orchestrator | ok: [testbed-node-4]
2026-02-08 06:08:27.842842 | orchestrator |
2026-02-08 06:08:27.842854 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-02-08 06:08:27.842864 | orchestrator | Sunday 08 February 2026 06:08:20 +0000 (0:00:00.878) 0:17:18.568 *******
2026-02-08 06:08:27.842874 | orchestrator | ok: [testbed-node-4]
2026-02-08 06:08:27.842883 | orchestrator |
2026-02-08 06:08:27.842893 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-02-08 06:08:27.842903 | orchestrator | Sunday 08 February 2026 06:08:21 +0000 (0:00:00.535) 0:17:19.104 *******
2026-02-08 06:08:27.842913 | orchestrator | skipping: [testbed-node-4]
2026-02-08 06:08:27.842923 | orchestrator |
2026-02-08 06:08:27.842932 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-02-08 06:08:27.842942 | orchestrator | Sunday 08 February 2026 06:08:21 +0000 (0:00:00.153) 0:17:19.258 *******
2026-02-08 06:08:27.842952 | orchestrator | skipping: [testbed-node-4]
2026-02-08 06:08:27.842962 | orchestrator |
2026-02-08 06:08:27.842987 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-02-08 06:08:27.842997 | orchestrator | Sunday 08 February 2026 06:08:21 +0000 (0:00:00.159) 0:17:19.417 *******
2026-02-08 06:08:27.843008 | orchestrator | ok: [testbed-node-4]
2026-02-08 06:08:27.843019 | orchestrator |
2026-02-08 06:08:27.843029 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-02-08 06:08:27.843038 | orchestrator | Sunday 08 February 2026 06:08:21 +0000 (0:00:00.166) 0:17:19.583 *******
2026-02-08 06:08:27.843048 | orchestrator | ok: [testbed-node-4]
2026-02-08 06:08:27.843058 | orchestrator |
2026-02-08 06:08:27.843067 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-02-08 06:08:27.843078 | orchestrator | Sunday 08 February 2026 06:08:21 +0000 (0:00:00.186) 0:17:19.769 *******
2026-02-08 06:08:27.843089 | orchestrator | ok: [testbed-node-4]
2026-02-08 06:08:27.843099 | orchestrator |
2026-02-08 06:08:27.843132 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-02-08 06:08:27.843142 | orchestrator | Sunday 08 February 2026 06:08:21 +0000 (0:00:00.169) 0:17:19.939 *******
2026-02-08 06:08:27.843151 | orchestrator | skipping: [testbed-node-4]
2026-02-08 06:08:27.843161 | orchestrator |
2026-02-08 06:08:27.843171 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-02-08 06:08:27.843180 | orchestrator | Sunday 08 February 2026 06:08:22 +0000 (0:00:00.144) 0:17:20.084 *******
2026-02-08 06:08:27.843190 | orchestrator | skipping: [testbed-node-4]
2026-02-08 06:08:27.843200 | orchestrator |
2026-02-08 06:08:27.843209 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-02-08 06:08:27.843219 | orchestrator | Sunday 08 February 2026 06:08:22 +0000 (0:00:00.151) 0:17:20.235 *******
2026-02-08 06:08:27.843228 | orchestrator | skipping: [testbed-node-4]
2026-02-08 06:08:27.843237 | orchestrator |
2026-02-08 06:08:27.843248 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-02-08 06:08:27.843257 | orchestrator | Sunday 08 February 2026 06:08:22 +0000 (0:00:00.169) 0:17:20.405 *******
2026-02-08 06:08:27.843267 | orchestrator | ok: [testbed-node-4]
2026-02-08 06:08:27.843277 | orchestrator |
2026-02-08 06:08:27.843287 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-02-08 06:08:27.843297 | orchestrator | Sunday 08 February 2026 06:08:22 +0000 (0:00:00.163) 0:17:20.569 *******
2026-02-08 06:08:27.843318 | orchestrator | ok: [testbed-node-4]
2026-02-08 06:08:27.843329 | orchestrator |
2026-02-08 06:08:27.843338 | orchestrator | TASK [ceph-common : Include configure_repository.yml] **************************
2026-02-08 06:08:27.843348 | orchestrator | Sunday 08 February 2026 06:08:22 +0000 (0:00:00.259) 0:17:20.828 *******
2026-02-08 06:08:27.843358 | orchestrator | skipping: [testbed-node-4]
2026-02-08 06:08:27.843368 | orchestrator |
2026-02-08 06:08:27.843378 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] **************
2026-02-08
06:08:27.843388 | orchestrator | Sunday 08 February 2026 06:08:23 +0000 (0:00:00.668) 0:17:21.497 ******* 2026-02-08 06:08:27.843398 | orchestrator | skipping: [testbed-node-4] 2026-02-08 06:08:27.843408 | orchestrator | 2026-02-08 06:08:27.843419 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] **************** 2026-02-08 06:08:27.843428 | orchestrator | Sunday 08 February 2026 06:08:23 +0000 (0:00:00.145) 0:17:21.642 ******* 2026-02-08 06:08:27.843437 | orchestrator | skipping: [testbed-node-4] 2026-02-08 06:08:27.843447 | orchestrator | 2026-02-08 06:08:27.843457 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ******************** 2026-02-08 06:08:27.843468 | orchestrator | Sunday 08 February 2026 06:08:23 +0000 (0:00:00.156) 0:17:21.799 ******* 2026-02-08 06:08:27.843478 | orchestrator | skipping: [testbed-node-4] 2026-02-08 06:08:27.843488 | orchestrator | 2026-02-08 06:08:27.843498 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] *************** 2026-02-08 06:08:27.843508 | orchestrator | Sunday 08 February 2026 06:08:23 +0000 (0:00:00.140) 0:17:21.939 ******* 2026-02-08 06:08:27.843518 | orchestrator | skipping: [testbed-node-4] 2026-02-08 06:08:27.843527 | orchestrator | 2026-02-08 06:08:27.843536 | orchestrator | TASK [ceph-common : Get ceph version] ****************************************** 2026-02-08 06:08:27.843546 | orchestrator | Sunday 08 February 2026 06:08:24 +0000 (0:00:00.176) 0:17:22.116 ******* 2026-02-08 06:08:27.843555 | orchestrator | skipping: [testbed-node-4] 2026-02-08 06:08:27.843565 | orchestrator | 2026-02-08 06:08:27.843574 | orchestrator | TASK [ceph-common : Set_fact ceph_version] ************************************* 2026-02-08 06:08:27.843584 | orchestrator | Sunday 08 February 2026 06:08:24 +0000 (0:00:00.145) 0:17:22.261 ******* 2026-02-08 06:08:27.843595 | orchestrator | skipping: [testbed-node-4] 2026-02-08 06:08:27.843606 | 
orchestrator | 2026-02-08 06:08:27.843616 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] *** 2026-02-08 06:08:27.843627 | orchestrator | Sunday 08 February 2026 06:08:24 +0000 (0:00:00.141) 0:17:22.403 ******* 2026-02-08 06:08:27.843638 | orchestrator | skipping: [testbed-node-4] 2026-02-08 06:08:27.843648 | orchestrator | 2026-02-08 06:08:27.843658 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] ************************* 2026-02-08 06:08:27.843668 | orchestrator | Sunday 08 February 2026 06:08:24 +0000 (0:00:00.122) 0:17:22.526 ******* 2026-02-08 06:08:27.843678 | orchestrator | skipping: [testbed-node-4] 2026-02-08 06:08:27.843688 | orchestrator | 2026-02-08 06:08:27.843698 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************ 2026-02-08 06:08:27.843707 | orchestrator | Sunday 08 February 2026 06:08:24 +0000 (0:00:00.128) 0:17:22.654 ******* 2026-02-08 06:08:27.843717 | orchestrator | skipping: [testbed-node-4] 2026-02-08 06:08:27.843726 | orchestrator | 2026-02-08 06:08:27.843735 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ******************** 2026-02-08 06:08:27.843745 | orchestrator | Sunday 08 February 2026 06:08:24 +0000 (0:00:00.131) 0:17:22.786 ******* 2026-02-08 06:08:27.843754 | orchestrator | skipping: [testbed-node-4] 2026-02-08 06:08:27.843763 | orchestrator | 2026-02-08 06:08:27.843774 | orchestrator | TASK [ceph-common : Include selinux.yml] *************************************** 2026-02-08 06:08:27.843814 | orchestrator | Sunday 08 February 2026 06:08:24 +0000 (0:00:00.136) 0:17:22.922 ******* 2026-02-08 06:08:27.843825 | orchestrator | skipping: [testbed-node-4] 2026-02-08 06:08:27.843834 | orchestrator | 2026-02-08 06:08:27.843844 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] *************** 2026-02-08 06:08:27.843854 | orchestrator | Sunday 08 
February 2026 06:08:25 +0000 (0:00:00.196) 0:17:23.118 ******* 2026-02-08 06:08:27.843879 | orchestrator | ok: [testbed-node-4] 2026-02-08 06:08:27.843890 | orchestrator | 2026-02-08 06:08:27.843899 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ****************************** 2026-02-08 06:08:27.843909 | orchestrator | Sunday 08 February 2026 06:08:26 +0000 (0:00:00.961) 0:17:24.079 ******* 2026-02-08 06:08:27.843919 | orchestrator | ok: [testbed-node-4] 2026-02-08 06:08:27.843928 | orchestrator | 2026-02-08 06:08:27.843938 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] *********************** 2026-02-08 06:08:27.843947 | orchestrator | Sunday 08 February 2026 06:08:27 +0000 (0:00:01.573) 0:17:25.653 ******* 2026-02-08 06:08:27.843957 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-4 2026-02-08 06:08:27.843967 | orchestrator | 2026-02-08 06:08:27.843985 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************ 2026-02-08 06:08:44.020741 | orchestrator | Sunday 08 February 2026 06:08:27 +0000 (0:00:00.224) 0:17:25.878 ******* 2026-02-08 06:08:44.020968 | orchestrator | skipping: [testbed-node-4] 2026-02-08 06:08:44.020998 | orchestrator | 2026-02-08 06:08:44.021013 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] **************** 2026-02-08 06:08:44.021025 | orchestrator | Sunday 08 February 2026 06:08:27 +0000 (0:00:00.170) 0:17:26.049 ******* 2026-02-08 06:08:44.021036 | orchestrator | skipping: [testbed-node-4] 2026-02-08 06:08:44.021048 | orchestrator | 2026-02-08 06:08:44.021059 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] ************************** 2026-02-08 06:08:44.021072 | orchestrator | Sunday 08 February 2026 06:08:28 +0000 (0:00:00.149) 0:17:26.198 ******* 2026-02-08 06:08:44.021092 | orchestrator | ok: [testbed-node-4] => 
(item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-02-08 06:08:44.021110 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-02-08 06:08:44.021131 | orchestrator | 2026-02-08 06:08:44.021150 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ******************** 2026-02-08 06:08:44.021165 | orchestrator | Sunday 08 February 2026 06:08:28 +0000 (0:00:00.846) 0:17:27.045 ******* 2026-02-08 06:08:44.021177 | orchestrator | ok: [testbed-node-4] 2026-02-08 06:08:44.021190 | orchestrator | 2026-02-08 06:08:44.021201 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************ 2026-02-08 06:08:44.021212 | orchestrator | Sunday 08 February 2026 06:08:29 +0000 (0:00:00.481) 0:17:27.526 ******* 2026-02-08 06:08:44.021223 | orchestrator | skipping: [testbed-node-4] 2026-02-08 06:08:44.021234 | orchestrator | 2026-02-08 06:08:44.021246 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ******************** 2026-02-08 06:08:44.021257 | orchestrator | Sunday 08 February 2026 06:08:29 +0000 (0:00:00.145) 0:17:27.672 ******* 2026-02-08 06:08:44.021268 | orchestrator | skipping: [testbed-node-4] 2026-02-08 06:08:44.021279 | orchestrator | 2026-02-08 06:08:44.021290 | orchestrator | TASK [ceph-container-common : Include registry.yml] **************************** 2026-02-08 06:08:44.021303 | orchestrator | Sunday 08 February 2026 06:08:29 +0000 (0:00:00.164) 0:17:27.837 ******* 2026-02-08 06:08:44.021314 | orchestrator | skipping: [testbed-node-4] 2026-02-08 06:08:44.021325 | orchestrator | 2026-02-08 06:08:44.021336 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] ************************* 2026-02-08 06:08:44.021347 | orchestrator | Sunday 08 February 2026 06:08:29 +0000 (0:00:00.127) 0:17:27.964 ******* 2026-02-08 06:08:44.021359 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for 
testbed-node-4 2026-02-08 06:08:44.021371 | orchestrator | 2026-02-08 06:08:44.021382 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ******************** 2026-02-08 06:08:44.021393 | orchestrator | Sunday 08 February 2026 06:08:30 +0000 (0:00:00.215) 0:17:28.180 ******* 2026-02-08 06:08:44.021404 | orchestrator | ok: [testbed-node-4] 2026-02-08 06:08:44.021415 | orchestrator | 2026-02-08 06:08:44.021426 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] *** 2026-02-08 06:08:44.021437 | orchestrator | Sunday 08 February 2026 06:08:30 +0000 (0:00:00.735) 0:17:28.916 ******* 2026-02-08 06:08:44.021482 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-02-08 06:08:44.021494 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/prometheus:v2.7.2)  2026-02-08 06:08:44.021505 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/grafana/grafana:6.7.4)  2026-02-08 06:08:44.021516 | orchestrator | skipping: [testbed-node-4] 2026-02-08 06:08:44.021527 | orchestrator | 2026-02-08 06:08:44.021538 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] *********** 2026-02-08 06:08:44.021549 | orchestrator | Sunday 08 February 2026 06:08:31 +0000 (0:00:00.692) 0:17:29.608 ******* 2026-02-08 06:08:44.021560 | orchestrator | skipping: [testbed-node-4] 2026-02-08 06:08:44.021571 | orchestrator | 2026-02-08 06:08:44.021582 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] ********************* 2026-02-08 06:08:44.021593 | orchestrator | Sunday 08 February 2026 06:08:31 +0000 (0:00:00.149) 0:17:29.757 ******* 2026-02-08 06:08:44.021604 | orchestrator | skipping: [testbed-node-4] 2026-02-08 06:08:44.021615 | orchestrator | 2026-02-08 06:08:44.021625 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************ 2026-02-08 06:08:44.021637 | 
orchestrator | Sunday 08 February 2026 06:08:31 +0000 (0:00:00.186) 0:17:29.944 ******* 2026-02-08 06:08:44.021647 | orchestrator | skipping: [testbed-node-4] 2026-02-08 06:08:44.021659 | orchestrator | 2026-02-08 06:08:44.021670 | orchestrator | TASK [ceph-container-common : Load ceph dev image] ***************************** 2026-02-08 06:08:44.021681 | orchestrator | Sunday 08 February 2026 06:08:32 +0000 (0:00:00.154) 0:17:30.098 ******* 2026-02-08 06:08:44.021692 | orchestrator | skipping: [testbed-node-4] 2026-02-08 06:08:44.021702 | orchestrator | 2026-02-08 06:08:44.021713 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ****************** 2026-02-08 06:08:44.021724 | orchestrator | Sunday 08 February 2026 06:08:32 +0000 (0:00:00.177) 0:17:30.276 ******* 2026-02-08 06:08:44.021735 | orchestrator | skipping: [testbed-node-4] 2026-02-08 06:08:44.021746 | orchestrator | 2026-02-08 06:08:44.021757 | orchestrator | TASK [ceph-container-common : Get ceph version] ******************************** 2026-02-08 06:08:44.021822 | orchestrator | Sunday 08 February 2026 06:08:32 +0000 (0:00:00.168) 0:17:30.444 ******* 2026-02-08 06:08:44.021837 | orchestrator | ok: [testbed-node-4] 2026-02-08 06:08:44.021848 | orchestrator | 2026-02-08 06:08:44.021859 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] *** 2026-02-08 06:08:44.021870 | orchestrator | Sunday 08 February 2026 06:08:33 +0000 (0:00:01.462) 0:17:31.907 ******* 2026-02-08 06:08:44.021881 | orchestrator | ok: [testbed-node-4] 2026-02-08 06:08:44.021892 | orchestrator | 2026-02-08 06:08:44.021903 | orchestrator | TASK [ceph-container-common : Include release.yml] ***************************** 2026-02-08 06:08:44.021914 | orchestrator | Sunday 08 February 2026 06:08:34 +0000 (0:00:00.166) 0:17:32.073 ******* 2026-02-08 06:08:44.021925 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-4 
2026-02-08 06:08:44.021936 | orchestrator | 2026-02-08 06:08:44.021971 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] ********************* 2026-02-08 06:08:44.021991 | orchestrator | Sunday 08 February 2026 06:08:34 +0000 (0:00:00.214) 0:17:32.288 ******* 2026-02-08 06:08:44.022013 | orchestrator | skipping: [testbed-node-4] 2026-02-08 06:08:44.022093 | orchestrator | 2026-02-08 06:08:44.022104 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ******************** 2026-02-08 06:08:44.022116 | orchestrator | Sunday 08 February 2026 06:08:34 +0000 (0:00:00.155) 0:17:32.443 ******* 2026-02-08 06:08:44.022127 | orchestrator | skipping: [testbed-node-4] 2026-02-08 06:08:44.022138 | orchestrator | 2026-02-08 06:08:44.022148 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ****************** 2026-02-08 06:08:44.022159 | orchestrator | Sunday 08 February 2026 06:08:34 +0000 (0:00:00.178) 0:17:32.621 ******* 2026-02-08 06:08:44.022170 | orchestrator | skipping: [testbed-node-4] 2026-02-08 06:08:44.022181 | orchestrator | 2026-02-08 06:08:44.022192 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] ********************* 2026-02-08 06:08:44.022213 | orchestrator | Sunday 08 February 2026 06:08:34 +0000 (0:00:00.172) 0:17:32.794 ******* 2026-02-08 06:08:44.022224 | orchestrator | skipping: [testbed-node-4] 2026-02-08 06:08:44.022235 | orchestrator | 2026-02-08 06:08:44.022246 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ****************** 2026-02-08 06:08:44.022257 | orchestrator | Sunday 08 February 2026 06:08:35 +0000 (0:00:00.493) 0:17:33.288 ******* 2026-02-08 06:08:44.022268 | orchestrator | skipping: [testbed-node-4] 2026-02-08 06:08:44.022279 | orchestrator | 2026-02-08 06:08:44.022290 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] ******************* 2026-02-08 06:08:44.022301 | orchestrator | 
Sunday 08 February 2026 06:08:35 +0000 (0:00:00.152) 0:17:33.441 ******* 2026-02-08 06:08:44.022312 | orchestrator | skipping: [testbed-node-4] 2026-02-08 06:08:44.022323 | orchestrator | 2026-02-08 06:08:44.022333 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] ******************* 2026-02-08 06:08:44.022344 | orchestrator | Sunday 08 February 2026 06:08:35 +0000 (0:00:00.179) 0:17:33.620 ******* 2026-02-08 06:08:44.022361 | orchestrator | skipping: [testbed-node-4] 2026-02-08 06:08:44.022378 | orchestrator | 2026-02-08 06:08:44.022397 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ******************** 2026-02-08 06:08:44.022416 | orchestrator | Sunday 08 February 2026 06:08:35 +0000 (0:00:00.171) 0:17:33.792 ******* 2026-02-08 06:08:44.022436 | orchestrator | skipping: [testbed-node-4] 2026-02-08 06:08:44.022454 | orchestrator | 2026-02-08 06:08:44.022470 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] ********************** 2026-02-08 06:08:44.022482 | orchestrator | Sunday 08 February 2026 06:08:35 +0000 (0:00:00.161) 0:17:33.953 ******* 2026-02-08 06:08:44.022493 | orchestrator | ok: [testbed-node-4] 2026-02-08 06:08:44.022504 | orchestrator | 2026-02-08 06:08:44.022515 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] ********************** 2026-02-08 06:08:44.022526 | orchestrator | Sunday 08 February 2026 06:08:36 +0000 (0:00:00.279) 0:17:34.233 ******* 2026-02-08 06:08:44.022537 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-4 2026-02-08 06:08:44.022548 | orchestrator | 2026-02-08 06:08:44.022559 | orchestrator | TASK [ceph-config : Create ceph initial directories] *************************** 2026-02-08 06:08:44.022570 | orchestrator | Sunday 08 February 2026 06:08:36 +0000 (0:00:00.251) 0:17:34.484 ******* 2026-02-08 06:08:44.022581 | orchestrator | ok: [testbed-node-4] => 
(item=/etc/ceph) 2026-02-08 06:08:44.022593 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/) 2026-02-08 06:08:44.022604 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/mon) 2026-02-08 06:08:44.022614 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/osd) 2026-02-08 06:08:44.022625 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/mds) 2026-02-08 06:08:44.022636 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/tmp) 2026-02-08 06:08:44.022647 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/crash) 2026-02-08 06:08:44.022658 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/radosgw) 2026-02-08 06:08:44.022669 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rgw) 2026-02-08 06:08:44.022680 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mgr) 2026-02-08 06:08:44.022691 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds) 2026-02-08 06:08:44.022702 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd) 2026-02-08 06:08:44.022713 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd) 2026-02-08 06:08:44.022724 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-02-08 06:08:44.022735 | orchestrator | ok: [testbed-node-4] => (item=/var/run/ceph) 2026-02-08 06:08:44.022746 | orchestrator | ok: [testbed-node-4] => (item=/var/log/ceph) 2026-02-08 06:08:44.022757 | orchestrator | 2026-02-08 06:08:44.022767 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************ 2026-02-08 06:08:44.022778 | orchestrator | Sunday 08 February 2026 06:08:41 +0000 (0:00:05.537) 0:17:40.022 ******* 2026-02-08 06:08:44.022827 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-4 2026-02-08 06:08:44.022840 | orchestrator | 2026-02-08 06:08:44.022858 | orchestrator | TASK 
[ceph-config : Create rados gateway instance directories] ***************** 2026-02-08 06:08:44.022870 | orchestrator | Sunday 08 February 2026 06:08:42 +0000 (0:00:00.216) 0:17:40.238 ******* 2026-02-08 06:08:44.022881 | orchestrator | ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-02-08 06:08:44.022893 | orchestrator | 2026-02-08 06:08:44.022904 | orchestrator | TASK [ceph-config : Generate environment file] ********************************* 2026-02-08 06:08:44.022915 | orchestrator | Sunday 08 February 2026 06:08:42 +0000 (0:00:00.531) 0:17:40.770 ******* 2026-02-08 06:08:44.022926 | orchestrator | ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-02-08 06:08:44.022937 | orchestrator | 2026-02-08 06:08:44.022958 | orchestrator | TASK [ceph-config : Reset num_osds] ******************************************** 2026-02-08 06:09:03.446659 | orchestrator | Sunday 08 February 2026 06:08:44 +0000 (0:00:01.282) 0:17:42.053 ******* 2026-02-08 06:09:03.446756 | orchestrator | skipping: [testbed-node-4] 2026-02-08 06:09:03.446766 | orchestrator | 2026-02-08 06:09:03.446774 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] ********************* 2026-02-08 06:09:03.446780 | orchestrator | Sunday 08 February 2026 06:08:44 +0000 (0:00:00.142) 0:17:42.195 ******* 2026-02-08 06:09:03.446787 | orchestrator | skipping: [testbed-node-4] 2026-02-08 06:09:03.446856 | orchestrator | 2026-02-08 06:09:03.446865 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ****************** 2026-02-08 06:09:03.446871 | orchestrator | Sunday 08 February 2026 06:08:44 +0000 (0:00:00.138) 0:17:42.334 ******* 2026-02-08 06:09:03.446877 | orchestrator | skipping: [testbed-node-4] 2026-02-08 06:09:03.446883 | orchestrator | 2026-02-08 06:09:03.446889 | orchestrator | TASK [ceph-config : 
Set_fact rejected_devices] ********************************* 2026-02-08 06:09:03.446896 | orchestrator | Sunday 08 February 2026 06:08:44 +0000 (0:00:00.145) 0:17:42.479 ******* 2026-02-08 06:09:03.446902 | orchestrator | skipping: [testbed-node-4] 2026-02-08 06:09:03.446908 | orchestrator | 2026-02-08 06:09:03.446914 | orchestrator | TASK [ceph-config : Set_fact _devices] ***************************************** 2026-02-08 06:09:03.446920 | orchestrator | Sunday 08 February 2026 06:08:44 +0000 (0:00:00.140) 0:17:42.619 ******* 2026-02-08 06:09:03.446926 | orchestrator | skipping: [testbed-node-4] 2026-02-08 06:09:03.446932 | orchestrator | 2026-02-08 06:09:03.446938 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2026-02-08 06:09:03.446944 | orchestrator | Sunday 08 February 2026 06:08:44 +0000 (0:00:00.129) 0:17:42.749 ******* 2026-02-08 06:09:03.446950 | orchestrator | skipping: [testbed-node-4] 2026-02-08 06:09:03.446956 | orchestrator | 2026-02-08 06:09:03.446962 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2026-02-08 06:09:03.446968 | orchestrator | Sunday 08 February 2026 06:08:44 +0000 (0:00:00.158) 0:17:42.907 ******* 2026-02-08 06:09:03.446974 | orchestrator | skipping: [testbed-node-4] 2026-02-08 06:09:03.446980 | orchestrator | 2026-02-08 06:09:03.446986 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2026-02-08 06:09:03.446992 | orchestrator | Sunday 08 February 2026 06:08:44 +0000 (0:00:00.138) 0:17:43.045 ******* 2026-02-08 06:09:03.446998 | orchestrator | skipping: [testbed-node-4] 2026-02-08 06:09:03.447004 | orchestrator | 2026-02-08 06:09:03.447010 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] *** 2026-02-08 06:09:03.447016 | orchestrator | Sunday 08 
February 2026 06:08:45 +0000 (0:00:00.142) 0:17:43.188 ******* 2026-02-08 06:09:03.447021 | orchestrator | skipping: [testbed-node-4] 2026-02-08 06:09:03.447027 | orchestrator | 2026-02-08 06:09:03.447033 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] ********************* 2026-02-08 06:09:03.447055 | orchestrator | Sunday 08 February 2026 06:08:45 +0000 (0:00:00.141) 0:17:43.330 ******* 2026-02-08 06:09:03.447061 | orchestrator | skipping: [testbed-node-4] 2026-02-08 06:09:03.447067 | orchestrator | 2026-02-08 06:09:03.447073 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] ******************************* 2026-02-08 06:09:03.447079 | orchestrator | Sunday 08 February 2026 06:08:45 +0000 (0:00:00.134) 0:17:43.464 ******* 2026-02-08 06:09:03.447085 | orchestrator | ok: [testbed-node-4] 2026-02-08 06:09:03.447092 | orchestrator | 2026-02-08 06:09:03.447097 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] ************** 2026-02-08 06:09:03.447103 | orchestrator | Sunday 08 February 2026 06:08:45 +0000 (0:00:00.197) 0:17:43.662 ******* 2026-02-08 06:09:03.447109 | orchestrator | changed: [testbed-node-4 -> testbed-node-2(192.168.16.12)] 2026-02-08 06:09:03.447115 | orchestrator | 2026-02-08 06:09:03.447121 | orchestrator | TASK [ceph-config : Render rgw configs] **************************************** 2026-02-08 06:09:03.447127 | orchestrator | Sunday 08 February 2026 06:08:49 +0000 (0:00:03.449) 0:17:47.112 ******* 2026-02-08 06:09:03.447133 | orchestrator | ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-02-08 06:09:03.447141 | orchestrator | 2026-02-08 06:09:03.447147 | orchestrator | TASK [ceph-config : Set config to cluster] ************************************* 2026-02-08 06:09:03.447152 | orchestrator | Sunday 08 February 2026 06:08:49 +0000 (0:00:00.194) 0:17:47.306 ******* 2026-02-08 06:09:03.447160 | 
orchestrator | changed: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log'}]) 2026-02-08 06:09:03.447180 | orchestrator | changed: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.14:8081'}]) 2026-02-08 06:09:03.447187 | orchestrator | 2026-02-08 06:09:03.447193 | orchestrator | TASK [ceph-config : Set rgw configs to file] *********************************** 2026-02-08 06:09:03.447199 | orchestrator | Sunday 08 February 2026 06:08:56 +0000 (0:00:07.573) 0:17:54.880 ******* 2026-02-08 06:09:03.447205 | orchestrator | skipping: [testbed-node-4] 2026-02-08 06:09:03.447211 | orchestrator | 2026-02-08 06:09:03.447218 | orchestrator | TASK [ceph-config : Create ceph conf directory] ******************************** 2026-02-08 06:09:03.447225 | orchestrator | Sunday 08 February 2026 06:08:56 +0000 (0:00:00.155) 0:17:55.035 ******* 2026-02-08 06:09:03.447232 | orchestrator | skipping: [testbed-node-4] 2026-02-08 06:09:03.447239 | orchestrator | 2026-02-08 06:09:03.447258 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-02-08 06:09:03.447266 | orchestrator | Sunday 08 February 2026 06:08:57 +0000 (0:00:00.154) 0:17:55.190 ******* 2026-02-08 06:09:03.447273 | orchestrator | skipping: [testbed-node-4] 2026-02-08 06:09:03.447280 | orchestrator | 2026-02-08 06:09:03.447288 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to 
radosgw_address_block ipv4] **** 2026-02-08 06:09:03.447296 | orchestrator | Sunday 08 February 2026 06:08:57 +0000 (0:00:00.174) 0:17:55.364 ******* 2026-02-08 06:09:03.447303 | orchestrator | skipping: [testbed-node-4] 2026-02-08 06:09:03.447310 | orchestrator | 2026-02-08 06:09:03.447316 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-02-08 06:09:03.447322 | orchestrator | Sunday 08 February 2026 06:08:57 +0000 (0:00:00.207) 0:17:55.572 ******* 2026-02-08 06:09:03.447327 | orchestrator | skipping: [testbed-node-4] 2026-02-08 06:09:03.447333 | orchestrator | 2026-02-08 06:09:03.447339 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-02-08 06:09:03.447345 | orchestrator | Sunday 08 February 2026 06:08:57 +0000 (0:00:00.154) 0:17:55.726 ******* 2026-02-08 06:09:03.447355 | orchestrator | ok: [testbed-node-4] 2026-02-08 06:09:03.447361 | orchestrator | 2026-02-08 06:09:03.447367 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-02-08 06:09:03.447373 | orchestrator | Sunday 08 February 2026 06:08:57 +0000 (0:00:00.273) 0:17:55.999 ******* 2026-02-08 06:09:03.447378 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2026-02-08 06:09:03.447384 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2026-02-08 06:09:03.447390 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2026-02-08 06:09:03.447396 | orchestrator | skipping: [testbed-node-4] 2026-02-08 06:09:03.447402 | orchestrator | 2026-02-08 06:09:03.447408 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-02-08 06:09:03.447413 | orchestrator | Sunday 08 February 2026 06:08:58 +0000 (0:00:00.501) 0:17:56.500 ******* 2026-02-08 06:09:03.447419 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2026-02-08 06:09:03.447425 | 
orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2026-02-08 06:09:03.447431 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2026-02-08 06:09:03.447436 | orchestrator | skipping: [testbed-node-4] 2026-02-08 06:09:03.447442 | orchestrator | 2026-02-08 06:09:03.447448 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-02-08 06:09:03.447454 | orchestrator | Sunday 08 February 2026 06:08:58 +0000 (0:00:00.457) 0:17:56.958 ******* 2026-02-08 06:09:03.447459 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2026-02-08 06:09:03.447465 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2026-02-08 06:09:03.447471 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2026-02-08 06:09:03.447477 | orchestrator | skipping: [testbed-node-4] 2026-02-08 06:09:03.447482 | orchestrator | 2026-02-08 06:09:03.447488 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-02-08 06:09:03.447494 | orchestrator | Sunday 08 February 2026 06:08:59 +0000 (0:00:00.468) 0:17:57.427 ******* 2026-02-08 06:09:03.447500 | orchestrator | ok: [testbed-node-4] 2026-02-08 06:09:03.447506 | orchestrator | 2026-02-08 06:09:03.447511 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-02-08 06:09:03.447517 | orchestrator | Sunday 08 February 2026 06:08:59 +0000 (0:00:00.185) 0:17:57.612 ******* 2026-02-08 06:09:03.447523 | orchestrator | ok: [testbed-node-4] => (item=0) 2026-02-08 06:09:03.447529 | orchestrator | 2026-02-08 06:09:03.447535 | orchestrator | TASK [ceph-config : Generate Ceph file] **************************************** 2026-02-08 06:09:03.447541 | orchestrator | Sunday 08 February 2026 06:09:00 +0000 (0:00:00.448) 0:17:58.061 ******* 2026-02-08 06:09:03.447546 | orchestrator | changed: [testbed-node-4] 2026-02-08 06:09:03.447552 | orchestrator | 
2026-02-08 06:09:03.447558 | orchestrator | TASK [ceph-osd : Set_fact add_osd] ********************************************* 2026-02-08 06:09:03.447564 | orchestrator | Sunday 08 February 2026 06:09:01 +0000 (0:00:01.669) 0:17:59.730 ******* 2026-02-08 06:09:03.447569 | orchestrator | ok: [testbed-node-4] 2026-02-08 06:09:03.447575 | orchestrator | 2026-02-08 06:09:03.447581 | orchestrator | TASK [ceph-osd : Set_fact container_exec_cmd] ********************************** 2026-02-08 06:09:03.447587 | orchestrator | Sunday 08 February 2026 06:09:01 +0000 (0:00:00.153) 0:17:59.884 ******* 2026-02-08 06:09:03.447592 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-08 06:09:03.447599 | orchestrator | ok: [testbed-node-4 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-08 06:09:03.447605 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-08 06:09:03.447610 | orchestrator | 2026-02-08 06:09:03.447616 | orchestrator | TASK [ceph-osd : Include_tasks system_tuning.yml] ****************************** 2026-02-08 06:09:03.447622 | orchestrator | Sunday 08 February 2026 06:09:02 +0000 (0:00:00.676) 0:18:00.561 ******* 2026-02-08 06:09:03.447628 | orchestrator | included: /ansible/roles/ceph-osd/tasks/system_tuning.yml for testbed-node-4 2026-02-08 06:09:03.447639 | orchestrator | 2026-02-08 06:09:03.447648 | orchestrator | TASK [ceph-osd : Create tmpfiles.d directory] ********************************** 2026-02-08 06:09:03.447654 | orchestrator | Sunday 08 February 2026 06:09:02 +0000 (0:00:00.196) 0:18:00.757 ******* 2026-02-08 06:09:03.447659 | orchestrator | skipping: [testbed-node-4] 2026-02-08 06:09:03.447665 | orchestrator | 2026-02-08 06:09:03.447671 | orchestrator | TASK [ceph-osd : Disable transparent hugepage] ********************************* 2026-02-08 06:09:03.447677 | orchestrator | Sunday 08 February 2026 06:09:02 +0000 (0:00:00.124) 
0:18:00.881 ******* 2026-02-08 06:09:03.447683 | orchestrator | skipping: [testbed-node-4] 2026-02-08 06:09:03.447688 | orchestrator | 2026-02-08 06:09:03.447694 | orchestrator | TASK [ceph-osd : Get default vm.min_free_kbytes] ******************************* 2026-02-08 06:09:03.447700 | orchestrator | Sunday 08 February 2026 06:09:02 +0000 (0:00:00.145) 0:18:01.027 ******* 2026-02-08 06:09:03.447706 | orchestrator | ok: [testbed-node-4] 2026-02-08 06:09:03.447711 | orchestrator | 2026-02-08 06:09:03.447720 | orchestrator | TASK [ceph-osd : Set_fact vm_min_free_kbytes] ********************************** 2026-02-08 06:09:42.907545 | orchestrator | Sunday 08 February 2026 06:09:03 +0000 (0:00:00.458) 0:18:01.485 ******* 2026-02-08 06:09:42.907691 | orchestrator | ok: [testbed-node-4] 2026-02-08 06:09:42.907719 | orchestrator | 2026-02-08 06:09:42.907743 | orchestrator | TASK [ceph-osd : Apply operating system tuning] ******************************** 2026-02-08 06:09:42.907764 | orchestrator | Sunday 08 February 2026 06:09:03 +0000 (0:00:00.161) 0:18:01.647 ******* 2026-02-08 06:09:42.907785 | orchestrator | ok: [testbed-node-4] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2026-02-08 06:09:42.907875 | orchestrator | ok: [testbed-node-4] => (item={'name': 'fs.file-max', 'value': 26234859}) 2026-02-08 06:09:42.907902 | orchestrator | ok: [testbed-node-4] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2026-02-08 06:09:42.907923 | orchestrator | ok: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 10}) 2026-02-08 06:09:42.907943 | orchestrator | ok: [testbed-node-4] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2026-02-08 06:09:42.907964 | orchestrator | 2026-02-08 06:09:42.907984 | orchestrator | TASK [ceph-osd : Install dependencies] ***************************************** 2026-02-08 06:09:42.908004 | orchestrator | Sunday 08 February 2026 06:09:05 +0000 (0:00:01.833) 0:18:03.481 ******* 2026-02-08 
06:09:42.908025 | orchestrator | skipping: [testbed-node-4] 2026-02-08 06:09:42.908046 | orchestrator | 2026-02-08 06:09:42.908067 | orchestrator | TASK [ceph-osd : Include_tasks common.yml] ************************************* 2026-02-08 06:09:42.908088 | orchestrator | Sunday 08 February 2026 06:09:05 +0000 (0:00:00.181) 0:18:03.662 ******* 2026-02-08 06:09:42.908108 | orchestrator | included: /ansible/roles/ceph-osd/tasks/common.yml for testbed-node-4 2026-02-08 06:09:42.908129 | orchestrator | 2026-02-08 06:09:42.908149 | orchestrator | TASK [ceph-osd : Create bootstrap-osd and osd directories] ********************* 2026-02-08 06:09:42.908169 | orchestrator | Sunday 08 February 2026 06:09:06 +0000 (0:00:00.536) 0:18:04.199 ******* 2026-02-08 06:09:42.908190 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd/) 2026-02-08 06:09:42.908210 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/osd/) 2026-02-08 06:09:42.908230 | orchestrator | 2026-02-08 06:09:42.908250 | orchestrator | TASK [ceph-osd : Get keys from monitors] *************************************** 2026-02-08 06:09:42.908271 | orchestrator | Sunday 08 February 2026 06:09:06 +0000 (0:00:00.835) 0:18:05.034 ******* 2026-02-08 06:09:42.908291 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-08 06:09:42.908312 | orchestrator | skipping: [testbed-node-4] => (item=None)  2026-02-08 06:09:42.908333 | orchestrator | ok: [testbed-node-4 -> {{ groups.get(mon_group_name)[0] }}] 2026-02-08 06:09:42.908353 | orchestrator | 2026-02-08 06:09:42.908374 | orchestrator | TASK [ceph-osd : Copy ceph key(s) if needed] *********************************** 2026-02-08 06:09:42.908395 | orchestrator | Sunday 08 February 2026 06:09:09 +0000 (0:00:02.237) 0:18:07.272 ******* 2026-02-08 06:09:42.908413 | orchestrator | ok: [testbed-node-4] => (item=None) 2026-02-08 06:09:42.908462 | orchestrator | skipping: [testbed-node-4] => (item=None)  2026-02-08 
06:09:42.908481 | orchestrator | ok: [testbed-node-4] 2026-02-08 06:09:42.908499 | orchestrator | 2026-02-08 06:09:42.908517 | orchestrator | TASK [ceph-osd : Set noup flag] ************************************************ 2026-02-08 06:09:42.908535 | orchestrator | Sunday 08 February 2026 06:09:10 +0000 (0:00:00.937) 0:18:08.209 ******* 2026-02-08 06:09:42.908553 | orchestrator | skipping: [testbed-node-4] 2026-02-08 06:09:42.908571 | orchestrator | 2026-02-08 06:09:42.908589 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm.yml] ****************************** 2026-02-08 06:09:42.908607 | orchestrator | Sunday 08 February 2026 06:09:10 +0000 (0:00:00.285) 0:18:08.495 ******* 2026-02-08 06:09:42.908625 | orchestrator | skipping: [testbed-node-4] 2026-02-08 06:09:42.908643 | orchestrator | 2026-02-08 06:09:42.908661 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm-batch.yml] ************************ 2026-02-08 06:09:42.908678 | orchestrator | Sunday 08 February 2026 06:09:10 +0000 (0:00:00.154) 0:18:08.649 ******* 2026-02-08 06:09:42.908693 | orchestrator | skipping: [testbed-node-4] 2026-02-08 06:09:42.908708 | orchestrator | 2026-02-08 06:09:42.908723 | orchestrator | TASK [ceph-osd : Include_tasks start_osds.yml] ********************************* 2026-02-08 06:09:42.908739 | orchestrator | Sunday 08 February 2026 06:09:10 +0000 (0:00:00.142) 0:18:08.792 ******* 2026-02-08 06:09:42.908754 | orchestrator | included: /ansible/roles/ceph-osd/tasks/start_osds.yml for testbed-node-4 2026-02-08 06:09:42.908769 | orchestrator | 2026-02-08 06:09:42.908785 | orchestrator | TASK [ceph-osd : Get osd ids] ************************************************** 2026-02-08 06:09:42.908800 | orchestrator | Sunday 08 February 2026 06:09:10 +0000 (0:00:00.221) 0:18:09.013 ******* 2026-02-08 06:09:42.908844 | orchestrator | ok: [testbed-node-4] 2026-02-08 06:09:42.908862 | orchestrator | 2026-02-08 06:09:42.908879 | orchestrator | TASK [ceph-osd : Collect osd 
ids] ********************************************** 2026-02-08 06:09:42.908897 | orchestrator | Sunday 08 February 2026 06:09:11 +0000 (0:00:00.463) 0:18:09.476 ******* 2026-02-08 06:09:42.908914 | orchestrator | ok: [testbed-node-4] 2026-02-08 06:09:42.908931 | orchestrator | 2026-02-08 06:09:42.908948 | orchestrator | TASK [ceph-osd : Include_tasks systemd.yml] ************************************ 2026-02-08 06:09:42.908984 | orchestrator | Sunday 08 February 2026 06:09:13 +0000 (0:00:02.318) 0:18:11.794 ******* 2026-02-08 06:09:42.909004 | orchestrator | included: /ansible/roles/ceph-osd/tasks/systemd.yml for testbed-node-4 2026-02-08 06:09:42.909020 | orchestrator | 2026-02-08 06:09:42.909037 | orchestrator | TASK [ceph-osd : Generate systemd unit file] *********************************** 2026-02-08 06:09:42.909054 | orchestrator | Sunday 08 February 2026 06:09:13 +0000 (0:00:00.247) 0:18:12.042 ******* 2026-02-08 06:09:42.909072 | orchestrator | ok: [testbed-node-4] 2026-02-08 06:09:42.909090 | orchestrator | 2026-02-08 06:09:42.909106 | orchestrator | TASK [ceph-osd : Generate systemd ceph-osd target file] ************************ 2026-02-08 06:09:42.909124 | orchestrator | Sunday 08 February 2026 06:09:15 +0000 (0:00:01.337) 0:18:13.379 ******* 2026-02-08 06:09:42.909142 | orchestrator | ok: [testbed-node-4] 2026-02-08 06:09:42.909160 | orchestrator | 2026-02-08 06:09:42.909178 | orchestrator | TASK [ceph-osd : Enable ceph-osd.target] *************************************** 2026-02-08 06:09:42.909223 | orchestrator | Sunday 08 February 2026 06:09:16 +0000 (0:00:00.957) 0:18:14.337 ******* 2026-02-08 06:09:42.909243 | orchestrator | ok: [testbed-node-4] 2026-02-08 06:09:42.909261 | orchestrator | 2026-02-08 06:09:42.909279 | orchestrator | TASK [ceph-osd : Ensure systemd service override directory exists] ************* 2026-02-08 06:09:42.909297 | orchestrator | Sunday 08 February 2026 06:09:17 +0000 (0:00:01.260) 0:18:15.597 ******* 2026-02-08 
06:09:42.909315 | orchestrator | skipping: [testbed-node-4] 2026-02-08 06:09:42.909333 | orchestrator | 2026-02-08 06:09:42.909351 | orchestrator | TASK [ceph-osd : Add ceph-osd systemd service overrides] *********************** 2026-02-08 06:09:42.909368 | orchestrator | Sunday 08 February 2026 06:09:17 +0000 (0:00:00.145) 0:18:15.743 ******* 2026-02-08 06:09:42.909386 | orchestrator | skipping: [testbed-node-4] 2026-02-08 06:09:42.909404 | orchestrator | 2026-02-08 06:09:42.909437 | orchestrator | TASK [ceph-osd : Ensure /var/lib/ceph/osd/- is present] ********* 2026-02-08 06:09:42.909456 | orchestrator | Sunday 08 February 2026 06:09:17 +0000 (0:00:00.172) 0:18:15.916 ******* 2026-02-08 06:09:42.909474 | orchestrator | ok: [testbed-node-4] => (item=1) 2026-02-08 06:09:42.909492 | orchestrator | ok: [testbed-node-4] => (item=5) 2026-02-08 06:09:42.909510 | orchestrator | 2026-02-08 06:09:42.909529 | orchestrator | TASK [ceph-osd : Write run file in /var/lib/ceph/osd/xxxx/run] ***************** 2026-02-08 06:09:42.909547 | orchestrator | Sunday 08 February 2026 06:09:18 +0000 (0:00:00.866) 0:18:16.782 ******* 2026-02-08 06:09:42.909565 | orchestrator | ok: [testbed-node-4] => (item=1) 2026-02-08 06:09:42.909583 | orchestrator | ok: [testbed-node-4] => (item=5) 2026-02-08 06:09:42.909601 | orchestrator | 2026-02-08 06:09:42.909619 | orchestrator | TASK [ceph-osd : Systemd start osd] ******************************************** 2026-02-08 06:09:42.909637 | orchestrator | Sunday 08 February 2026 06:09:20 +0000 (0:00:01.891) 0:18:18.674 ******* 2026-02-08 06:09:42.909655 | orchestrator | changed: [testbed-node-4] => (item=1) 2026-02-08 06:09:42.909674 | orchestrator | changed: [testbed-node-4] => (item=5) 2026-02-08 06:09:42.909692 | orchestrator | 2026-02-08 06:09:42.909710 | orchestrator | TASK [ceph-osd : Unset noup flag] ********************************************** 2026-02-08 06:09:42.909728 | orchestrator | Sunday 08 February 2026 06:09:24 +0000 (0:00:03.619) 
0:18:22.294 ******* 2026-02-08 06:09:42.909744 | orchestrator | skipping: [testbed-node-4] 2026-02-08 06:09:42.909759 | orchestrator | 2026-02-08 06:09:42.909775 | orchestrator | TASK [ceph-osd : Wait for all osd to be up] ************************************ 2026-02-08 06:09:42.909791 | orchestrator | Sunday 08 February 2026 06:09:24 +0000 (0:00:00.252) 0:18:22.546 ******* 2026-02-08 06:09:42.909840 | orchestrator | skipping: [testbed-node-4] 2026-02-08 06:09:42.909858 | orchestrator | 2026-02-08 06:09:42.909875 | orchestrator | TASK [ceph-osd : Include crush_rules.yml] ************************************** 2026-02-08 06:09:42.909892 | orchestrator | Sunday 08 February 2026 06:09:24 +0000 (0:00:00.246) 0:18:22.793 ******* 2026-02-08 06:09:42.909908 | orchestrator | skipping: [testbed-node-4] 2026-02-08 06:09:42.909924 | orchestrator | 2026-02-08 06:09:42.909940 | orchestrator | TASK [Scan ceph-disk osds with ceph-volume if deploying nautilus] ************** 2026-02-08 06:09:42.909957 | orchestrator | Sunday 08 February 2026 06:09:25 +0000 (0:00:00.305) 0:18:23.098 ******* 2026-02-08 06:09:42.909973 | orchestrator | skipping: [testbed-node-4] 2026-02-08 06:09:42.909988 | orchestrator | 2026-02-08 06:09:42.910003 | orchestrator | TASK [Activate scanned ceph-disk osds and migrate to ceph-volume if deploying nautilus] *** 2026-02-08 06:09:42.910083 | orchestrator | Sunday 08 February 2026 06:09:25 +0000 (0:00:00.141) 0:18:23.239 ******* 2026-02-08 06:09:42.910113 | orchestrator | skipping: [testbed-node-4] 2026-02-08 06:09:42.910129 | orchestrator | 2026-02-08 06:09:42.910145 | orchestrator | TASK [Waiting for clean pgs...] ************************************************ 2026-02-08 06:09:42.910161 | orchestrator | Sunday 08 February 2026 06:09:25 +0000 (0:00:00.527) 0:18:23.766 ******* 2026-02-08 06:09:42.910177 | orchestrator | FAILED - RETRYING: [testbed-node-4 -> testbed-node-0]: Waiting for clean pgs... (600 retries left). 
2026-02-08 06:09:42.910194 | orchestrator | FAILED - RETRYING: [testbed-node-4 -> testbed-node-0]: Waiting for clean pgs... (599 retries left). 2026-02-08 06:09:42.910211 | orchestrator | FAILED - RETRYING: [testbed-node-4 -> testbed-node-0]: Waiting for clean pgs... (598 retries left). 2026-02-08 06:09:42.910227 | orchestrator | FAILED - RETRYING: [testbed-node-4 -> testbed-node-0]: Waiting for clean pgs... (597 retries left). 2026-02-08 06:09:42.910244 | orchestrator | FAILED - RETRYING: [testbed-node-4 -> testbed-node-0]: Waiting for clean pgs... (596 retries left). 2026-02-08 06:09:42.910260 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] 2026-02-08 06:09:42.910277 | orchestrator | 2026-02-08 06:09:42.910294 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-02-08 06:09:42.910310 | orchestrator | Sunday 08 February 2026 06:09:42 +0000 (0:00:16.454) 0:18:40.221 ******* 2026-02-08 06:09:42.910326 | orchestrator | skipping: [testbed-node-4] 2026-02-08 06:09:42.910356 | orchestrator | 2026-02-08 06:09:42.910372 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] ********************************** 2026-02-08 06:09:42.910388 | orchestrator | Sunday 08 February 2026 06:09:42 +0000 (0:00:00.157) 0:18:40.378 ******* 2026-02-08 06:09:42.910403 | orchestrator | skipping: [testbed-node-4] 2026-02-08 06:09:42.910416 | orchestrator | 2026-02-08 06:09:42.910439 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] ********************************** 2026-02-08 06:09:42.910452 | orchestrator | Sunday 08 February 2026 06:09:42 +0000 (0:00:00.143) 0:18:40.521 ******* 2026-02-08 06:09:42.910466 | orchestrator | skipping: [testbed-node-4] 2026-02-08 06:09:42.910479 | orchestrator | 2026-02-08 06:09:42.910493 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] ********************************** 2026-02-08 06:09:42.910507 | orchestrator | Sunday 08 February 2026 06:09:42 +0000 
(0:00:00.141) 0:18:40.663 ******* 2026-02-08 06:09:42.910520 | orchestrator | skipping: [testbed-node-4] 2026-02-08 06:09:42.910533 | orchestrator | 2026-02-08 06:09:42.910541 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] ********************************** 2026-02-08 06:09:42.910549 | orchestrator | Sunday 08 February 2026 06:09:42 +0000 (0:00:00.130) 0:18:40.793 ******* 2026-02-08 06:09:42.910557 | orchestrator | skipping: [testbed-node-4] 2026-02-08 06:09:42.910565 | orchestrator | 2026-02-08 06:09:42.910587 | orchestrator | RUNNING HANDLER [ceph-handler : Rbdmirrors handler] **************************** 2026-02-08 06:09:51.646254 | orchestrator | Sunday 08 February 2026 06:09:42 +0000 (0:00:00.152) 0:18:40.945 ******* 2026-02-08 06:09:51.646330 | orchestrator | skipping: [testbed-node-4] 2026-02-08 06:09:51.646337 | orchestrator | 2026-02-08 06:09:51.646342 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] ********************************** 2026-02-08 06:09:51.646347 | orchestrator | Sunday 08 February 2026 06:09:43 +0000 (0:00:00.128) 0:18:41.074 ******* 2026-02-08 06:09:51.646351 | orchestrator | skipping: [testbed-node-4] 2026-02-08 06:09:51.646355 | orchestrator | 2026-02-08 06:09:51.646360 | orchestrator | PLAY [Upgrade ceph osds cluster] *********************************************** 2026-02-08 06:09:51.646364 | orchestrator | 2026-02-08 06:09:51.646368 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-02-08 06:09:51.646372 | orchestrator | Sunday 08 February 2026 06:09:43 +0000 (0:00:00.570) 0:18:41.644 ******* 2026-02-08 06:09:51.646376 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-5 2026-02-08 06:09:51.646380 | orchestrator | 2026-02-08 06:09:51.646384 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-02-08 06:09:51.646388 | orchestrator | Sunday 08 February 2026 06:09:43 +0000 
(0:00:00.248) 0:18:41.893 ******* 2026-02-08 06:09:51.646392 | orchestrator | ok: [testbed-node-5] 2026-02-08 06:09:51.646396 | orchestrator | 2026-02-08 06:09:51.646400 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-02-08 06:09:51.646404 | orchestrator | Sunday 08 February 2026 06:09:44 +0000 (0:00:00.829) 0:18:42.722 ******* 2026-02-08 06:09:51.646408 | orchestrator | ok: [testbed-node-5] 2026-02-08 06:09:51.646411 | orchestrator | 2026-02-08 06:09:51.646415 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-02-08 06:09:51.646419 | orchestrator | Sunday 08 February 2026 06:09:44 +0000 (0:00:00.139) 0:18:42.861 ******* 2026-02-08 06:09:51.646423 | orchestrator | ok: [testbed-node-5] 2026-02-08 06:09:51.646426 | orchestrator | 2026-02-08 06:09:51.646430 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-02-08 06:09:51.646434 | orchestrator | Sunday 08 February 2026 06:09:45 +0000 (0:00:00.441) 0:18:43.303 ******* 2026-02-08 06:09:51.646438 | orchestrator | ok: [testbed-node-5] 2026-02-08 06:09:51.646442 | orchestrator | 2026-02-08 06:09:51.646445 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-02-08 06:09:51.646449 | orchestrator | Sunday 08 February 2026 06:09:45 +0000 (0:00:00.158) 0:18:43.462 ******* 2026-02-08 06:09:51.646453 | orchestrator | ok: [testbed-node-5] 2026-02-08 06:09:51.646457 | orchestrator | 2026-02-08 06:09:51.646460 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-02-08 06:09:51.646479 | orchestrator | Sunday 08 February 2026 06:09:45 +0000 (0:00:00.152) 0:18:43.614 ******* 2026-02-08 06:09:51.646483 | orchestrator | ok: [testbed-node-5] 2026-02-08 06:09:51.646487 | orchestrator | 2026-02-08 06:09:51.646491 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python 
if not previously set] *** 2026-02-08 06:09:51.646495 | orchestrator | Sunday 08 February 2026 06:09:45 +0000 (0:00:00.167) 0:18:43.782 ******* 2026-02-08 06:09:51.646499 | orchestrator | skipping: [testbed-node-5] 2026-02-08 06:09:51.646503 | orchestrator | 2026-02-08 06:09:51.646506 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2026-02-08 06:09:51.646510 | orchestrator | Sunday 08 February 2026 06:09:45 +0000 (0:00:00.168) 0:18:43.951 ******* 2026-02-08 06:09:51.646514 | orchestrator | ok: [testbed-node-5] 2026-02-08 06:09:51.646518 | orchestrator | 2026-02-08 06:09:51.646521 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2026-02-08 06:09:51.646525 | orchestrator | Sunday 08 February 2026 06:09:46 +0000 (0:00:00.164) 0:18:44.115 ******* 2026-02-08 06:09:51.646529 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-08 06:09:51.646533 | orchestrator | ok: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-08 06:09:51.646537 | orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-08 06:09:51.646540 | orchestrator | 2026-02-08 06:09:51.646544 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2026-02-08 06:09:51.646548 | orchestrator | Sunday 08 February 2026 06:09:47 +0000 (0:00:00.994) 0:18:45.110 ******* 2026-02-08 06:09:51.646552 | orchestrator | ok: [testbed-node-5] 2026-02-08 06:09:51.646555 | orchestrator | 2026-02-08 06:09:51.646559 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-02-08 06:09:51.646563 | orchestrator | Sunday 08 February 2026 06:09:47 +0000 (0:00:00.262) 0:18:45.372 ******* 2026-02-08 06:09:51.646567 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-08 
06:09:51.646570 | orchestrator | ok: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-08 06:09:51.646574 | orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-08 06:09:51.646578 | orchestrator | 2026-02-08 06:09:51.646582 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-02-08 06:09:51.646595 | orchestrator | Sunday 08 February 2026 06:09:49 +0000 (0:00:02.200) 0:18:47.573 ******* 2026-02-08 06:09:51.646599 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2026-02-08 06:09:51.646603 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2026-02-08 06:09:51.646607 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2026-02-08 06:09:51.646611 | orchestrator | skipping: [testbed-node-5] 2026-02-08 06:09:51.646614 | orchestrator | 2026-02-08 06:09:51.646618 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-02-08 06:09:51.646622 | orchestrator | Sunday 08 February 2026 06:09:49 +0000 (0:00:00.431) 0:18:48.004 ******* 2026-02-08 06:09:51.646626 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-02-08 06:09:51.646642 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-02-08 06:09:51.646646 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  
2026-02-08 06:09:51.646650 | orchestrator | skipping: [testbed-node-5] 2026-02-08 06:09:51.646654 | orchestrator | 2026-02-08 06:09:51.646662 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-02-08 06:09:51.646666 | orchestrator | Sunday 08 February 2026 06:09:50 +0000 (0:00:00.969) 0:18:48.974 ******* 2026-02-08 06:09:51.646671 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-08 06:09:51.646678 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-08 06:09:51.646682 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-08 06:09:51.646686 | orchestrator | skipping: [testbed-node-5] 2026-02-08 06:09:51.646690 | orchestrator | 2026-02-08 06:09:51.646693 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-02-08 06:09:51.646697 | 
orchestrator | Sunday 08 February 2026 06:09:51 +0000 (0:00:00.175) 0:18:49.149 ******* 2026-02-08 06:09:51.646702 | orchestrator | ok: [testbed-node-5] => (item={'changed': False, 'stdout': 'd0204e5b0336', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-02-08 06:09:47.825329', 'end': '2026-02-08 06:09:47.882888', 'delta': '0:00:00.057559', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['d0204e5b0336'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2026-02-08 06:09:51.646712 | orchestrator | ok: [testbed-node-5] => (item={'changed': False, 'stdout': 'c9ff2bec9773', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-02-08 06:09:48.401883', 'end': '2026-02-08 06:09:48.451414', 'delta': '0:00:00.049531', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['c9ff2bec9773'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2026-02-08 06:09:51.646719 | orchestrator | ok: [testbed-node-5] => (item={'changed': False, 'stdout': '26deff989c40', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-02-08 06:09:49.326452', 'end': '2026-02-08 06:09:49.375560', 'delta': '0:00:00.049108', 'msg': 
'', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['26deff989c40'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2026-02-08 06:09:55.672886 | orchestrator | 2026-02-08 06:09:55.672979 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-02-08 06:09:55.672993 | orchestrator | Sunday 08 February 2026 06:09:51 +0000 (0:00:00.537) 0:18:49.687 ******* 2026-02-08 06:09:55.673002 | orchestrator | ok: [testbed-node-5] 2026-02-08 06:09:55.673014 | orchestrator | 2026-02-08 06:09:55.673028 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-02-08 06:09:55.673044 | orchestrator | Sunday 08 February 2026 06:09:51 +0000 (0:00:00.289) 0:18:49.977 ******* 2026-02-08 06:09:55.673057 | orchestrator | skipping: [testbed-node-5] 2026-02-08 06:09:55.673072 | orchestrator | 2026-02-08 06:09:55.673086 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2026-02-08 06:09:55.673100 | orchestrator | Sunday 08 February 2026 06:09:52 +0000 (0:00:00.255) 0:18:50.232 ******* 2026-02-08 06:09:55.673112 | orchestrator | ok: [testbed-node-5] 2026-02-08 06:09:55.673120 | orchestrator | 2026-02-08 06:09:55.673128 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2026-02-08 06:09:55.673136 | orchestrator | Sunday 08 February 2026 06:09:52 +0000 (0:00:00.150) 0:18:50.382 ******* 2026-02-08 06:09:55.673144 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2026-02-08 06:09:55.673152 | orchestrator | 2026-02-08 06:09:55.673161 | orchestrator | TASK 
[ceph-facts : Set_fact fsid] ********************************************** 2026-02-08 06:09:55.673169 | orchestrator | Sunday 08 February 2026 06:09:53 +0000 (0:00:01.064) 0:18:51.447 ******* 2026-02-08 06:09:55.673177 | orchestrator | ok: [testbed-node-5] 2026-02-08 06:09:55.673185 | orchestrator | 2026-02-08 06:09:55.673192 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-02-08 06:09:55.673200 | orchestrator | Sunday 08 February 2026 06:09:53 +0000 (0:00:00.166) 0:18:51.613 ******* 2026-02-08 06:09:55.673208 | orchestrator | skipping: [testbed-node-5] 2026-02-08 06:09:55.673216 | orchestrator | 2026-02-08 06:09:55.673224 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-02-08 06:09:55.673232 | orchestrator | Sunday 08 February 2026 06:09:53 +0000 (0:00:00.136) 0:18:51.749 ******* 2026-02-08 06:09:55.673240 | orchestrator | skipping: [testbed-node-5] 2026-02-08 06:09:55.673247 | orchestrator | 2026-02-08 06:09:55.673255 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-02-08 06:09:55.673263 | orchestrator | Sunday 08 February 2026 06:09:53 +0000 (0:00:00.251) 0:18:52.001 ******* 2026-02-08 06:09:55.673271 | orchestrator | skipping: [testbed-node-5] 2026-02-08 06:09:55.673279 | orchestrator | 2026-02-08 06:09:55.673287 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-02-08 06:09:55.673295 | orchestrator | Sunday 08 February 2026 06:09:54 +0000 (0:00:00.178) 0:18:52.179 ******* 2026-02-08 06:09:55.673303 | orchestrator | skipping: [testbed-node-5] 2026-02-08 06:09:55.673311 | orchestrator | 2026-02-08 06:09:55.673318 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-02-08 06:09:55.673326 | orchestrator | Sunday 08 February 2026 06:09:54 +0000 (0:00:00.167) 0:18:52.347 ******* 2026-02-08 06:09:55.673334 
| orchestrator | ok: [testbed-node-5] 2026-02-08 06:09:55.673342 | orchestrator | 2026-02-08 06:09:55.673350 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-02-08 06:09:55.673358 | orchestrator | Sunday 08 February 2026 06:09:54 +0000 (0:00:00.184) 0:18:52.532 ******* 2026-02-08 06:09:55.673367 | orchestrator | skipping: [testbed-node-5] 2026-02-08 06:09:55.673377 | orchestrator | 2026-02-08 06:09:55.673386 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-02-08 06:09:55.673396 | orchestrator | Sunday 08 February 2026 06:09:54 +0000 (0:00:00.129) 0:18:52.662 ******* 2026-02-08 06:09:55.673405 | orchestrator | ok: [testbed-node-5] 2026-02-08 06:09:55.673414 | orchestrator | 2026-02-08 06:09:55.673423 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-02-08 06:09:55.673454 | orchestrator | Sunday 08 February 2026 06:09:54 +0000 (0:00:00.173) 0:18:52.835 ******* 2026-02-08 06:09:55.673464 | orchestrator | skipping: [testbed-node-5] 2026-02-08 06:09:55.673474 | orchestrator | 2026-02-08 06:09:55.673483 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-02-08 06:09:55.673493 | orchestrator | Sunday 08 February 2026 06:09:55 +0000 (0:00:00.458) 0:18:53.294 ******* 2026-02-08 06:09:55.673503 | orchestrator | ok: [testbed-node-5] 2026-02-08 06:09:55.673513 | orchestrator | 2026-02-08 06:09:55.673522 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-02-08 06:09:55.673533 | orchestrator | Sunday 08 February 2026 06:09:55 +0000 (0:00:00.192) 0:18:53.486 ******* 2026-02-08 06:09:55.673558 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 
'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-08 06:09:55.673587 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--b3e05e81--e469--5668--9a53--5e8f92025307-osd--block--b3e05e81--e469--5668--9a53--5e8f92025307', 'dm-uuid-LVM-aSc4wxc22lxsBl3bsZYV01tD6GZC6hnhTrIfEw7ihzB97cb6a1fBLoGV7FMIqFCx'], 'uuids': ['1cd382bb-e631-457f-8e23-7bd2a48df188'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '88e353e1', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['TrIfEw-7ihz-B97c-b6a1-fBLo-GV7F-MIqFCx']}})  2026-02-08 06:09:55.673598 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_380fccde-fc16-4afd-8581-e221e230c62f', 'scsi-SQEMU_QEMU_HARDDISK_380fccde-fc16-4afd-8581-e221e230c62f'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '380fccde', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}})  2026-02-08 06:09:55.673608 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-rTZdUO-b7Sr-jZr2-VH77-tbu4-uLLV-wifIFG', 'scsi-0QEMU_QEMU_HARDDISK_fd096023-3e18-4205-a743-fc49c7d9ed02', 'scsi-SQEMU_QEMU_HARDDISK_fd096023-3e18-4205-a743-fc49c7d9ed02'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'fd096023', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--7ad89cb8--326d--5a7d--8045--6e04c12be05a-osd--block--7ad89cb8--326d--5a7d--8045--6e04c12be05a']}})  2026-02-08 06:09:55.673617 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-08 06:09:55.673626 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-08 06:09:55.673641 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-08-02-32-42-00'], 'labels': 
['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-02-08 06:09:55.673655 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-08 06:09:55.673663 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-UV2PuY-Xg8S-GZaH-izGQ-3g7w-DUil-CwJsOM', 'dm-uuid-CRYPT-LUKS2-acd8aeb3a91948899ba0cb5b1d4bdfb1-UV2PuY-Xg8S-GZaH-izGQ-3g7w-DUil-CwJsOM'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-02-08 06:09:55.673679 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-08 06:09:56.016501 | orchestrator | skipping: [testbed-node-5] => 
(item={'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--7ad89cb8--326d--5a7d--8045--6e04c12be05a-osd--block--7ad89cb8--326d--5a7d--8045--6e04c12be05a', 'dm-uuid-LVM-rcT9E5PUfbc9LNMW28fCAUBBadlLkYxWUV2PuYXg8SGZaHizGQ3g7wDUilCwJsOM'], 'uuids': ['acd8aeb3-a919-4889-9ba0-cb5b1d4bdfb1'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'fd096023', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['UV2PuY-Xg8S-GZaH-izGQ-3g7w-DUil-CwJsOM']}})  2026-02-08 06:09:56.016609 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-0tVmcc-zy2m-2uFM-aZrn-CnbJ-B058-J5TU15', 'scsi-0QEMU_QEMU_HARDDISK_88e353e1-d5f5-455b-9174-972f0fde258a', 'scsi-SQEMU_QEMU_HARDDISK_88e353e1-d5f5-455b-9174-972f0fde258a'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '88e353e1', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--b3e05e81--e469--5668--9a53--5e8f92025307-osd--block--b3e05e81--e469--5668--9a53--5e8f92025307']}})  2026-02-08 06:09:56.016628 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-08 06:09:56.016692 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0b6d2541-fe07-44e8-aadf-a529695f9c1d', 'scsi-SQEMU_QEMU_HARDDISK_0b6d2541-fe07-44e8-aadf-a529695f9c1d'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '0b6d2541', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0b6d2541-fe07-44e8-aadf-a529695f9c1d-part16', 'scsi-SQEMU_QEMU_HARDDISK_0b6d2541-fe07-44e8-aadf-a529695f9c1d-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0b6d2541-fe07-44e8-aadf-a529695f9c1d-part14', 'scsi-SQEMU_QEMU_HARDDISK_0b6d2541-fe07-44e8-aadf-a529695f9c1d-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0b6d2541-fe07-44e8-aadf-a529695f9c1d-part15', 'scsi-SQEMU_QEMU_HARDDISK_0b6d2541-fe07-44e8-aadf-a529695f9c1d-part15'], 'uuids': ['5C78-612A'], 
'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0b6d2541-fe07-44e8-aadf-a529695f9c1d-part1', 'scsi-SQEMU_QEMU_HARDDISK_0b6d2541-fe07-44e8-aadf-a529695f9c1d-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-02-08 06:09:56.016741 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-08 06:09:56.016763 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-08 06:09:56.016783 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-TrIfEw-7ihz-B97c-b6a1-fBLo-GV7F-MIqFCx', 'dm-uuid-CRYPT-LUKS2-1cd382bbe631457f8e237bd2a48df188-TrIfEw-7ihz-B97c-b6a1-fBLo-GV7F-MIqFCx'], 'uuids': [], 'labels': [], 'masters': 
[]}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-02-08 06:09:56.016804 | orchestrator | skipping: [testbed-node-5] 2026-02-08 06:09:56.016888 | orchestrator | 2026-02-08 06:09:56.016912 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-02-08 06:09:56.016924 | orchestrator | Sunday 08 February 2026 06:09:55 +0000 (0:00:00.362) 0:18:53.849 ******* 2026-02-08 06:09:56.016937 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-08 06:09:56.016958 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--b3e05e81--e469--5668--9a53--5e8f92025307-osd--block--b3e05e81--e469--5668--9a53--5e8f92025307', 'dm-uuid-LVM-aSc4wxc22lxsBl3bsZYV01tD6GZC6hnhTrIfEw7ihzB97cb6a1fBLoGV7FMIqFCx'], 'uuids': ['1cd382bb-e631-457f-8e23-7bd2a48df188'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '88e353e1', 'removable': '0', 'support_discard': 
'4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['TrIfEw-7ihz-B97c-b6a1-fBLo-GV7F-MIqFCx']}}, 'ansible_loop_var': 'item'})  2026-02-08 06:09:56.016971 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_380fccde-fc16-4afd-8581-e221e230c62f', 'scsi-SQEMU_QEMU_HARDDISK_380fccde-fc16-4afd-8581-e221e230c62f'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '380fccde', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-08 06:09:56.016995 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-rTZdUO-b7Sr-jZr2-VH77-tbu4-uLLV-wifIFG', 'scsi-0QEMU_QEMU_HARDDISK_fd096023-3e18-4205-a743-fc49c7d9ed02', 'scsi-SQEMU_QEMU_HARDDISK_fd096023-3e18-4205-a743-fc49c7d9ed02'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'fd096023', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--7ad89cb8--326d--5a7d--8045--6e04c12be05a-osd--block--7ad89cb8--326d--5a7d--8045--6e04c12be05a']}}, 'ansible_loop_var': 'item'})  2026-02-08 06:09:56.214984 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-08 06:09:56.215107 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-08 06:09:56.215124 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-08-02-32-42-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 
'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-08 06:09:56.215151 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-08 06:09:56.215163 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-UV2PuY-Xg8S-GZaH-izGQ-3g7w-DUil-CwJsOM', 'dm-uuid-CRYPT-LUKS2-acd8aeb3a91948899ba0cb5b1d4bdfb1-UV2PuY-Xg8S-GZaH-izGQ-3g7w-DUil-CwJsOM'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-08 06:09:56.215175 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 
'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-08 06:09:56.215207 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--7ad89cb8--326d--5a7d--8045--6e04c12be05a-osd--block--7ad89cb8--326d--5a7d--8045--6e04c12be05a', 'dm-uuid-LVM-rcT9E5PUfbc9LNMW28fCAUBBadlLkYxWUV2PuYXg8SGZaHizGQ3g7wDUilCwJsOM'], 'uuids': ['acd8aeb3-a919-4889-9ba0-cb5b1d4bdfb1'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'fd096023', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['UV2PuY-Xg8S-GZaH-izGQ-3g7w-DUil-CwJsOM']}}, 'ansible_loop_var': 'item'})  2026-02-08 06:09:56.215230 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-0tVmcc-zy2m-2uFM-aZrn-CnbJ-B058-J5TU15', 'scsi-0QEMU_QEMU_HARDDISK_88e353e1-d5f5-455b-9174-972f0fde258a', 'scsi-SQEMU_QEMU_HARDDISK_88e353e1-d5f5-455b-9174-972f0fde258a'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '88e353e1', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 
'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--b3e05e81--e469--5668--9a53--5e8f92025307-osd--block--b3e05e81--e469--5668--9a53--5e8f92025307']}}, 'ansible_loop_var': 'item'})  2026-02-08 06:09:56.215249 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-08 06:09:56.215271 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0b6d2541-fe07-44e8-aadf-a529695f9c1d', 'scsi-SQEMU_QEMU_HARDDISK_0b6d2541-fe07-44e8-aadf-a529695f9c1d'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '0b6d2541', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0b6d2541-fe07-44e8-aadf-a529695f9c1d-part16', 'scsi-SQEMU_QEMU_HARDDISK_0b6d2541-fe07-44e8-aadf-a529695f9c1d-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_0b6d2541-fe07-44e8-aadf-a529695f9c1d-part14', 'scsi-SQEMU_QEMU_HARDDISK_0b6d2541-fe07-44e8-aadf-a529695f9c1d-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0b6d2541-fe07-44e8-aadf-a529695f9c1d-part15', 'scsi-SQEMU_QEMU_HARDDISK_0b6d2541-fe07-44e8-aadf-a529695f9c1d-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0b6d2541-fe07-44e8-aadf-a529695f9c1d-part1', 'scsi-SQEMU_QEMU_HARDDISK_0b6d2541-fe07-44e8-aadf-a529695f9c1d-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-08 06:10:05.172147 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-08 06:10:05.172286 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-08 06:10:05.172324 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-TrIfEw-7ihz-B97c-b6a1-fBLo-GV7F-MIqFCx', 'dm-uuid-CRYPT-LUKS2-1cd382bbe631457f8e237bd2a48df188-TrIfEw-7ihz-B97c-b6a1-fBLo-GV7F-MIqFCx'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 
'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-08 06:10:05.172340 | orchestrator | skipping: [testbed-node-5] 2026-02-08 06:10:05.172354 | orchestrator | 2026-02-08 06:10:05.172367 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-02-08 06:10:05.172380 | orchestrator | Sunday 08 February 2026 06:09:56 +0000 (0:00:00.409) 0:18:54.259 ******* 2026-02-08 06:10:05.172391 | orchestrator | ok: [testbed-node-5] 2026-02-08 06:10:05.172402 | orchestrator | 2026-02-08 06:10:05.172414 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2026-02-08 06:10:05.172425 | orchestrator | Sunday 08 February 2026 06:09:56 +0000 (0:00:00.498) 0:18:54.757 ******* 2026-02-08 06:10:05.172436 | orchestrator | ok: [testbed-node-5] 2026-02-08 06:10:05.172447 | orchestrator | 2026-02-08 06:10:05.172457 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-02-08 06:10:05.172468 | orchestrator | Sunday 08 February 2026 06:09:56 +0000 (0:00:00.139) 0:18:54.896 ******* 2026-02-08 06:10:05.172479 | orchestrator | ok: [testbed-node-5] 2026-02-08 06:10:05.172490 | orchestrator | 2026-02-08 06:10:05.172501 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-02-08 06:10:05.172512 | orchestrator | Sunday 08 February 2026 06:09:57 +0000 (0:00:00.471) 0:18:55.367 ******* 2026-02-08 06:10:05.172523 | orchestrator | skipping: [testbed-node-5] 2026-02-08 06:10:05.172534 | orchestrator | 2026-02-08 06:10:05.172545 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-02-08 06:10:05.172556 | orchestrator | Sunday 08 February 2026 06:09:57 +0000 (0:00:00.140) 0:18:55.508 ******* 2026-02-08 06:10:05.172590 | orchestrator | skipping: [testbed-node-5] 2026-02-08 
06:10:05.172602 | orchestrator | 2026-02-08 06:10:05.172613 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-02-08 06:10:05.172623 | orchestrator | Sunday 08 February 2026 06:09:57 +0000 (0:00:00.233) 0:18:55.742 ******* 2026-02-08 06:10:05.172634 | orchestrator | skipping: [testbed-node-5] 2026-02-08 06:10:05.172645 | orchestrator | 2026-02-08 06:10:05.172656 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-02-08 06:10:05.172667 | orchestrator | Sunday 08 February 2026 06:09:57 +0000 (0:00:00.147) 0:18:55.889 ******* 2026-02-08 06:10:05.172681 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0) 2026-02-08 06:10:05.172694 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1) 2026-02-08 06:10:05.172707 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2) 2026-02-08 06:10:05.172720 | orchestrator | 2026-02-08 06:10:05.172733 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-02-08 06:10:05.172747 | orchestrator | Sunday 08 February 2026 06:09:58 +0000 (0:00:01.033) 0:18:56.923 ******* 2026-02-08 06:10:05.172760 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2026-02-08 06:10:05.172773 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2026-02-08 06:10:05.172786 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2026-02-08 06:10:05.172798 | orchestrator | skipping: [testbed-node-5] 2026-02-08 06:10:05.172843 | orchestrator | 2026-02-08 06:10:05.172862 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-02-08 06:10:05.172874 | orchestrator | Sunday 08 February 2026 06:09:59 +0000 (0:00:00.194) 0:18:57.118 ******* 2026-02-08 06:10:05.172903 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-5 2026-02-08 06:10:05.172915 | 
orchestrator | 2026-02-08 06:10:05.172927 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-02-08 06:10:05.172940 | orchestrator | Sunday 08 February 2026 06:09:59 +0000 (0:00:00.536) 0:18:57.654 ******* 2026-02-08 06:10:05.172951 | orchestrator | skipping: [testbed-node-5] 2026-02-08 06:10:05.172963 | orchestrator | 2026-02-08 06:10:05.172974 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-02-08 06:10:05.172985 | orchestrator | Sunday 08 February 2026 06:09:59 +0000 (0:00:00.167) 0:18:57.822 ******* 2026-02-08 06:10:05.172996 | orchestrator | skipping: [testbed-node-5] 2026-02-08 06:10:05.173007 | orchestrator | 2026-02-08 06:10:05.173018 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-02-08 06:10:05.173029 | orchestrator | Sunday 08 February 2026 06:09:59 +0000 (0:00:00.163) 0:18:57.985 ******* 2026-02-08 06:10:05.173040 | orchestrator | skipping: [testbed-node-5] 2026-02-08 06:10:05.173051 | orchestrator | 2026-02-08 06:10:05.173062 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-02-08 06:10:05.173073 | orchestrator | Sunday 08 February 2026 06:10:00 +0000 (0:00:00.142) 0:18:58.128 ******* 2026-02-08 06:10:05.173084 | orchestrator | ok: [testbed-node-5] 2026-02-08 06:10:05.173095 | orchestrator | 2026-02-08 06:10:05.173106 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-02-08 06:10:05.173117 | orchestrator | Sunday 08 February 2026 06:10:00 +0000 (0:00:00.260) 0:18:58.389 ******* 2026-02-08 06:10:05.173128 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2026-02-08 06:10:05.173139 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2026-02-08 06:10:05.173150 | orchestrator | skipping: [testbed-node-5] 
=> (item=testbed-node-5)  2026-02-08 06:10:05.173161 | orchestrator | skipping: [testbed-node-5] 2026-02-08 06:10:05.173172 | orchestrator | 2026-02-08 06:10:05.173183 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-02-08 06:10:05.173201 | orchestrator | Sunday 08 February 2026 06:10:00 +0000 (0:00:00.466) 0:18:58.856 ******* 2026-02-08 06:10:05.173212 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2026-02-08 06:10:05.173232 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2026-02-08 06:10:05.173243 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2026-02-08 06:10:05.173254 | orchestrator | skipping: [testbed-node-5] 2026-02-08 06:10:05.173265 | orchestrator | 2026-02-08 06:10:05.173276 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-02-08 06:10:05.173287 | orchestrator | Sunday 08 February 2026 06:10:01 +0000 (0:00:00.486) 0:18:59.342 ******* 2026-02-08 06:10:05.173298 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2026-02-08 06:10:05.173309 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2026-02-08 06:10:05.173320 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2026-02-08 06:10:05.173331 | orchestrator | skipping: [testbed-node-5] 2026-02-08 06:10:05.173342 | orchestrator | 2026-02-08 06:10:05.173353 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-02-08 06:10:05.173365 | orchestrator | Sunday 08 February 2026 06:10:01 +0000 (0:00:00.398) 0:18:59.741 ******* 2026-02-08 06:10:05.173375 | orchestrator | ok: [testbed-node-5] 2026-02-08 06:10:05.173386 | orchestrator | 2026-02-08 06:10:05.173398 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-02-08 06:10:05.173409 | orchestrator | Sunday 08 February 2026 06:10:01 +0000 
(0:00:00.168) 0:18:59.910 ******* 2026-02-08 06:10:05.173419 | orchestrator | ok: [testbed-node-5] => (item=0) 2026-02-08 06:10:05.173431 | orchestrator | 2026-02-08 06:10:05.173442 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-02-08 06:10:05.173453 | orchestrator | Sunday 08 February 2026 06:10:02 +0000 (0:00:00.326) 0:19:00.237 ******* 2026-02-08 06:10:05.173464 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-08 06:10:05.173475 | orchestrator | ok: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-08 06:10:05.173486 | orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-08 06:10:05.173498 | orchestrator | ok: [testbed-node-5 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-02-08 06:10:05.173517 | orchestrator | ok: [testbed-node-5 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-02-08 06:10:05.173533 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-5) 2026-02-08 06:10:05.173552 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-02-08 06:10:05.173572 | orchestrator | 2026-02-08 06:10:05.173591 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-02-08 06:10:05.173604 | orchestrator | Sunday 08 February 2026 06:10:03 +0000 (0:00:01.155) 0:19:01.393 ******* 2026-02-08 06:10:05.173615 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-08 06:10:05.173626 | orchestrator | ok: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-08 06:10:05.173637 | orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-08 06:10:05.173648 | orchestrator | ok: [testbed-node-5 -> testbed-node-3(192.168.16.13)] => 
(item=testbed-node-3) 2026-02-08 06:10:05.173658 | orchestrator | ok: [testbed-node-5 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-02-08 06:10:05.173669 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-5) 2026-02-08 06:10:05.173680 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-02-08 06:10:05.173691 | orchestrator | 2026-02-08 06:10:05.173710 | orchestrator | TASK [Get osd numbers - non container] ***************************************** 2026-02-08 06:10:19.934758 | orchestrator | Sunday 08 February 2026 06:10:05 +0000 (0:00:01.812) 0:19:03.206 ******* 2026-02-08 06:10:19.934975 | orchestrator | ok: [testbed-node-5] 2026-02-08 06:10:19.934999 | orchestrator | 2026-02-08 06:10:19.935013 | orchestrator | TASK [Set num_osds] ************************************************************ 2026-02-08 06:10:19.935025 | orchestrator | Sunday 08 February 2026 06:10:06 +0000 (0:00:00.864) 0:19:04.070 ******* 2026-02-08 06:10:19.935060 | orchestrator | ok: [testbed-node-5] 2026-02-08 06:10:19.935072 | orchestrator | 2026-02-08 06:10:19.935083 | orchestrator | TASK [Set_fact container_exec_cmd_osd] ***************************************** 2026-02-08 06:10:19.935094 | orchestrator | Sunday 08 February 2026 06:10:06 +0000 (0:00:00.148) 0:19:04.219 ******* 2026-02-08 06:10:19.935105 | orchestrator | ok: [testbed-node-5] 2026-02-08 06:10:19.935115 | orchestrator | 2026-02-08 06:10:19.935126 | orchestrator | TASK [Stop ceph osd] *********************************************************** 2026-02-08 06:10:19.935137 | orchestrator | Sunday 08 February 2026 06:10:06 +0000 (0:00:00.258) 0:19:04.477 ******* 2026-02-08 06:10:19.935148 | orchestrator | changed: [testbed-node-5] => (item=2) 2026-02-08 06:10:19.935161 | orchestrator | changed: [testbed-node-5] => (item=4) 2026-02-08 06:10:19.935171 | orchestrator | 2026-02-08 06:10:19.935182 | orchestrator | TASK [ceph-handler : Include 
check_running_cluster.yml] ************************ 2026-02-08 06:10:19.935193 | orchestrator | Sunday 08 February 2026 06:10:09 +0000 (0:00:03.085) 0:19:07.563 ******* 2026-02-08 06:10:19.935204 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-5 2026-02-08 06:10:19.935215 | orchestrator | 2026-02-08 06:10:19.935227 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-02-08 06:10:19.935238 | orchestrator | Sunday 08 February 2026 06:10:09 +0000 (0:00:00.162) 0:19:07.725 ******* 2026-02-08 06:10:19.935249 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-5 2026-02-08 06:10:19.935260 | orchestrator | 2026-02-08 06:10:19.935270 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-02-08 06:10:19.935295 | orchestrator | Sunday 08 February 2026 06:10:09 +0000 (0:00:00.197) 0:19:07.923 ******* 2026-02-08 06:10:19.935309 | orchestrator | skipping: [testbed-node-5] 2026-02-08 06:10:19.935322 | orchestrator | 2026-02-08 06:10:19.935335 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-02-08 06:10:19.935347 | orchestrator | Sunday 08 February 2026 06:10:09 +0000 (0:00:00.120) 0:19:08.044 ******* 2026-02-08 06:10:19.935360 | orchestrator | ok: [testbed-node-5] 2026-02-08 06:10:19.935373 | orchestrator | 2026-02-08 06:10:19.935385 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-02-08 06:10:19.935398 | orchestrator | Sunday 08 February 2026 06:10:10 +0000 (0:00:00.493) 0:19:08.537 ******* 2026-02-08 06:10:19.935411 | orchestrator | ok: [testbed-node-5] 2026-02-08 06:10:19.935423 | orchestrator | 2026-02-08 06:10:19.935436 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-02-08 06:10:19.935450 | orchestrator | 
Sunday 08 February 2026 06:10:11 +0000 (0:00:00.516) 0:19:09.054 ******* 2026-02-08 06:10:19.935462 | orchestrator | ok: [testbed-node-5] 2026-02-08 06:10:19.935473 | orchestrator | 2026-02-08 06:10:19.935484 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-02-08 06:10:19.935494 | orchestrator | Sunday 08 February 2026 06:10:11 +0000 (0:00:00.529) 0:19:09.583 ******* 2026-02-08 06:10:19.935505 | orchestrator | skipping: [testbed-node-5] 2026-02-08 06:10:19.935516 | orchestrator | 2026-02-08 06:10:19.935526 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-02-08 06:10:19.935542 | orchestrator | Sunday 08 February 2026 06:10:11 +0000 (0:00:00.259) 0:19:09.843 ******* 2026-02-08 06:10:19.935561 | orchestrator | skipping: [testbed-node-5] 2026-02-08 06:10:19.935579 | orchestrator | 2026-02-08 06:10:19.935598 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-02-08 06:10:19.935618 | orchestrator | Sunday 08 February 2026 06:10:12 +0000 (0:00:00.439) 0:19:10.282 ******* 2026-02-08 06:10:19.935637 | orchestrator | skipping: [testbed-node-5] 2026-02-08 06:10:19.935656 | orchestrator | 2026-02-08 06:10:19.935671 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-02-08 06:10:19.935682 | orchestrator | Sunday 08 February 2026 06:10:12 +0000 (0:00:00.145) 0:19:10.428 ******* 2026-02-08 06:10:19.935693 | orchestrator | ok: [testbed-node-5] 2026-02-08 06:10:19.935712 | orchestrator | 2026-02-08 06:10:19.935723 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-02-08 06:10:19.935734 | orchestrator | Sunday 08 February 2026 06:10:12 +0000 (0:00:00.518) 0:19:10.946 ******* 2026-02-08 06:10:19.935745 | orchestrator | ok: [testbed-node-5] 2026-02-08 06:10:19.935756 | orchestrator | 2026-02-08 06:10:19.935767 | orchestrator | 
TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-02-08 06:10:19.935777 | orchestrator | Sunday 08 February 2026 06:10:13 +0000 (0:00:00.525) 0:19:11.472 ******* 2026-02-08 06:10:19.935788 | orchestrator | skipping: [testbed-node-5] 2026-02-08 06:10:19.935799 | orchestrator | 2026-02-08 06:10:19.935810 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-02-08 06:10:19.935861 | orchestrator | Sunday 08 February 2026 06:10:13 +0000 (0:00:00.134) 0:19:11.607 ******* 2026-02-08 06:10:19.935873 | orchestrator | skipping: [testbed-node-5] 2026-02-08 06:10:19.935884 | orchestrator | 2026-02-08 06:10:19.935895 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-02-08 06:10:19.935906 | orchestrator | Sunday 08 February 2026 06:10:13 +0000 (0:00:00.144) 0:19:11.752 ******* 2026-02-08 06:10:19.935916 | orchestrator | ok: [testbed-node-5] 2026-02-08 06:10:19.935928 | orchestrator | 2026-02-08 06:10:19.935938 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-02-08 06:10:19.935949 | orchestrator | Sunday 08 February 2026 06:10:13 +0000 (0:00:00.171) 0:19:11.923 ******* 2026-02-08 06:10:19.935960 | orchestrator | ok: [testbed-node-5] 2026-02-08 06:10:19.935971 | orchestrator | 2026-02-08 06:10:19.935982 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-02-08 06:10:19.935993 | orchestrator | Sunday 08 February 2026 06:10:14 +0000 (0:00:00.159) 0:19:12.083 ******* 2026-02-08 06:10:19.936004 | orchestrator | ok: [testbed-node-5] 2026-02-08 06:10:19.936015 | orchestrator | 2026-02-08 06:10:19.936045 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-02-08 06:10:19.936063 | orchestrator | Sunday 08 February 2026 06:10:14 +0000 (0:00:00.172) 0:19:12.255 ******* 2026-02-08 06:10:19.936081 | 
orchestrator | skipping: [testbed-node-5] 2026-02-08 06:10:19.936096 | orchestrator | 2026-02-08 06:10:19.936116 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-02-08 06:10:19.936134 | orchestrator | Sunday 08 February 2026 06:10:14 +0000 (0:00:00.148) 0:19:12.404 ******* 2026-02-08 06:10:19.936152 | orchestrator | skipping: [testbed-node-5] 2026-02-08 06:10:19.936172 | orchestrator | 2026-02-08 06:10:19.936190 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-02-08 06:10:19.936202 | orchestrator | Sunday 08 February 2026 06:10:14 +0000 (0:00:00.140) 0:19:12.545 ******* 2026-02-08 06:10:19.936213 | orchestrator | skipping: [testbed-node-5] 2026-02-08 06:10:19.936224 | orchestrator | 2026-02-08 06:10:19.936234 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-02-08 06:10:19.936245 | orchestrator | Sunday 08 February 2026 06:10:14 +0000 (0:00:00.148) 0:19:12.694 ******* 2026-02-08 06:10:19.936256 | orchestrator | ok: [testbed-node-5] 2026-02-08 06:10:19.936266 | orchestrator | 2026-02-08 06:10:19.936277 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-02-08 06:10:19.936288 | orchestrator | Sunday 08 February 2026 06:10:14 +0000 (0:00:00.145) 0:19:12.840 ******* 2026-02-08 06:10:19.936298 | orchestrator | ok: [testbed-node-5] 2026-02-08 06:10:19.936309 | orchestrator | 2026-02-08 06:10:19.936320 | orchestrator | TASK [ceph-common : Include configure_repository.yml] ************************** 2026-02-08 06:10:19.936330 | orchestrator | Sunday 08 February 2026 06:10:15 +0000 (0:00:00.583) 0:19:13.423 ******* 2026-02-08 06:10:19.936341 | orchestrator | skipping: [testbed-node-5] 2026-02-08 06:10:19.936351 | orchestrator | 2026-02-08 06:10:19.936362 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] ************** 2026-02-08 
06:10:19.936373 | orchestrator | Sunday 08 February 2026 06:10:15 +0000 (0:00:00.155) 0:19:13.579 ******* 2026-02-08 06:10:19.936384 | orchestrator | skipping: [testbed-node-5] 2026-02-08 06:10:19.936403 | orchestrator | 2026-02-08 06:10:19.936421 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] **************** 2026-02-08 06:10:19.936432 | orchestrator | Sunday 08 February 2026 06:10:15 +0000 (0:00:00.132) 0:19:13.712 ******* 2026-02-08 06:10:19.936443 | orchestrator | skipping: [testbed-node-5] 2026-02-08 06:10:19.936454 | orchestrator | 2026-02-08 06:10:19.936464 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ******************** 2026-02-08 06:10:19.936475 | orchestrator | Sunday 08 February 2026 06:10:15 +0000 (0:00:00.147) 0:19:13.859 ******* 2026-02-08 06:10:19.936485 | orchestrator | skipping: [testbed-node-5] 2026-02-08 06:10:19.936496 | orchestrator | 2026-02-08 06:10:19.936507 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] *************** 2026-02-08 06:10:19.936517 | orchestrator | Sunday 08 February 2026 06:10:15 +0000 (0:00:00.136) 0:19:13.996 ******* 2026-02-08 06:10:19.936528 | orchestrator | skipping: [testbed-node-5] 2026-02-08 06:10:19.936539 | orchestrator | 2026-02-08 06:10:19.936549 | orchestrator | TASK [ceph-common : Get ceph version] ****************************************** 2026-02-08 06:10:19.936560 | orchestrator | Sunday 08 February 2026 06:10:16 +0000 (0:00:00.148) 0:19:14.144 ******* 2026-02-08 06:10:19.936571 | orchestrator | skipping: [testbed-node-5] 2026-02-08 06:10:19.936585 | orchestrator | 2026-02-08 06:10:19.936604 | orchestrator | TASK [ceph-common : Set_fact ceph_version] ************************************* 2026-02-08 06:10:19.936622 | orchestrator | Sunday 08 February 2026 06:10:16 +0000 (0:00:00.144) 0:19:14.289 ******* 2026-02-08 06:10:19.936642 | orchestrator | skipping: [testbed-node-5] 2026-02-08 06:10:19.936662 | 
orchestrator | 2026-02-08 06:10:19.936682 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] *** 2026-02-08 06:10:19.936701 | orchestrator | Sunday 08 February 2026 06:10:16 +0000 (0:00:00.118) 0:19:14.408 ******* 2026-02-08 06:10:19.936714 | orchestrator | skipping: [testbed-node-5] 2026-02-08 06:10:19.936724 | orchestrator | 2026-02-08 06:10:19.936735 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] ************************* 2026-02-08 06:10:19.936746 | orchestrator | Sunday 08 February 2026 06:10:16 +0000 (0:00:00.137) 0:19:14.546 ******* 2026-02-08 06:10:19.936757 | orchestrator | skipping: [testbed-node-5] 2026-02-08 06:10:19.936768 | orchestrator | 2026-02-08 06:10:19.936779 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************ 2026-02-08 06:10:19.936790 | orchestrator | Sunday 08 February 2026 06:10:16 +0000 (0:00:00.134) 0:19:14.681 ******* 2026-02-08 06:10:19.936801 | orchestrator | skipping: [testbed-node-5] 2026-02-08 06:10:19.936811 | orchestrator | 2026-02-08 06:10:19.936873 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ******************** 2026-02-08 06:10:19.936885 | orchestrator | Sunday 08 February 2026 06:10:16 +0000 (0:00:00.123) 0:19:14.804 ******* 2026-02-08 06:10:19.936896 | orchestrator | skipping: [testbed-node-5] 2026-02-08 06:10:19.936907 | orchestrator | 2026-02-08 06:10:19.936918 | orchestrator | TASK [ceph-common : Include selinux.yml] *************************************** 2026-02-08 06:10:19.936929 | orchestrator | Sunday 08 February 2026 06:10:16 +0000 (0:00:00.132) 0:19:14.936 ******* 2026-02-08 06:10:19.936940 | orchestrator | skipping: [testbed-node-5] 2026-02-08 06:10:19.936951 | orchestrator | 2026-02-08 06:10:19.936961 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] *************** 2026-02-08 06:10:19.936973 | orchestrator | Sunday 08 
February 2026 06:10:17 +0000 (0:00:00.556) 0:19:15.492 ******* 2026-02-08 06:10:19.936983 | orchestrator | ok: [testbed-node-5] 2026-02-08 06:10:19.936994 | orchestrator | 2026-02-08 06:10:19.937005 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ****************************** 2026-02-08 06:10:19.937016 | orchestrator | Sunday 08 February 2026 06:10:18 +0000 (0:00:00.957) 0:19:16.450 ******* 2026-02-08 06:10:19.937027 | orchestrator | ok: [testbed-node-5] 2026-02-08 06:10:19.937038 | orchestrator | 2026-02-08 06:10:19.937049 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] *********************** 2026-02-08 06:10:19.937060 | orchestrator | Sunday 08 February 2026 06:10:19 +0000 (0:00:01.301) 0:19:17.752 ******* 2026-02-08 06:10:19.937071 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-5 2026-02-08 06:10:19.937090 | orchestrator | 2026-02-08 06:10:19.937110 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************ 2026-02-08 06:10:35.639310 | orchestrator | Sunday 08 February 2026 06:10:19 +0000 (0:00:00.219) 0:19:17.972 ******* 2026-02-08 06:10:35.639414 | orchestrator | skipping: [testbed-node-5] 2026-02-08 06:10:35.639429 | orchestrator | 2026-02-08 06:10:35.639441 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] **************** 2026-02-08 06:10:35.639452 | orchestrator | Sunday 08 February 2026 06:10:20 +0000 (0:00:00.141) 0:19:18.114 ******* 2026-02-08 06:10:35.639462 | orchestrator | skipping: [testbed-node-5] 2026-02-08 06:10:35.639472 | orchestrator | 2026-02-08 06:10:35.639483 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] ************************** 2026-02-08 06:10:35.639493 | orchestrator | Sunday 08 February 2026 06:10:20 +0000 (0:00:00.148) 0:19:18.262 ******* 2026-02-08 06:10:35.639503 | orchestrator | ok: [testbed-node-5] => 
(item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-02-08 06:10:35.639513 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-02-08 06:10:35.639524 | orchestrator | 2026-02-08 06:10:35.639534 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ******************** 2026-02-08 06:10:35.639543 | orchestrator | Sunday 08 February 2026 06:10:21 +0000 (0:00:00.868) 0:19:19.131 ******* 2026-02-08 06:10:35.639553 | orchestrator | ok: [testbed-node-5] 2026-02-08 06:10:35.639564 | orchestrator | 2026-02-08 06:10:35.639574 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************ 2026-02-08 06:10:35.639584 | orchestrator | Sunday 08 February 2026 06:10:21 +0000 (0:00:00.498) 0:19:19.629 ******* 2026-02-08 06:10:35.639593 | orchestrator | skipping: [testbed-node-5] 2026-02-08 06:10:35.639603 | orchestrator | 2026-02-08 06:10:35.639618 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ******************** 2026-02-08 06:10:35.639635 | orchestrator | Sunday 08 February 2026 06:10:21 +0000 (0:00:00.158) 0:19:19.788 ******* 2026-02-08 06:10:35.639651 | orchestrator | skipping: [testbed-node-5] 2026-02-08 06:10:35.639668 | orchestrator | 2026-02-08 06:10:35.639685 | orchestrator | TASK [ceph-container-common : Include registry.yml] **************************** 2026-02-08 06:10:35.639723 | orchestrator | Sunday 08 February 2026 06:10:21 +0000 (0:00:00.147) 0:19:19.936 ******* 2026-02-08 06:10:35.639742 | orchestrator | skipping: [testbed-node-5] 2026-02-08 06:10:35.639761 | orchestrator | 2026-02-08 06:10:35.639772 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] ************************* 2026-02-08 06:10:35.639782 | orchestrator | Sunday 08 February 2026 06:10:22 +0000 (0:00:00.128) 0:19:20.064 ******* 2026-02-08 06:10:35.639791 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for 
testbed-node-5 2026-02-08 06:10:35.639802 | orchestrator | 2026-02-08 06:10:35.639811 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ******************** 2026-02-08 06:10:35.639863 | orchestrator | Sunday 08 February 2026 06:10:22 +0000 (0:00:00.529) 0:19:20.594 ******* 2026-02-08 06:10:35.639875 | orchestrator | ok: [testbed-node-5] 2026-02-08 06:10:35.639887 | orchestrator | 2026-02-08 06:10:35.639898 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] *** 2026-02-08 06:10:35.639910 | orchestrator | Sunday 08 February 2026 06:10:23 +0000 (0:00:00.706) 0:19:21.300 ******* 2026-02-08 06:10:35.639922 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-02-08 06:10:35.639933 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/prometheus:v2.7.2)  2026-02-08 06:10:35.639944 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/grafana/grafana:6.7.4)  2026-02-08 06:10:35.639956 | orchestrator | skipping: [testbed-node-5] 2026-02-08 06:10:35.639968 | orchestrator | 2026-02-08 06:10:35.639980 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] *********** 2026-02-08 06:10:35.639991 | orchestrator | Sunday 08 February 2026 06:10:23 +0000 (0:00:00.154) 0:19:21.455 ******* 2026-02-08 06:10:35.640024 | orchestrator | skipping: [testbed-node-5] 2026-02-08 06:10:35.640035 | orchestrator | 2026-02-08 06:10:35.640044 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] ********************* 2026-02-08 06:10:35.640054 | orchestrator | Sunday 08 February 2026 06:10:23 +0000 (0:00:00.126) 0:19:21.582 ******* 2026-02-08 06:10:35.640064 | orchestrator | skipping: [testbed-node-5] 2026-02-08 06:10:35.640073 | orchestrator | 2026-02-08 06:10:35.640083 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************ 2026-02-08 06:10:35.640093 | 
orchestrator | Sunday 08 February 2026 06:10:23 +0000 (0:00:00.176) 0:19:21.759 ******* 2026-02-08 06:10:35.640102 | orchestrator | skipping: [testbed-node-5] 2026-02-08 06:10:35.640112 | orchestrator | 2026-02-08 06:10:35.640122 | orchestrator | TASK [ceph-container-common : Load ceph dev image] ***************************** 2026-02-08 06:10:35.640131 | orchestrator | Sunday 08 February 2026 06:10:23 +0000 (0:00:00.155) 0:19:21.915 ******* 2026-02-08 06:10:35.640141 | orchestrator | skipping: [testbed-node-5] 2026-02-08 06:10:35.640151 | orchestrator | 2026-02-08 06:10:35.640161 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ****************** 2026-02-08 06:10:35.640171 | orchestrator | Sunday 08 February 2026 06:10:24 +0000 (0:00:00.158) 0:19:22.073 ******* 2026-02-08 06:10:35.640180 | orchestrator | skipping: [testbed-node-5] 2026-02-08 06:10:35.640190 | orchestrator | 2026-02-08 06:10:35.640200 | orchestrator | TASK [ceph-container-common : Get ceph version] ******************************** 2026-02-08 06:10:35.640209 | orchestrator | Sunday 08 February 2026 06:10:24 +0000 (0:00:00.153) 0:19:22.227 ******* 2026-02-08 06:10:35.640219 | orchestrator | ok: [testbed-node-5] 2026-02-08 06:10:35.640229 | orchestrator | 2026-02-08 06:10:35.640238 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] *** 2026-02-08 06:10:35.640248 | orchestrator | Sunday 08 February 2026 06:10:25 +0000 (0:00:01.603) 0:19:23.830 ******* 2026-02-08 06:10:35.640258 | orchestrator | ok: [testbed-node-5] 2026-02-08 06:10:35.640267 | orchestrator | 2026-02-08 06:10:35.640277 | orchestrator | TASK [ceph-container-common : Include release.yml] ***************************** 2026-02-08 06:10:35.640287 | orchestrator | Sunday 08 February 2026 06:10:25 +0000 (0:00:00.132) 0:19:23.962 ******* 2026-02-08 06:10:35.640296 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-5 
2026-02-08 06:10:35.640306 | orchestrator | 2026-02-08 06:10:35.640333 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] ********************* 2026-02-08 06:10:35.640343 | orchestrator | Sunday 08 February 2026 06:10:26 +0000 (0:00:00.216) 0:19:24.179 ******* 2026-02-08 06:10:35.640353 | orchestrator | skipping: [testbed-node-5] 2026-02-08 06:10:35.640363 | orchestrator | 2026-02-08 06:10:35.640373 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ******************** 2026-02-08 06:10:35.640382 | orchestrator | Sunday 08 February 2026 06:10:26 +0000 (0:00:00.156) 0:19:24.335 ******* 2026-02-08 06:10:35.640392 | orchestrator | skipping: [testbed-node-5] 2026-02-08 06:10:35.640402 | orchestrator | 2026-02-08 06:10:35.640411 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ****************** 2026-02-08 06:10:35.640421 | orchestrator | Sunday 08 February 2026 06:10:26 +0000 (0:00:00.442) 0:19:24.778 ******* 2026-02-08 06:10:35.640431 | orchestrator | skipping: [testbed-node-5] 2026-02-08 06:10:35.640440 | orchestrator | 2026-02-08 06:10:35.640450 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] ********************* 2026-02-08 06:10:35.640459 | orchestrator | Sunday 08 February 2026 06:10:26 +0000 (0:00:00.147) 0:19:24.925 ******* 2026-02-08 06:10:35.640469 | orchestrator | skipping: [testbed-node-5] 2026-02-08 06:10:35.640479 | orchestrator | 2026-02-08 06:10:35.640488 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ****************** 2026-02-08 06:10:35.640498 | orchestrator | Sunday 08 February 2026 06:10:27 +0000 (0:00:00.167) 0:19:25.093 ******* 2026-02-08 06:10:35.640508 | orchestrator | skipping: [testbed-node-5] 2026-02-08 06:10:35.640518 | orchestrator | 2026-02-08 06:10:35.640527 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] ******************* 2026-02-08 06:10:35.640537 | orchestrator | 
Sunday 08 February 2026 06:10:27 +0000 (0:00:00.157) 0:19:25.250 ******* 2026-02-08 06:10:35.640553 | orchestrator | skipping: [testbed-node-5] 2026-02-08 06:10:35.640563 | orchestrator | 2026-02-08 06:10:35.640573 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] ******************* 2026-02-08 06:10:35.640582 | orchestrator | Sunday 08 February 2026 06:10:27 +0000 (0:00:00.159) 0:19:25.410 ******* 2026-02-08 06:10:35.640592 | orchestrator | skipping: [testbed-node-5] 2026-02-08 06:10:35.640602 | orchestrator | 2026-02-08 06:10:35.640617 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ******************** 2026-02-08 06:10:35.640627 | orchestrator | Sunday 08 February 2026 06:10:27 +0000 (0:00:00.146) 0:19:25.556 ******* 2026-02-08 06:10:35.640637 | orchestrator | skipping: [testbed-node-5] 2026-02-08 06:10:35.640647 | orchestrator | 2026-02-08 06:10:35.640656 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] ********************** 2026-02-08 06:10:35.640666 | orchestrator | Sunday 08 February 2026 06:10:27 +0000 (0:00:00.153) 0:19:25.710 ******* 2026-02-08 06:10:35.640676 | orchestrator | ok: [testbed-node-5] 2026-02-08 06:10:35.640685 | orchestrator | 2026-02-08 06:10:35.640695 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] ********************** 2026-02-08 06:10:35.640706 | orchestrator | Sunday 08 February 2026 06:10:27 +0000 (0:00:00.227) 0:19:25.938 ******* 2026-02-08 06:10:35.640724 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-5 2026-02-08 06:10:35.640760 | orchestrator | 2026-02-08 06:10:35.640780 | orchestrator | TASK [ceph-config : Create ceph initial directories] *************************** 2026-02-08 06:10:35.640796 | orchestrator | Sunday 08 February 2026 06:10:28 +0000 (0:00:00.202) 0:19:26.140 ******* 2026-02-08 06:10:35.640813 | orchestrator | ok: [testbed-node-5] => 
(item=/etc/ceph)
2026-02-08 06:10:35.640858 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/)
2026-02-08 06:10:35.640873 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/mon)
2026-02-08 06:10:35.640889 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/osd)
2026-02-08 06:10:35.640906 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/mds)
2026-02-08 06:10:35.640922 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/tmp)
2026-02-08 06:10:35.640937 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/crash)
2026-02-08 06:10:35.640953 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/radosgw)
2026-02-08 06:10:35.640969 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rgw)
2026-02-08 06:10:35.640986 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mgr)
2026-02-08 06:10:35.641002 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds)
2026-02-08 06:10:35.641019 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd)
2026-02-08 06:10:35.641035 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd)
2026-02-08 06:10:35.641046 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-02-08 06:10:35.641060 | orchestrator | ok: [testbed-node-5] => (item=/var/run/ceph)
2026-02-08 06:10:35.641077 | orchestrator | ok: [testbed-node-5] => (item=/var/log/ceph)
2026-02-08 06:10:35.641092 | orchestrator |
2026-02-08 06:10:35.641107 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************
2026-02-08 06:10:35.641123 | orchestrator | Sunday 08 February 2026 06:10:33 +0000 (0:00:05.538) 0:19:31.679 *******
2026-02-08 06:10:35.641138 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-5
2026-02-08 06:10:35.641153 | orchestrator |
2026-02-08 06:10:35.641170 | orchestrator | TASK [ceph-config : Create rados gateway instance directories] *****************
2026-02-08 06:10:35.641187 | orchestrator | Sunday 08 February 2026 06:10:34 +0000 (0:00:00.540) 0:19:32.220 *******
2026-02-08 06:10:35.641204 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-02-08 06:10:35.641224 | orchestrator |
2026-02-08 06:10:35.641243 | orchestrator | TASK [ceph-config : Generate environment file] *********************************
2026-02-08 06:10:35.641276 | orchestrator | Sunday 08 February 2026 06:10:34 +0000 (0:00:00.486) 0:19:32.706 *******
2026-02-08 06:10:35.641295 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-02-08 06:10:35.641314 | orchestrator |
2026-02-08 06:10:35.641347 | orchestrator | TASK [ceph-config : Reset num_osds] ********************************************
2026-02-08 06:10:54.965003 | orchestrator | Sunday 08 February 2026 06:10:35 +0000 (0:00:00.970) 0:19:33.677 *******
2026-02-08 06:10:54.965146 | orchestrator | skipping: [testbed-node-5]
2026-02-08 06:10:54.965176 | orchestrator |
2026-02-08 06:10:54.965199 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] *********************
2026-02-08 06:10:54.965219 | orchestrator | Sunday 08 February 2026 06:10:35 +0000 (0:00:00.169) 0:19:33.846 *******
2026-02-08 06:10:54.965240 | orchestrator | skipping: [testbed-node-5]
2026-02-08 06:10:54.965252 | orchestrator |
2026-02-08 06:10:54.965264 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ******************
2026-02-08 06:10:54.965275 | orchestrator | Sunday 08 February 2026 06:10:35 +0000 (0:00:00.146) 0:19:33.993 *******
2026-02-08 06:10:54.965286 | orchestrator | skipping: [testbed-node-5]
2026-02-08 06:10:54.965297 | orchestrator |
2026-02-08 06:10:54.965309 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] *********************************
2026-02-08 06:10:54.965320 | orchestrator | Sunday 08 February 2026 06:10:36 +0000 (0:00:00.137) 0:19:34.130 *******
2026-02-08 06:10:54.965331 | orchestrator | skipping: [testbed-node-5]
2026-02-08 06:10:54.965342 | orchestrator |
2026-02-08 06:10:54.965353 | orchestrator | TASK [ceph-config : Set_fact _devices] *****************************************
2026-02-08 06:10:54.965364 | orchestrator | Sunday 08 February 2026 06:10:36 +0000 (0:00:00.142) 0:19:34.272 *******
2026-02-08 06:10:54.965375 | orchestrator | skipping: [testbed-node-5]
2026-02-08 06:10:54.965386 | orchestrator |
2026-02-08 06:10:54.965397 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] ***
2026-02-08 06:10:54.965409 | orchestrator | Sunday 08 February 2026 06:10:36 +0000 (0:00:00.138) 0:19:34.411 *******
2026-02-08 06:10:54.965420 | orchestrator | skipping: [testbed-node-5]
2026-02-08 06:10:54.965431 | orchestrator |
2026-02-08 06:10:54.965442 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] ***
2026-02-08 06:10:54.965453 | orchestrator | Sunday 08 February 2026 06:10:36 +0000 (0:00:00.147) 0:19:34.559 *******
2026-02-08 06:10:54.965484 | orchestrator | skipping: [testbed-node-5]
2026-02-08 06:10:54.965497 | orchestrator |
2026-02-08 06:10:54.965509 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] ***
2026-02-08 06:10:54.965522 | orchestrator | Sunday 08 February 2026 06:10:36 +0000 (0:00:00.128) 0:19:34.687 *******
2026-02-08 06:10:54.965535 | orchestrator | skipping: [testbed-node-5]
2026-02-08 06:10:54.965551 | orchestrator |
2026-02-08 06:10:54.965571 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] ***
2026-02-08 06:10:54.965590 | orchestrator | Sunday 08 February 2026 06:10:36 +0000 (0:00:00.151) 0:19:34.839 *******
2026-02-08 06:10:54.965602 | orchestrator | skipping: [testbed-node-5]
2026-02-08 06:10:54.965615 | orchestrator |
2026-02-08 06:10:54.965627 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] *********************
2026-02-08 06:10:54.965641 | orchestrator | Sunday 08 February 2026 06:10:36 +0000 (0:00:00.132) 0:19:34.971 *******
2026-02-08 06:10:54.965655 | orchestrator | skipping: [testbed-node-5]
2026-02-08 06:10:54.965665 | orchestrator |
2026-02-08 06:10:54.965676 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] *******************************
2026-02-08 06:10:54.965687 | orchestrator | Sunday 08 February 2026 06:10:37 +0000 (0:00:00.144) 0:19:35.116 *******
2026-02-08 06:10:54.965698 | orchestrator | ok: [testbed-node-5]
2026-02-08 06:10:54.965710 | orchestrator |
2026-02-08 06:10:54.965721 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] **************
2026-02-08 06:10:54.965732 | orchestrator | Sunday 08 February 2026 06:10:37 +0000 (0:00:00.241) 0:19:35.357 *******
2026-02-08 06:10:54.965766 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)]
2026-02-08 06:10:54.965778 | orchestrator |
2026-02-08 06:10:54.965789 | orchestrator | TASK [ceph-config : Render rgw configs] ****************************************
2026-02-08 06:10:54.965800 | orchestrator | Sunday 08 February 2026 06:10:41 +0000 (0:00:04.147) 0:19:39.505 *******
2026-02-08 06:10:54.965811 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-02-08 06:10:54.965823 | orchestrator |
2026-02-08 06:10:54.965860 | orchestrator | TASK [ceph-config : Set config to cluster] *************************************
2026-02-08 06:10:54.965871 | orchestrator | Sunday 08 February 2026 06:10:41 +0000 (0:00:00.191) 0:19:39.696 *******
2026-02-08 06:10:54.965885 |
orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log'}])
2026-02-08 06:10:54.965899 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.15:8081'}])
2026-02-08 06:10:54.965912 | orchestrator |
2026-02-08 06:10:54.965923 | orchestrator | TASK [ceph-config : Set rgw configs to file] ***********************************
2026-02-08 06:10:54.965934 | orchestrator | Sunday 08 February 2026 06:10:48 +0000 (0:00:06.773) 0:19:46.470 *******
2026-02-08 06:10:54.965945 | orchestrator | skipping: [testbed-node-5]
2026-02-08 06:10:54.965956 | orchestrator |
2026-02-08 06:10:54.965967 | orchestrator | TASK [ceph-config : Create ceph conf directory] ********************************
2026-02-08 06:10:54.965977 | orchestrator | Sunday 08 February 2026 06:10:48 +0000 (0:00:00.142) 0:19:46.613 *******
2026-02-08 06:10:54.965988 | orchestrator | skipping: [testbed-node-5]
2026-02-08 06:10:54.965999 | orchestrator |
2026-02-08 06:10:54.966092 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-02-08 06:10:54.966106 | orchestrator | Sunday 08 February 2026 06:10:48 +0000 (0:00:00.139) 0:19:46.752 *******
2026-02-08 06:10:54.966117 | orchestrator | skipping: [testbed-node-5]
2026-02-08 06:10:54.966128 | orchestrator |
2026-02-08 06:10:54.966144 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-02-08 06:10:54.966167 | orchestrator | Sunday 08 February 2026 06:10:48 +0000 (0:00:00.165) 0:19:46.918 *******
2026-02-08 06:10:54.966178 | orchestrator | skipping: [testbed-node-5]
2026-02-08 06:10:54.966188 | orchestrator |
2026-02-08 06:10:54.966199 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-02-08 06:10:54.966210 | orchestrator | Sunday 08 February 2026 06:10:49 +0000 (0:00:00.181) 0:19:47.100 *******
2026-02-08 06:10:54.966221 | orchestrator | skipping: [testbed-node-5]
2026-02-08 06:10:54.966232 | orchestrator |
2026-02-08 06:10:54.966243 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-02-08 06:10:54.966254 | orchestrator | Sunday 08 February 2026 06:10:49 +0000 (0:00:00.163) 0:19:47.263 *******
2026-02-08 06:10:54.966265 | orchestrator | ok: [testbed-node-5]
2026-02-08 06:10:54.966276 | orchestrator |
2026-02-08 06:10:54.966288 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-02-08 06:10:54.966299 | orchestrator | Sunday 08 February 2026 06:10:49 +0000 (0:00:00.249) 0:19:47.513 *******
2026-02-08 06:10:54.966310 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)
2026-02-08 06:10:54.966321 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)
2026-02-08 06:10:54.966332 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)
2026-02-08 06:10:54.966343 | orchestrator | skipping: [testbed-node-5]
2026-02-08 06:10:54.966364 | orchestrator |
2026-02-08 06:10:54.966375 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-02-08 06:10:54.966386 | orchestrator | Sunday 08 February 2026 06:10:49 +0000 (0:00:00.407) 0:19:47.920 *******
2026-02-08 06:10:54.966404 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)
2026-02-08 06:10:54.966415 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)
2026-02-08 06:10:54.966426 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)
2026-02-08 06:10:54.966437 | orchestrator | skipping: [testbed-node-5]
2026-02-08 06:10:54.966448 | orchestrator |
2026-02-08 06:10:54.966459 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-02-08 06:10:54.966470 | orchestrator | Sunday 08 February 2026 06:10:50 +0000 (0:00:00.407) 0:19:48.327 *******
2026-02-08 06:10:54.966481 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)
2026-02-08 06:10:54.966492 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)
2026-02-08 06:10:54.966503 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)
2026-02-08 06:10:54.966513 | orchestrator | skipping: [testbed-node-5]
2026-02-08 06:10:54.966524 | orchestrator |
2026-02-08 06:10:54.966535 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-02-08 06:10:54.966546 | orchestrator | Sunday 08 February 2026 06:10:51 +0000 (0:00:00.771) 0:19:49.099 *******
2026-02-08 06:10:54.966557 | orchestrator | ok: [testbed-node-5]
2026-02-08 06:10:54.966568 | orchestrator |
2026-02-08 06:10:54.966580 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-02-08 06:10:54.966591 | orchestrator | Sunday 08 February 2026 06:10:51 +0000 (0:00:00.172) 0:19:49.271 *******
2026-02-08 06:10:54.966602 | orchestrator | ok: [testbed-node-5] => (item=0)
2026-02-08 06:10:54.966613 | orchestrator |
2026-02-08 06:10:54.966624 | orchestrator | TASK [ceph-config : Generate Ceph file] ****************************************
2026-02-08 06:10:54.966634 | orchestrator | Sunday 08 February 2026 06:10:52 +0000 (0:00:01.106) 0:19:50.378 *******
2026-02-08 06:10:54.966645 | orchestrator | changed: [testbed-node-5]
2026-02-08 06:10:54.966656 | orchestrator |
2026-02-08 06:10:54.966667 | orchestrator | TASK [ceph-osd : Set_fact add_osd] *********************************************
2026-02-08 06:10:54.966678 | orchestrator | Sunday 08 February 2026 06:10:53 +0000 (0:00:00.874) 0:19:51.252 *******
2026-02-08 06:10:54.966689 | orchestrator | ok: [testbed-node-5]
2026-02-08 06:10:54.966700 | orchestrator |
2026-02-08 06:10:54.966711 | orchestrator | TASK [ceph-osd : Set_fact container_exec_cmd] **********************************
2026-02-08 06:10:54.966722 | orchestrator | Sunday 08 February 2026 06:10:53 +0000 (0:00:00.151) 0:19:51.404 *******
2026-02-08 06:10:54.966733 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-02-08 06:10:54.966744 | orchestrator | ok: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-08 06:10:54.966755 | orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-08 06:10:54.966766 | orchestrator |
2026-02-08 06:10:54.966777 | orchestrator | TASK [ceph-osd : Include_tasks system_tuning.yml] ******************************
2026-02-08 06:10:54.966787 | orchestrator | Sunday 08 February 2026 06:10:54 +0000 (0:00:00.688) 0:19:52.092 *******
2026-02-08 06:10:54.966798 | orchestrator | included: /ansible/roles/ceph-osd/tasks/system_tuning.yml for testbed-node-5
2026-02-08 06:10:54.966809 | orchestrator |
2026-02-08 06:10:54.966820 | orchestrator | TASK [ceph-osd : Create tmpfiles.d directory] **********************************
2026-02-08 06:10:54.966860 | orchestrator | Sunday 08 February 2026 06:10:54 +0000 (0:00:00.197) 0:19:52.290 *******
2026-02-08 06:10:54.966871 | orchestrator | skipping: [testbed-node-5]
2026-02-08 06:10:54.966882 | orchestrator |
2026-02-08 06:10:54.966893 | orchestrator | TASK [ceph-osd : Disable transparent hugepage] *********************************
2026-02-08 06:10:54.966904 | orchestrator | Sunday 08 February 2026 06:10:54 +0000 (0:00:00.136)
0:19:52.427 *******
2026-02-08 06:10:54.966915 | orchestrator | skipping: [testbed-node-5]
2026-02-08 06:10:54.966927 | orchestrator |
2026-02-08 06:10:54.966944 | orchestrator | TASK [ceph-osd : Get default vm.min_free_kbytes] *******************************
2026-02-08 06:10:54.966955 | orchestrator | Sunday 08 February 2026 06:10:54 +0000 (0:00:00.138) 0:19:52.565 *******
2026-02-08 06:10:54.966966 | orchestrator | ok: [testbed-node-5]
2026-02-08 06:10:54.966977 | orchestrator |
2026-02-08 06:10:54.966996 | orchestrator | TASK [ceph-osd : Set_fact vm_min_free_kbytes] **********************************
2026-02-08 06:11:34.264708 | orchestrator | Sunday 08 February 2026 06:10:54 +0000 (0:00:00.436) 0:19:53.001 *******
2026-02-08 06:11:34.264822 | orchestrator | ok: [testbed-node-5]
2026-02-08 06:11:34.264839 | orchestrator |
2026-02-08 06:11:34.264928 | orchestrator | TASK [ceph-osd : Apply operating system tuning] ********************************
2026-02-08 06:11:34.264949 | orchestrator | Sunday 08 February 2026 06:10:55 +0000 (0:00:00.157) 0:19:53.159 *******
2026-02-08 06:11:34.264967 | orchestrator | ok: [testbed-node-5] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True})
2026-02-08 06:11:34.264988 | orchestrator | ok: [testbed-node-5] => (item={'name': 'fs.file-max', 'value': 26234859})
2026-02-08 06:11:34.265009 | orchestrator | ok: [testbed-node-5] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0})
2026-02-08 06:11:34.265029 | orchestrator | ok: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 10})
2026-02-08 06:11:34.265042 | orchestrator | ok: [testbed-node-5] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'})
2026-02-08 06:11:34.265053 | orchestrator |
2026-02-08 06:11:34.265064 | orchestrator | TASK [ceph-osd : Install dependencies] *****************************************
2026-02-08 06:11:34.265075 | orchestrator | Sunday 08 February 2026 06:10:56 +0000 (0:00:01.871) 0:19:55.031 *******
2026-02-08 06:11:34.265086 | orchestrator | skipping: [testbed-node-5]
2026-02-08 06:11:34.265098 | orchestrator |
2026-02-08 06:11:34.265109 | orchestrator | TASK [ceph-osd : Include_tasks common.yml] *************************************
2026-02-08 06:11:34.265120 | orchestrator | Sunday 08 February 2026 06:10:57 +0000 (0:00:00.440) 0:19:55.471 *******
2026-02-08 06:11:34.265131 | orchestrator | included: /ansible/roles/ceph-osd/tasks/common.yml for testbed-node-5
2026-02-08 06:11:34.265142 | orchestrator |
2026-02-08 06:11:34.265153 | orchestrator | TASK [ceph-osd : Create bootstrap-osd and osd directories] *********************
2026-02-08 06:11:34.265163 | orchestrator | Sunday 08 February 2026 06:10:57 +0000 (0:00:00.201) 0:19:55.673 *******
2026-02-08 06:11:34.265190 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd/)
2026-02-08 06:11:34.265202 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/osd/)
2026-02-08 06:11:34.265212 | orchestrator |
2026-02-08 06:11:34.265225 | orchestrator | TASK [ceph-osd : Get keys from monitors] ***************************************
2026-02-08 06:11:34.265236 | orchestrator | Sunday 08 February 2026 06:10:58 +0000 (0:00:00.817) 0:19:56.491 *******
2026-02-08 06:11:34.265252 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-02-08 06:11:34.265270 | orchestrator | skipping: [testbed-node-5] => (item=None)
2026-02-08 06:11:34.265288 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}]
2026-02-08 06:11:34.265306 | orchestrator |
2026-02-08 06:11:34.265324 | orchestrator | TASK [ceph-osd : Copy ceph key(s) if needed] ***********************************
2026-02-08 06:11:34.265342 | orchestrator | Sunday 08 February 2026 06:11:00 +0000 (0:00:02.231) 0:19:58.723 *******
2026-02-08 06:11:34.265360 | orchestrator | ok: [testbed-node-5] => (item=None)
2026-02-08 06:11:34.265379 | orchestrator | skipping: [testbed-node-5] => (item=None)
2026-02-08 06:11:34.265398 | orchestrator | ok: [testbed-node-5]
2026-02-08 06:11:34.265417 | orchestrator |
2026-02-08 06:11:34.265437 | orchestrator | TASK [ceph-osd : Set noup flag] ************************************************
2026-02-08 06:11:34.265456 | orchestrator | Sunday 08 February 2026 06:11:01 +0000 (0:00:00.989) 0:19:59.712 *******
2026-02-08 06:11:34.265474 | orchestrator | skipping: [testbed-node-5]
2026-02-08 06:11:34.265486 | orchestrator |
2026-02-08 06:11:34.265497 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm.yml] ******************************
2026-02-08 06:11:34.265508 | orchestrator | Sunday 08 February 2026 06:11:01 +0000 (0:00:00.251) 0:19:59.963 *******
2026-02-08 06:11:34.265542 | orchestrator | skipping: [testbed-node-5]
2026-02-08 06:11:34.265554 | orchestrator |
2026-02-08 06:11:34.265565 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm-batch.yml] ************************
2026-02-08 06:11:34.265576 | orchestrator | Sunday 08 February 2026 06:11:02 +0000 (0:00:00.149) 0:20:00.112 *******
2026-02-08 06:11:34.265587 | orchestrator | skipping: [testbed-node-5]
2026-02-08 06:11:34.265598 | orchestrator |
2026-02-08 06:11:34.265609 | orchestrator | TASK [ceph-osd : Include_tasks start_osds.yml] *********************************
2026-02-08 06:11:34.265619 | orchestrator | Sunday 08 February 2026 06:11:02 +0000 (0:00:00.125) 0:20:00.238 *******
2026-02-08 06:11:34.265630 | orchestrator | included: /ansible/roles/ceph-osd/tasks/start_osds.yml for testbed-node-5
2026-02-08 06:11:34.265641 | orchestrator |
2026-02-08 06:11:34.265652 | orchestrator | TASK [ceph-osd : Get osd ids] **************************************************
2026-02-08 06:11:34.265663 | orchestrator | Sunday 08 February 2026 06:11:02 +0000 (0:00:00.214) 0:20:00.453 *******
2026-02-08 06:11:34.265673 | orchestrator | ok: [testbed-node-5]
2026-02-08 06:11:34.265684 | orchestrator |
2026-02-08 06:11:34.265699 | orchestrator | TASK [ceph-osd : Collect osd ids] **********************************************
2026-02-08 06:11:34.265718 | orchestrator | Sunday 08 February 2026 06:11:02 +0000 (0:00:00.476) 0:20:00.930 *******
2026-02-08 06:11:34.265736 | orchestrator | ok: [testbed-node-5]
2026-02-08 06:11:34.265755 | orchestrator |
2026-02-08 06:11:34.265774 | orchestrator | TASK [ceph-osd : Include_tasks systemd.yml] ************************************
2026-02-08 06:11:34.265793 | orchestrator | Sunday 08 February 2026 06:11:05 +0000 (0:00:02.309) 0:20:03.240 *******
2026-02-08 06:11:34.265812 | orchestrator | included: /ansible/roles/ceph-osd/tasks/systemd.yml for testbed-node-5
2026-02-08 06:11:34.265831 | orchestrator |
2026-02-08 06:11:34.265890 | orchestrator | TASK [ceph-osd : Generate systemd unit file] ***********************************
2026-02-08 06:11:34.265903 | orchestrator | Sunday 08 February 2026 06:11:05 +0000 (0:00:00.548) 0:20:03.788 *******
2026-02-08 06:11:34.265914 | orchestrator | ok: [testbed-node-5]
2026-02-08 06:11:34.265925 | orchestrator |
2026-02-08 06:11:34.265936 | orchestrator | TASK [ceph-osd : Generate systemd ceph-osd target file] ************************
2026-02-08 06:11:34.265946 | orchestrator | Sunday 08 February 2026 06:11:06 +0000 (0:00:00.976) 0:20:04.765 *******
2026-02-08 06:11:34.265957 | orchestrator | ok: [testbed-node-5]
2026-02-08 06:11:34.265968 | orchestrator |
2026-02-08 06:11:34.265979 | orchestrator | TASK [ceph-osd : Enable ceph-osd.target] ***************************************
2026-02-08 06:11:34.266010 | orchestrator | Sunday 08 February 2026 06:11:07 +0000 (0:00:00.946) 0:20:05.712 *******
2026-02-08 06:11:34.266088 | orchestrator | ok: [testbed-node-5]
2026-02-08 06:11:34.266108 | orchestrator |
2026-02-08 06:11:34.266127 | orchestrator | TASK [ceph-osd : Ensure systemd service override directory exists] *************
2026-02-08 06:11:34.266146 | orchestrator | Sunday 08 February 2026 06:11:08 +0000 (0:00:01.229) 0:20:06.941 ******* 2026-02-08
06:11:34.266166 | orchestrator | skipping: [testbed-node-5]
2026-02-08 06:11:34.266185 | orchestrator |
2026-02-08 06:11:34.266202 | orchestrator | TASK [ceph-osd : Add ceph-osd systemd service overrides] ***********************
2026-02-08 06:11:34.266214 | orchestrator | Sunday 08 February 2026 06:11:09 +0000 (0:00:00.197) 0:20:07.138 *******
2026-02-08 06:11:34.266225 | orchestrator | skipping: [testbed-node-5]
2026-02-08 06:11:34.266235 | orchestrator |
2026-02-08 06:11:34.266246 | orchestrator | TASK [ceph-osd : Ensure /var/lib/ceph/osd/- is present] *********
2026-02-08 06:11:34.266257 | orchestrator | Sunday 08 February 2026 06:11:09 +0000 (0:00:00.149) 0:20:07.288 *******
2026-02-08 06:11:34.266268 | orchestrator | ok: [testbed-node-5] => (item=2)
2026-02-08 06:11:34.266279 | orchestrator | ok: [testbed-node-5] => (item=4)
2026-02-08 06:11:34.266290 | orchestrator |
2026-02-08 06:11:34.266303 | orchestrator | TASK [ceph-osd : Write run file in /var/lib/ceph/osd/xxxx/run] *****************
2026-02-08 06:11:34.266321 | orchestrator | Sunday 08 February 2026 06:11:10 +0000 (0:00:00.826) 0:20:08.115 *******
2026-02-08 06:11:34.266375 | orchestrator | ok: [testbed-node-5] => (item=2)
2026-02-08 06:11:34.266394 | orchestrator | ok: [testbed-node-5] => (item=4)
2026-02-08 06:11:34.266427 | orchestrator |
2026-02-08 06:11:34.266445 | orchestrator | TASK [ceph-osd : Systemd start osd] ********************************************
2026-02-08 06:11:34.266463 | orchestrator | Sunday 08 February 2026 06:11:11 +0000 (0:00:01.872) 0:20:09.987 *******
2026-02-08 06:11:34.266481 | orchestrator | changed: [testbed-node-5] => (item=2)
2026-02-08 06:11:34.266500 | orchestrator | changed: [testbed-node-5] => (item=4)
2026-02-08 06:11:34.266519 | orchestrator |
2026-02-08 06:11:34.266548 | orchestrator | TASK [ceph-osd : Unset noup flag] **********************************************
2026-02-08 06:11:34.266567 | orchestrator | Sunday 08 February 2026 06:11:15 +0000 (0:00:03.572) 0:20:13.559 *******
2026-02-08 06:11:34.266586 | orchestrator | skipping: [testbed-node-5]
2026-02-08 06:11:34.266605 | orchestrator |
2026-02-08 06:11:34.266623 | orchestrator | TASK [ceph-osd : Wait for all osd to be up] ************************************
2026-02-08 06:11:34.266639 | orchestrator | Sunday 08 February 2026 06:11:15 +0000 (0:00:00.290) 0:20:13.849 *******
2026-02-08 06:11:34.266650 | orchestrator | FAILED - RETRYING: [testbed-node-5 -> testbed-node-0]: Wait for all osd to be up (60 retries left).
2026-02-08 06:11:34.266661 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)]
2026-02-08 06:11:34.266672 | orchestrator |
2026-02-08 06:11:34.266683 | orchestrator | TASK [ceph-osd : Include crush_rules.yml] **************************************
2026-02-08 06:11:34.266694 | orchestrator | Sunday 08 February 2026 06:11:28 +0000 (0:00:12.385) 0:20:26.235 *******
2026-02-08 06:11:34.266704 | orchestrator | skipping: [testbed-node-5]
2026-02-08 06:11:34.266715 | orchestrator |
2026-02-08 06:11:34.266726 | orchestrator | TASK [Scan ceph-disk osds with ceph-volume if deploying nautilus] **************
2026-02-08 06:11:34.266737 | orchestrator | Sunday 08 February 2026 06:11:28 +0000 (0:00:00.321) 0:20:26.557 *******
2026-02-08 06:11:34.266748 | orchestrator | skipping: [testbed-node-5]
2026-02-08 06:11:34.266759 | orchestrator |
2026-02-08 06:11:34.266769 | orchestrator | TASK [Activate scanned ceph-disk osds and migrate to ceph-volume if deploying nautilus] ***
2026-02-08 06:11:34.266780 | orchestrator | Sunday 08 February 2026 06:11:28 +0000 (0:00:00.478) 0:20:27.035 *******
2026-02-08 06:11:34.266791 | orchestrator | skipping: [testbed-node-5]
2026-02-08 06:11:34.266802 | orchestrator |
2026-02-08 06:11:34.266813 | orchestrator | TASK [Waiting for clean pgs...] ************************************************
2026-02-08 06:11:34.266830 | orchestrator | Sunday 08 February 2026 06:11:29 +0000 (0:00:00.160) 0:20:27.196 *******
2026-02-08 06:11:34.266872 | orchestrator | FAILED - RETRYING: [testbed-node-5 -> testbed-node-0]: Waiting for clean pgs... (600 retries left).
2026-02-08 06:11:34.266892 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)]
2026-02-08 06:11:34.266910 | orchestrator |
2026-02-08 06:11:34.266929 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2026-02-08 06:11:34.266948 | orchestrator | Sunday 08 February 2026 06:11:33 +0000 (0:00:04.297) 0:20:31.493 *******
2026-02-08 06:11:34.266966 | orchestrator | skipping: [testbed-node-5]
2026-02-08 06:11:34.266980 | orchestrator |
2026-02-08 06:11:34.266991 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] **********************************
2026-02-08 06:11:34.267002 | orchestrator | Sunday 08 February 2026 06:11:33 +0000 (0:00:00.134) 0:20:31.628 *******
2026-02-08 06:11:34.267012 | orchestrator | skipping: [testbed-node-5]
2026-02-08 06:11:34.267023 | orchestrator |
2026-02-08 06:11:34.267034 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] **********************************
2026-02-08 06:11:34.267045 | orchestrator | Sunday 08 February 2026 06:11:33 +0000 (0:00:00.133) 0:20:31.762 *******
2026-02-08 06:11:34.267056 | orchestrator | skipping: [testbed-node-5]
2026-02-08 06:11:34.267067 | orchestrator |
2026-02-08 06:11:34.267077 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] **********************************
2026-02-08 06:11:34.267088 | orchestrator | Sunday 08 February 2026 06:11:33 +0000 (0:00:00.136) 0:20:31.899 *******
2026-02-08 06:11:34.267099 | orchestrator | skipping: [testbed-node-5]
2026-02-08 06:11:34.267110 | orchestrator |
2026-02-08 06:11:34.267121 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] **********************************
2026-02-08 06:11:34.267143 | orchestrator | Sunday 08 February 2026 06:11:33 +0000 (0:00:00.136) 0:20:32.035 *******
2026-02-08 06:11:34.267154 | orchestrator | skipping: [testbed-node-5]
2026-02-08 06:11:34.267165 | orchestrator |
2026-02-08 06:11:34.267176 | orchestrator | RUNNING HANDLER [ceph-handler : Rbdmirrors handler] ****************************
2026-02-08 06:11:34.267187 | orchestrator | Sunday 08 February 2026 06:11:34 +0000 (0:00:00.128) 0:20:32.163 *******
2026-02-08 06:11:34.267199 | orchestrator | skipping: [testbed-node-5]
2026-02-08 06:11:34.267218 | orchestrator |
2026-02-08 06:11:34.267236 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] **********************************
2026-02-08 06:11:34.267270 | orchestrator | Sunday 08 February 2026 06:11:34 +0000 (0:00:00.137) 0:20:32.300 *******
2026-02-08 06:13:06.393538 | orchestrator | skipping: [testbed-node-5]
2026-02-08 06:13:06.393651 | orchestrator |
2026-02-08 06:13:06.393667 | orchestrator | PLAY [Complete osd upgrade] ****************************************************
2026-02-08 06:13:06.393678 | orchestrator |
2026-02-08 06:13:06.393689 | orchestrator | TASK [ceph-facts : Check if podman binary is present] **************************
2026-02-08 06:13:06.393700 | orchestrator | Sunday 08 February 2026 06:11:35 +0000 (0:00:01.176) 0:20:33.477 *******
2026-02-08 06:13:06.393711 | orchestrator | ok: [testbed-node-3]
2026-02-08 06:13:06.393723 | orchestrator | ok: [testbed-node-4]
2026-02-08 06:13:06.393733 | orchestrator | ok: [testbed-node-5]
2026-02-08 06:13:06.393743 | orchestrator |
2026-02-08 06:13:06.393754 | orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
2026-02-08 06:13:06.393763 | orchestrator | Sunday 08 February 2026 06:11:36 +0000 (0:00:00.700) 0:20:34.178 *******
2026-02-08 06:13:06.393775 | orchestrator | ok: [testbed-node-3]
2026-02-08 06:13:06.393784 | orchestrator | ok:
[testbed-node-4]
2026-02-08 06:13:06.393795 | orchestrator | ok: [testbed-node-5]
2026-02-08 06:13:06.393805 | orchestrator |
2026-02-08 06:13:06.393816 | orchestrator | TASK [Re-enable pg autoscale on pools] *****************************************
2026-02-08 06:13:06.393826 | orchestrator | Sunday 08 February 2026 06:11:36 +0000 (0:00:00.568) 0:20:34.746 *******
2026-02-08 06:13:06.393836 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': '.mgr', 'mode': 'on'})
2026-02-08 06:13:06.393848 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': 'cephfs_data', 'mode': 'on'})
2026-02-08 06:13:06.393859 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': 'cephfs_metadata', 'mode': 'on'})
2026-02-08 06:13:06.393869 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': 'default.rgw.buckets.data', 'mode': 'on'})
2026-02-08 06:13:06.393946 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': 'default.rgw.buckets.index', 'mode': 'on'})
2026-02-08 06:13:06.393958 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': 'default.rgw.control', 'mode': 'on'})
2026-02-08 06:13:06.393968 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': 'default.rgw.log', 'mode': 'on'})
2026-02-08 06:13:06.393979 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': 'default.rgw.meta', 'mode': 'on'})
2026-02-08 06:13:06.393989 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': '.rgw.root', 'mode': 'on'})
2026-02-08 06:13:06.394000 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'backups', 'mode': 'off'})
2026-02-08 06:13:06.394010 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'volumes', 'mode': 'off'})
2026-02-08 06:13:06.394075 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'images', 'mode': 'off'})
2026-02-08 06:13:06.394086 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'metrics', 'mode': 'off'})
2026-02-08 06:13:06.394097 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'vms', 'mode': 'off'})
2026-02-08 06:13:06.394109 | orchestrator |
2026-02-08 06:13:06.394120 | orchestrator | TASK [Unset osd flags] *********************************************************
2026-02-08 06:13:06.394154 | orchestrator | Sunday 08 February 2026 06:12:51 +0000 (0:01:14.424) 0:21:49.171 *******
2026-02-08 06:13:06.394165 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=noout)
2026-02-08 06:13:06.394177 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=nodeep-scrub)
2026-02-08 06:13:06.394188 | orchestrator |
2026-02-08 06:13:06.394198 | orchestrator | TASK [Re-enable balancer] ******************************************************
2026-02-08 06:13:06.394209 | orchestrator | Sunday 08 February 2026 06:12:55 +0000 (0:00:04.712) 0:21:53.883 *******
2026-02-08 06:13:06.394221 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-02-08 06:13:06.394233 | orchestrator |
2026-02-08 06:13:06.394245 | orchestrator | PLAY [Upgrade ceph mdss cluster, deactivate all rank > 0] **********************
2026-02-08 06:13:06.394252 | orchestrator |
2026-02-08 06:13:06.394260 | orchestrator | TASK [ceph-facts : Include facts.yml] ******************************************
2026-02-08 06:13:06.394268 | orchestrator | Sunday 08 February 2026 06:12:58 +0000 (0:00:02.871) 0:21:56.755 *******
2026-02-08 06:13:06.394275 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-0
2026-02-08 06:13:06.394283 | orchestrator |
2026-02-08 06:13:06.394290 | orchestrator | TASK [ceph-facts : Check if it is atomic host] *********************************
2026-02-08 06:13:06.394297 | orchestrator | Sunday 08 February 2026 06:12:58 +0000 (0:00:00.283) 0:21:57.038 *******
2026-02-08 06:13:06.394305 | orchestrator | ok: [testbed-node-0]
2026-02-08 06:13:06.394314 | orchestrator |
2026-02-08 06:13:06.394321 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] *****************************************
2026-02-08 06:13:06.394338 | orchestrator | Sunday 08 February 2026 06:12:59 +0000 (0:00:00.484) 0:21:57.523 *******
2026-02-08 06:13:06.394345 | orchestrator | ok: [testbed-node-0]
2026-02-08 06:13:06.394353 | orchestrator |
2026-02-08 06:13:06.394360 | orchestrator | TASK [ceph-facts : Check if podman binary is present] **************************
2026-02-08 06:13:06.394367 | orchestrator | Sunday 08 February 2026 06:12:59 +0000 (0:00:00.455) 0:21:57.701 *******
2026-02-08 06:13:06.394374 | orchestrator | ok: [testbed-node-0]
2026-02-08 06:13:06.394382 | orchestrator |
2026-02-08 06:13:06.394389 | orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
2026-02-08 06:13:06.394397 | orchestrator | Sunday 08 February 2026 06:13:00 +0000 (0:00:00.163) 0:21:58.156 *******
2026-02-08 06:13:06.394404 | orchestrator | ok: [testbed-node-0]
2026-02-08 06:13:06.394411 | orchestrator |
2026-02-08 06:13:06.394419 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ******************************************
2026-02-08 06:13:06.394444 | orchestrator | Sunday 08 February 2026 06:13:00 +0000 (0:00:00.179) 0:21:58.320 *******
2026-02-08 06:13:06.394452 | orchestrator | ok: [testbed-node-0]
2026-02-08 06:13:06.394460 | orchestrator |
2026-02-08 06:13:06.394467 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] *********************
2026-02-08 06:13:06.394475 | orchestrator | Sunday 08 February 2026 06:13:00 +0000 (0:00:00.200) 0:21:58.500 *******
2026-02-08 06:13:06.394481 | orchestrator | ok: [testbed-node-0]
2026-02-08 06:13:06.394487 | orchestrator |
2026-02-08 06:13:06.394493 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] ***
2026-02-08 06:13:06.394500 | orchestrator | Sunday 08 February 2026 06:13:00 +0000 (0:00:00.190) 0:21:58.700 *******
2026-02-08 06:13:06.394506 | orchestrator | skipping: [testbed-node-0]
2026-02-08 06:13:06.394513 | orchestrator |
2026-02-08 06:13:06.394519 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ******************
2026-02-08 06:13:06.394525 | orchestrator | Sunday 08 February 2026 06:13:00 +0000 (0:00:00.212) 0:21:58.891 *******
2026-02-08 06:13:06.394531 | orchestrator | ok: [testbed-node-0]
2026-02-08 06:13:06.394537 | orchestrator |
2026-02-08 06:13:06.394544 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************
2026-02-08 06:13:06.394550 | orchestrator | Sunday 08 February 2026 06:13:01 +0000 (0:00:00.212) 0:21:59.103 *******
2026-02-08 06:13:06.394557 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-02-08 06:13:06.394563 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-08 06:13:06.394576 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-08 06:13:06.394582 | orchestrator |
2026-02-08 06:13:06.394588 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ********************************
2026-02-08 06:13:06.394595 | orchestrator | Sunday 08 February 2026 06:13:02 +0000 (0:00:01.086) 0:22:00.190 *******
2026-02-08 06:13:06.394601 | orchestrator | ok: [testbed-node-0]
2026-02-08 06:13:06.394607 | orchestrator |
2026-02-08 06:13:06.394619 | orchestrator | TASK [ceph-facts : Find a running mon container] *******************************
2026-02-08 06:13:06.394625 | orchestrator | Sunday 08 February 2026 06:13:02 +0000 (0:00:00.271) 0:22:00.461 *******
2026-02-08 06:13:06.394632 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-02-08 06:13:06.394638 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-08 06:13:06.394644 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-08 06:13:06.394651 | orchestrator |
2026-02-08 06:13:06.394657 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ********************************
2026-02-08 06:13:06.394663 | orchestrator | Sunday 08 February 2026 06:13:05 +0000 (0:00:02.605) 0:22:03.067 *******
2026-02-08 06:13:06.394669 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-02-08 06:13:06.394675 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-02-08 06:13:06.394682 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-02-08 06:13:06.394688 | orchestrator | skipping: [testbed-node-0]
2026-02-08 06:13:06.394694 | orchestrator |
2026-02-08 06:13:06.394701 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] *********************
2026-02-08 06:13:06.394707 | orchestrator | Sunday 08 February 2026 06:13:05 +0000 (0:00:00.467) 0:22:03.534 *******
2026-02-08 06:13:06.394714 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-02-08 06:13:06.394723 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-02-08 06:13:06.394730 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item':
'testbed-node-2', 'ansible_loop_var': 'item'})  2026-02-08 06:13:06.394736 | orchestrator | skipping: [testbed-node-0] 2026-02-08 06:13:06.394743 | orchestrator | 2026-02-08 06:13:06.394749 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-02-08 06:13:06.394755 | orchestrator | Sunday 08 February 2026 06:13:06 +0000 (0:00:00.678) 0:22:04.213 ******* 2026-02-08 06:13:06.394763 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-08 06:13:06.394773 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-08 06:13:06.394785 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-08 06:13:10.669385 | orchestrator | skipping: [testbed-node-0] 2026-02-08 06:13:10.669487 | orchestrator | 2026-02-08 06:13:10.669503 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] 
*************************** 2026-02-08 06:13:10.669516 | orchestrator | Sunday 08 February 2026 06:13:06 +0000 (0:00:00.218) 0:22:04.432 ******* 2026-02-08 06:13:10.669529 | orchestrator | ok: [testbed-node-0] => (item={'changed': False, 'stdout': 'd0204e5b0336', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-02-08 06:13:03.299118', 'end': '2026-02-08 06:13:03.356336', 'delta': '0:00:00.057218', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['d0204e5b0336'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2026-02-08 06:13:10.669559 | orchestrator | ok: [testbed-node-0] => (item={'changed': False, 'stdout': 'c9ff2bec9773', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-02-08 06:13:03.873627', 'end': '2026-02-08 06:13:03.925820', 'delta': '0:00:00.052193', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['c9ff2bec9773'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2026-02-08 06:13:10.669571 | orchestrator | ok: [testbed-node-0] => (item={'changed': False, 'stdout': '26deff989c40', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-02-08 06:13:04.458435', 'end': 
'2026-02-08 06:13:04.524187', 'delta': '0:00:00.065752', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['26deff989c40'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2026-02-08 06:13:10.669581 | orchestrator | 2026-02-08 06:13:10.669591 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-02-08 06:13:10.669601 | orchestrator | Sunday 08 February 2026 06:13:06 +0000 (0:00:00.250) 0:22:04.682 ******* 2026-02-08 06:13:10.669612 | orchestrator | ok: [testbed-node-0] 2026-02-08 06:13:10.669623 | orchestrator | 2026-02-08 06:13:10.669633 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-02-08 06:13:10.669642 | orchestrator | Sunday 08 February 2026 06:13:06 +0000 (0:00:00.292) 0:22:04.975 ******* 2026-02-08 06:13:10.669652 | orchestrator | skipping: [testbed-node-0] 2026-02-08 06:13:10.669662 | orchestrator | 2026-02-08 06:13:10.669672 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2026-02-08 06:13:10.669682 | orchestrator | Sunday 08 February 2026 06:13:07 +0000 (0:00:00.253) 0:22:05.228 ******* 2026-02-08 06:13:10.669691 | orchestrator | ok: [testbed-node-0] 2026-02-08 06:13:10.669701 | orchestrator | 2026-02-08 06:13:10.669711 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2026-02-08 06:13:10.669720 | orchestrator | Sunday 08 February 2026 06:13:07 +0000 (0:00:00.157) 0:22:05.386 ******* 2026-02-08 06:13:10.669750 | orchestrator | ok: [testbed-node-0] 2026-02-08 06:13:10.669761 | orchestrator | 2026-02-08 
06:13:10.669771 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-02-08 06:13:10.669780 | orchestrator | Sunday 08 February 2026 06:13:08 +0000 (0:00:01.006) 0:22:06.393 ******* 2026-02-08 06:13:10.669790 | orchestrator | ok: [testbed-node-0] 2026-02-08 06:13:10.669799 | orchestrator | 2026-02-08 06:13:10.669809 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-02-08 06:13:10.669819 | orchestrator | Sunday 08 February 2026 06:13:08 +0000 (0:00:00.150) 0:22:06.543 ******* 2026-02-08 06:13:10.669828 | orchestrator | skipping: [testbed-node-0] 2026-02-08 06:13:10.669838 | orchestrator | 2026-02-08 06:13:10.669848 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-02-08 06:13:10.669858 | orchestrator | Sunday 08 February 2026 06:13:08 +0000 (0:00:00.145) 0:22:06.689 ******* 2026-02-08 06:13:10.669867 | orchestrator | skipping: [testbed-node-0] 2026-02-08 06:13:10.669910 | orchestrator | 2026-02-08 06:13:10.669922 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-02-08 06:13:10.669933 | orchestrator | Sunday 08 February 2026 06:13:08 +0000 (0:00:00.285) 0:22:06.975 ******* 2026-02-08 06:13:10.669945 | orchestrator | skipping: [testbed-node-0] 2026-02-08 06:13:10.669956 | orchestrator | 2026-02-08 06:13:10.669983 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-02-08 06:13:10.669995 | orchestrator | Sunday 08 February 2026 06:13:09 +0000 (0:00:00.165) 0:22:07.140 ******* 2026-02-08 06:13:10.670006 | orchestrator | skipping: [testbed-node-0] 2026-02-08 06:13:10.670069 | orchestrator | 2026-02-08 06:13:10.670081 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-02-08 06:13:10.670092 | orchestrator | Sunday 08 February 2026 06:13:09 +0000 (0:00:00.481) 
0:22:07.621 ******* 2026-02-08 06:13:10.670103 | orchestrator | skipping: [testbed-node-0] 2026-02-08 06:13:10.670122 | orchestrator | 2026-02-08 06:13:10.670133 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-02-08 06:13:10.670144 | orchestrator | Sunday 08 February 2026 06:13:09 +0000 (0:00:00.163) 0:22:07.785 ******* 2026-02-08 06:13:10.670155 | orchestrator | skipping: [testbed-node-0] 2026-02-08 06:13:10.670166 | orchestrator | 2026-02-08 06:13:10.670176 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-02-08 06:13:10.670188 | orchestrator | Sunday 08 February 2026 06:13:09 +0000 (0:00:00.152) 0:22:07.937 ******* 2026-02-08 06:13:10.670199 | orchestrator | skipping: [testbed-node-0] 2026-02-08 06:13:10.670210 | orchestrator | 2026-02-08 06:13:10.670221 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-02-08 06:13:10.670232 | orchestrator | Sunday 08 February 2026 06:13:10 +0000 (0:00:00.155) 0:22:08.093 ******* 2026-02-08 06:13:10.670242 | orchestrator | skipping: [testbed-node-0] 2026-02-08 06:13:10.670253 | orchestrator | 2026-02-08 06:13:10.670272 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-02-08 06:13:10.670284 | orchestrator | Sunday 08 February 2026 06:13:10 +0000 (0:00:00.155) 0:22:08.249 ******* 2026-02-08 06:13:10.670294 | orchestrator | skipping: [testbed-node-0] 2026-02-08 06:13:10.670304 | orchestrator | 2026-02-08 06:13:10.670313 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-02-08 06:13:10.670323 | orchestrator | Sunday 08 February 2026 06:13:10 +0000 (0:00:00.152) 0:22:08.401 ******* 2026-02-08 06:13:10.670333 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 
'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-08 06:13:10.670346 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-08 06:13:10.670365 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-08 06:13:10.670377 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-08-02-32-50-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-02-08 06:13:10.670388 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 
'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-08 06:13:10.670399 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-08 06:13:10.670416 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-08 06:13:10.994215 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3e566a5b-bb0c-4ece-9641-6f7efc673353', 'scsi-SQEMU_QEMU_HARDDISK_3e566a5b-bb0c-4ece-9641-6f7efc673353'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '3e566a5b', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3e566a5b-bb0c-4ece-9641-6f7efc673353-part16', 'scsi-SQEMU_QEMU_HARDDISK_3e566a5b-bb0c-4ece-9641-6f7efc673353-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 
'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3e566a5b-bb0c-4ece-9641-6f7efc673353-part14', 'scsi-SQEMU_QEMU_HARDDISK_3e566a5b-bb0c-4ece-9641-6f7efc673353-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3e566a5b-bb0c-4ece-9641-6f7efc673353-part15', 'scsi-SQEMU_QEMU_HARDDISK_3e566a5b-bb0c-4ece-9641-6f7efc673353-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3e566a5b-bb0c-4ece-9641-6f7efc673353-part1', 'scsi-SQEMU_QEMU_HARDDISK_3e566a5b-bb0c-4ece-9641-6f7efc673353-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}})  2026-02-08 06:13:10.994350 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-08 06:13:10.994371 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-08 06:13:10.994392 | orchestrator | skipping: [testbed-node-0] 2026-02-08 06:13:10.994413 | orchestrator | 2026-02-08 06:13:10.994433 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-02-08 06:13:10.994452 | orchestrator | Sunday 08 February 2026 06:13:10 +0000 (0:00:00.307) 0:22:08.708 ******* 2026-02-08 06:13:10.994474 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-08 06:13:10.994519 | 
orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-08 06:13:10.994550 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-08 06:13:10.994571 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-08-02-32-50-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 
82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-08 06:13:10.994601 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-08 06:13:10.994620 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-08 06:13:10.994638 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 
'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-08 06:13:10.994684 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3e566a5b-bb0c-4ece-9641-6f7efc673353', 'scsi-SQEMU_QEMU_HARDDISK_3e566a5b-bb0c-4ece-9641-6f7efc673353'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '3e566a5b', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3e566a5b-bb0c-4ece-9641-6f7efc673353-part16', 'scsi-SQEMU_QEMU_HARDDISK_3e566a5b-bb0c-4ece-9641-6f7efc673353-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3e566a5b-bb0c-4ece-9641-6f7efc673353-part14', 'scsi-SQEMU_QEMU_HARDDISK_3e566a5b-bb0c-4ece-9641-6f7efc673353-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3e566a5b-bb0c-4ece-9641-6f7efc673353-part15', 'scsi-SQEMU_QEMU_HARDDISK_3e566a5b-bb0c-4ece-9641-6f7efc673353-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3e566a5b-bb0c-4ece-9641-6f7efc673353-part1', 'scsi-SQEMU_QEMU_HARDDISK_3e566a5b-bb0c-4ece-9641-6f7efc673353-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 
'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-08 06:13:37.434850 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-08 06:13:37.435056 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-08 06:13:37.435080 | orchestrator | skipping: [testbed-node-0] 2026-02-08 06:13:37.435096 | orchestrator | 2026-02-08 06:13:37.435130 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-02-08 06:13:37.435147 | 
orchestrator | Sunday 08 February 2026 06:13:10 +0000 (0:00:00.320) 0:22:09.029 *******
2026-02-08 06:13:37.435162 | orchestrator | ok: [testbed-node-0]
2026-02-08 06:13:37.435176 | orchestrator |
2026-02-08 06:13:37.435189 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] ***************
2026-02-08 06:13:37.435204 | orchestrator | Sunday 08 February 2026 06:13:11 +0000 (0:00:00.563) 0:22:09.592 *******
2026-02-08 06:13:37.435217 | orchestrator | ok: [testbed-node-0]
2026-02-08 06:13:37.435230 | orchestrator |
2026-02-08 06:13:37.435243 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-02-08 06:13:37.435257 | orchestrator | Sunday 08 February 2026 06:13:11 +0000 (0:00:00.172) 0:22:09.765 *******
2026-02-08 06:13:37.435270 | orchestrator | ok: [testbed-node-0]
2026-02-08 06:13:37.435283 | orchestrator |
2026-02-08 06:13:37.435296 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-02-08 06:13:37.435310 | orchestrator | Sunday 08 February 2026 06:13:12 +0000 (0:00:00.511) 0:22:10.277 *******
2026-02-08 06:13:37.435324 | orchestrator | skipping: [testbed-node-0]
2026-02-08 06:13:37.435339 | orchestrator |
2026-02-08 06:13:37.435352 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-02-08 06:13:37.435361 | orchestrator | Sunday 08 February 2026 06:13:12 +0000 (0:00:00.153) 0:22:10.430 *******
2026-02-08 06:13:37.435369 | orchestrator | skipping: [testbed-node-0]
2026-02-08 06:13:37.435378 | orchestrator |
2026-02-08 06:13:37.435386 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-02-08 06:13:37.435394 | orchestrator | Sunday 08 February 2026 06:13:12 +0000 (0:00:00.252) 0:22:10.683 *******
2026-02-08 06:13:37.435426 | orchestrator | skipping: [testbed-node-0]
2026-02-08 06:13:37.435436 | orchestrator |
2026-02-08 06:13:37.435445 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] *************************
2026-02-08 06:13:37.435455 | orchestrator | Sunday 08 February 2026 06:13:13 +0000 (0:00:00.514) 0:22:11.197 *******
2026-02-08 06:13:37.435464 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-02-08 06:13:37.435474 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1)
2026-02-08 06:13:37.435483 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2)
2026-02-08 06:13:37.435492 | orchestrator |
2026-02-08 06:13:37.435501 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] *************************
2026-02-08 06:13:37.435524 | orchestrator | Sunday 08 February 2026 06:13:13 +0000 (0:00:00.750) 0:22:11.948 *******
2026-02-08 06:13:37.435534 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-02-08 06:13:37.435544 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-02-08 06:13:37.435553 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-02-08 06:13:37.435562 | orchestrator | skipping: [testbed-node-0]
2026-02-08 06:13:37.435571 | orchestrator |
2026-02-08 06:13:37.435580 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] ***********************
2026-02-08 06:13:37.435590 | orchestrator | Sunday 08 February 2026 06:13:14 +0000 (0:00:00.211) 0:22:12.159 *******
2026-02-08 06:13:37.435599 | orchestrator | skipping: [testbed-node-0]
2026-02-08 06:13:37.435609 | orchestrator |
2026-02-08 06:13:37.435619 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] **************************************
2026-02-08 06:13:37.435628 | orchestrator | Sunday 08 February 2026 06:13:14 +0000 (0:00:00.165) 0:22:12.325 *******
2026-02-08 06:13:37.435637 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-02-08 06:13:37.435646 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-08 06:13:37.435656 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-08 06:13:37.435665 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
2026-02-08 06:13:37.435675 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-02-08 06:13:37.435685 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-02-08 06:13:37.435712 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-02-08 06:13:37.435722 | orchestrator |
2026-02-08 06:13:37.435732 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ********************************
2026-02-08 06:13:37.435741 | orchestrator | Sunday 08 February 2026 06:13:15 +0000 (0:00:00.967) 0:22:13.292 *******
2026-02-08 06:13:37.435750 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-02-08 06:13:37.435760 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-08 06:13:37.435769 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-08 06:13:37.435777 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
2026-02-08 06:13:37.435785 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-02-08 06:13:37.435793 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-02-08 06:13:37.435801 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-02-08 06:13:37.435809 | orchestrator |
2026-02-08 06:13:37.435817 | orchestrator | TASK [Set max_mds 1 on ceph fs] ************************************************
2026-02-08 06:13:37.435825 | orchestrator | Sunday 08 February 2026 06:13:17 +0000 (0:00:01.776) 0:22:15.069 *******
2026-02-08 06:13:37.435833 | orchestrator | ok: [testbed-node-0]
2026-02-08 06:13:37.435841 | orchestrator |
2026-02-08 06:13:37.435849 | orchestrator | TASK [Wait until only rank 0 is up] ********************************************
2026-02-08 06:13:37.435856 | orchestrator | Sunday 08 February 2026 06:13:19 +0000 (0:00:02.192) 0:22:17.261 *******
2026-02-08 06:13:37.435870 | orchestrator | ok: [testbed-node-0]
2026-02-08 06:13:37.435878 | orchestrator |
2026-02-08 06:13:37.435922 | orchestrator | TASK [Get name of remaining active mds] ****************************************
2026-02-08 06:13:37.435931 | orchestrator | Sunday 08 February 2026 06:13:21 +0000 (0:00:02.026) 0:22:19.287 *******
2026-02-08 06:13:37.435939 | orchestrator | ok: [testbed-node-0]
2026-02-08 06:13:37.435947 | orchestrator |
2026-02-08 06:13:37.435955 | orchestrator | TASK [Set_fact mds_active_name] ************************************************
2026-02-08 06:13:37.435963 | orchestrator | Sunday 08 February 2026 06:13:22 +0000 (0:00:01.163) 0:22:20.451 *******
2026-02-08 06:13:37.435973 | orchestrator | ok: [testbed-node-0] => (item={'key': 'gid_4770', 'value': {'gid': 4770, 'name': 'testbed-node-5', 'rank': 0, 'incarnation': 3, 'state': 'up:active', 'state_seq': 2, 'addr': '192.168.16.15:6817/1954054039', 'addrs': {'addrvec': [{'type': 'v2', 'addr': '192.168.16.15:6816', 'nonce': 1954054039}, {'type': 'v1', 'addr': '192.168.16.15:6817', 'nonce': 1954054039}]}, 'join_fscid': -1, 'export_targets': [], 'features': 4540138322906710015, 'flags': 0, 'compat': {'compat': {}, 'ro_compat': {}, 'incompat': {'feature_1': 'base v0.20', 'feature_2': 'client writeable ranges', 'feature_3': 'default file layouts on dirs', 'feature_4': 'dir inode in separate object', 'feature_5': 'mds uses versioned encoding', 'feature_6': 'dirfrag is stored in omap', 'feature_7': 'mds uses inline data', 'feature_8': 'no anchor table', 'feature_9': 'file layout v2', 'feature_10': 'snaprealm v2'}}}})
2026-02-08 06:13:37.435983 | orchestrator |
2026-02-08 06:13:37.435991 | orchestrator | TASK [Set_fact mds_active_host] ************************************************
2026-02-08 06:13:37.435999 | orchestrator | Sunday 08 February 2026 06:13:22 +0000 (0:00:00.188) 0:22:20.639 *******
2026-02-08 06:13:37.436007 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2026-02-08 06:13:37.436015 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2026-02-08 06:13:37.436023 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-5)
2026-02-08 06:13:37.436031 | orchestrator |
2026-02-08 06:13:37.436039 | orchestrator | TASK [Create standby_mdss group] ***********************************************
2026-02-08 06:13:37.436051 | orchestrator | Sunday 08 February 2026 06:13:23 +0000 (0:00:00.903) 0:22:21.543 *******
2026-02-08 06:13:37.436059 | orchestrator | changed: [testbed-node-0] => (item=testbed-node-3)
2026-02-08 06:13:37.436067 | orchestrator | changed: [testbed-node-0] => (item=testbed-node-4)
2026-02-08 06:13:37.436075 | orchestrator |
2026-02-08 06:13:37.436083 | orchestrator | TASK [Stop standby ceph mds] ***************************************************
2026-02-08 06:13:37.436091 | orchestrator | Sunday 08 February 2026 06:13:24 +0000 (0:00:00.884) 0:22:22.427 *******
2026-02-08 06:13:37.436099 | orchestrator | changed: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
2026-02-08 06:13:37.436107 | orchestrator | changed: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-02-08 06:13:37.436115 | orchestrator |
2026-02-08 06:13:37.436122 | orchestrator | TASK [Mask systemd units for standby ceph mds] *********************************
2026-02-08 06:13:37.436130 | orchestrator | Sunday 08 February 2026 06:13:33 +0000 (0:00:09.225) 0:22:31.653 *******
2026-02-08 06:13:37.436138 | orchestrator | changed: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
2026-02-08 06:13:37.436146 | orchestrator | changed: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-02-08 06:13:37.436154 | orchestrator |
2026-02-08 06:13:37.436162 | orchestrator | TASK [Wait until all standbys mds are stopped] *********************************
2026-02-08 06:13:37.436169 | orchestrator | Sunday 08 February 2026 06:13:36 +0000 (0:00:02.705) 0:22:34.358 *******
2026-02-08 06:13:37.436177 | orchestrator | ok: [testbed-node-0]
2026-02-08 06:13:37.436185 | orchestrator |
2026-02-08 06:13:37.436193 | orchestrator | TASK [Create active_mdss group] ************************************************
2026-02-08 06:13:37.436207 | orchestrator | Sunday 08 February 2026 06:13:37 +0000 (0:00:01.109) 0:22:35.468 *******
2026-02-08 06:13:45.110497 | orchestrator | changed: [testbed-node-0]
2026-02-08 06:13:45.110613 | orchestrator |
2026-02-08 06:13:45.110630 | orchestrator | PLAY [Upgrade active mds] ******************************************************
2026-02-08 06:13:45.110643 | orchestrator |
2026-02-08 06:13:45.110654 | orchestrator | TASK [ceph-facts : Include facts.yml] ******************************************
2026-02-08 06:13:45.110665 | orchestrator | Sunday 08 February 2026 06:13:38 +0000 (0:00:00.820) 0:22:36.289 *******
2026-02-08 06:13:45.110677 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-5
2026-02-08 06:13:45.110687 | orchestrator |
2026-02-08 06:13:45.110698 | orchestrator | TASK [ceph-facts : Check if it is atomic host] *********************************
2026-02-08 06:13:45.110709 | orchestrator | Sunday 08 February 2026 06:13:38 +0000 (0:00:00.249) 0:22:36.538 *******
2026-02-08 06:13:45.110720 | orchestrator | ok: [testbed-node-5]
2026-02-08 06:13:45.110733 | orchestrator |
2026-02-08 06:13:45.110744 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] *****************************************
2026-02-08 06:13:45.110756 | orchestrator | Sunday 08 February 2026 06:13:38 +0000 (0:00:00.461) 0:22:37.000 *******
2026-02-08 06:13:45.110766 | orchestrator | ok: [testbed-node-5]
2026-02-08 06:13:45.110777 | orchestrator |
2026-02-08 06:13:45.110788 | orchestrator | TASK [ceph-facts : Check if podman binary is present] **************************
2026-02-08 06:13:45.110799 | orchestrator | Sunday 08 February 2026 06:13:39 +0000 (0:00:00.178) 0:22:37.178 *******
2026-02-08 06:13:45.110810 | orchestrator | ok: [testbed-node-5]
2026-02-08 06:13:45.110821 | orchestrator |
2026-02-08 06:13:45.110831 | orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
2026-02-08 06:13:45.110842 | orchestrator | Sunday 08 February 2026 06:13:39 +0000 (0:00:00.456) 0:22:37.634 *******
2026-02-08 06:13:45.110853 | orchestrator | ok: [testbed-node-5]
2026-02-08 06:13:45.110863 | orchestrator |
2026-02-08 06:13:45.110874 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ******************************************
2026-02-08 06:13:45.110885 | orchestrator | Sunday 08 February 2026 06:13:39 +0000 (0:00:00.156) 0:22:37.791 *******
2026-02-08 06:13:45.110931 | orchestrator | ok: [testbed-node-5]
2026-02-08 06:13:45.110943 | orchestrator |
2026-02-08 06:13:45.110954 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] *********************
2026-02-08 06:13:45.110965 | orchestrator | Sunday 08 February 2026 06:13:39 +0000 (0:00:00.149) 0:22:37.940 *******
2026-02-08 06:13:45.110975 | orchestrator | ok: [testbed-node-5]
2026-02-08 06:13:45.110986 | orchestrator |
2026-02-08 06:13:45.110997 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] ***
2026-02-08 06:13:45.111009 | orchestrator | Sunday 08 February 2026 06:13:40 +0000 (0:00:00.175) 0:22:38.116 *******
2026-02-08 06:13:45.111022 | orchestrator | skipping: [testbed-node-5]
2026-02-08 06:13:45.111035 | orchestrator |
2026-02-08 06:13:45.111049 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ******************
2026-02-08 06:13:45.111061 | orchestrator | Sunday 08 February 2026 06:13:40 +0000 (0:00:00.495) 0:22:38.611 *******
2026-02-08 06:13:45.111074 | orchestrator | ok: [testbed-node-5]
2026-02-08 06:13:45.111087 | orchestrator |
2026-02-08 06:13:45.111099 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************
2026-02-08 06:13:45.111112 | orchestrator | Sunday 08 February 2026 06:13:40 +0000 (0:00:00.145) 0:22:38.757 *******
2026-02-08 06:13:45.111125 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-02-08 06:13:45.111137 | orchestrator | ok: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-08 06:13:45.111150 | orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-08 06:13:45.111164 | orchestrator |
2026-02-08 06:13:45.111176 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ********************************
2026-02-08 06:13:45.111190 | orchestrator | Sunday 08 February 2026 06:13:41 +0000 (0:00:00.703) 0:22:39.460 *******
2026-02-08 06:13:45.111203 | orchestrator | ok: [testbed-node-5]
2026-02-08 06:13:45.111215 | orchestrator |
2026-02-08 06:13:45.111228 | orchestrator | TASK [ceph-facts : Find a running mon container] *******************************
2026-02-08 06:13:45.111264 | orchestrator | Sunday 08 February 2026 06:13:41 +0000 (0:00:00.268) 0:22:39.729 *******
2026-02-08 06:13:45.111277 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-02-08 06:13:45.111305 | orchestrator | ok: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-08 06:13:45.111318 | orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-08 06:13:45.111330 | orchestrator |
2026-02-08 06:13:45.111343 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ********************************
2026-02-08 06:13:45.111356 | orchestrator | Sunday 08 February 2026 06:13:43 +0000 (0:00:01.916) 0:22:41.646 *******
2026-02-08 06:13:45.111369 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2026-02-08 06:13:45.111383 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2026-02-08 06:13:45.111393 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2026-02-08 06:13:45.111404 | orchestrator | skipping: [testbed-node-5]
2026-02-08 06:13:45.111415 | orchestrator |
2026-02-08 06:13:45.111426 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] *********************
2026-02-08 06:13:45.111437 | orchestrator | Sunday 08 February 2026 06:13:44 +0000 (0:00:00.435) 0:22:42.081 *******
2026-02-08 06:13:45.111449 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-02-08 06:13:45.111464 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-02-08 06:13:45.111494 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-02-08 06:13:45.111506 | orchestrator | skipping: [testbed-node-5]
2026-02-08 06:13:45.111518 | orchestrator |
2026-02-08 06:13:45.111529 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] ***********************
2026-02-08 06:13:45.111540 | orchestrator | Sunday 08 February 2026 06:13:44 +0000 (0:00:00.666) 0:22:42.748 *******
2026-02-08 06:13:45.111553 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-02-08 06:13:45.111567 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-02-08 06:13:45.111579 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-02-08 06:13:45.111590 | orchestrator | skipping: [testbed-node-5]
2026-02-08 06:13:45.111601 | orchestrator |
2026-02-08 06:13:45.111612 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] ***************************
2026-02-08 06:13:45.111631 | orchestrator | Sunday 08 February 2026 06:13:44 +0000 (0:00:00.171) 0:22:42.920 *******
2026-02-08 06:13:45.111644 | orchestrator | ok: [testbed-node-5] => (item={'changed': False, 'stdout': 'd0204e5b0336', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-02-08 06:13:42.235051', 'end': '2026-02-08 06:13:42.281227', 'delta': '0:00:00.046176', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['d0204e5b0336'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-02-08 06:13:45.111666 | orchestrator | ok: [testbed-node-5] => (item={'changed': False, 'stdout': 'c9ff2bec9773', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-02-08 06:13:42.799649', 'end': '2026-02-08 06:13:42.848049', 'delta': '0:00:00.048400', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['c9ff2bec9773'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-02-08 06:13:45.111685 | orchestrator | ok: [testbed-node-5] => (item={'changed': False, 'stdout': '26deff989c40', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-02-08 06:13:43.386165', 'end': '2026-02-08 06:13:43.435294', 'delta': '0:00:00.049129', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['26deff989c40'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-02-08 06:13:49.357389 | orchestrator |
2026-02-08 06:13:49.357498 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] *******************************
2026-02-08 06:13:49.357514 | orchestrator | Sunday 08 February 2026 06:13:45 +0000 (0:00:00.234) 0:22:43.154 *******
2026-02-08 06:13:49.357526 | orchestrator | ok: [testbed-node-5]
2026-02-08 06:13:49.357538 | orchestrator |
2026-02-08 06:13:49.357550 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] *************
2026-02-08 06:13:49.357561 | orchestrator | Sunday 08 February 2026 06:13:45 +0000 (0:00:00.289) 0:22:43.444 *******
2026-02-08 06:13:49.357572 | orchestrator | skipping: [testbed-node-5]
2026-02-08 06:13:49.357583 | orchestrator |
2026-02-08 06:13:49.357594 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] *********************************
2026-02-08 06:13:49.357605 | orchestrator | Sunday 08 February 2026 06:13:45 +0000 (0:00:00.273) 0:22:43.717 *******
2026-02-08 06:13:49.357616 | orchestrator | ok: [testbed-node-5]
2026-02-08 06:13:49.357627 | orchestrator |
2026-02-08 06:13:49.357638 | orchestrator | TASK [ceph-facts : Get current fsid] *******************************************
2026-02-08 06:13:49.357648 | orchestrator | Sunday 08 February 2026 06:13:45 +0000 (0:00:00.163) 0:22:43.880 *******
2026-02-08 06:13:49.357659 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)]
2026-02-08 06:13:49.357670 | orchestrator |
2026-02-08 06:13:49.357681 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-02-08 06:13:49.357692 | orchestrator | Sunday 08 February 2026 06:13:47 +0000 (0:00:01.330) 0:22:45.211 *******
2026-02-08 06:13:49.357703 | orchestrator | ok: [testbed-node-5]
2026-02-08 06:13:49.357714 | orchestrator |
2026-02-08 06:13:49.357749 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] ****************************
2026-02-08 06:13:49.357761 | orchestrator | Sunday 08 February 2026 06:13:47 +0000 (0:00:00.142) 0:22:45.353 *******
2026-02-08 06:13:49.357772 | orchestrator | skipping: [testbed-node-5]
2026-02-08 06:13:49.357783 | orchestrator |
2026-02-08 06:13:49.357794 | orchestrator | TASK [ceph-facts : Generate cluster fsid] **************************************
2026-02-08 06:13:49.357804 | orchestrator | Sunday 08 February 2026 06:13:47 +0000 (0:00:00.488) 0:22:45.842 *******
2026-02-08 06:13:49.357815 | orchestrator | skipping: [testbed-node-5]
2026-02-08 06:13:49.357826 | orchestrator |
2026-02-08 06:13:49.357837 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-02-08 06:13:49.357847 | orchestrator | Sunday 08 February 2026 06:13:48 +0000 (0:00:00.239) 0:22:46.081 *******
2026-02-08 06:13:49.357858 | orchestrator | skipping: [testbed-node-5]
2026-02-08 06:13:49.357869 | orchestrator |
2026-02-08 06:13:49.357880 | orchestrator | TASK [ceph-facts : Resolve device link(s)] *************************************
2026-02-08 06:13:49.357925 | orchestrator | Sunday 08 February 2026 06:13:48 +0000 (0:00:00.124) 0:22:46.206 *******
2026-02-08 06:13:49.357945 | orchestrator | skipping: [testbed-node-5]
2026-02-08 06:13:49.357964 | orchestrator |
2026-02-08 06:13:49.357983 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] **************
2026-02-08 06:13:49.358001 | orchestrator | Sunday 08 February 2026 06:13:48 +0000 (0:00:00.119) 0:22:46.326 *******
2026-02-08 06:13:49.358102 | orchestrator | ok: [testbed-node-5]
2026-02-08 06:13:49.358127 | orchestrator |
2026-02-08 06:13:49.358147 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] ***************************
2026-02-08 06:13:49.358167 | orchestrator | Sunday 08 February 2026 06:13:48 +0000 (0:00:00.184) 0:22:46.510 *******
2026-02-08 06:13:49.358187 | orchestrator | skipping: [testbed-node-5]
2026-02-08 06:13:49.358206 | orchestrator |
2026-02-08 06:13:49.358225 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] ****
2026-02-08 06:13:49.358245 | orchestrator | Sunday 08 February 2026 06:13:48 +0000 (0:00:00.127) 0:22:46.637 *******
2026-02-08 06:13:49.358264 | orchestrator | ok: [testbed-node-5]
2026-02-08 06:13:49.358282 | orchestrator |
2026-02-08 06:13:49.358296 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] ***********************
2026-02-08 06:13:49.358308 | orchestrator | Sunday 08 February 2026 06:13:48 +0000 (0:00:00.186) 0:22:46.823 *******
2026-02-08 06:13:49.358318 | orchestrator | skipping: [testbed-node-5]
2026-02-08 06:13:49.358329 | orchestrator |
2026-02-08 06:13:49.358339 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] ***
2026-02-08 06:13:49.358369 | orchestrator | Sunday 08 February 2026 06:13:48 +0000 (0:00:00.134) 0:22:46.958 *******
2026-02-08 06:13:49.358406 | orchestrator | ok: [testbed-node-5]
2026-02-08 06:13:49.358418 | orchestrator |
2026-02-08 06:13:49.358429 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************
2026-02-08 06:13:49.358439 | orchestrator | Sunday 08 February 2026 06:13:49 +0000 (0:00:00.229) 0:22:47.188 *******
2026-02-08 06:13:49.358452 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-08 06:13:49.358489 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--b3e05e81--e469--5668--9a53--5e8f92025307-osd--block--b3e05e81--e469--5668--9a53--5e8f92025307', 'dm-uuid-LVM-aSc4wxc22lxsBl3bsZYV01tD6GZC6hnhTrIfEw7ihzB97cb6a1fBLoGV7FMIqFCx'], 'uuids': ['1cd382bb-e631-457f-8e23-7bd2a48df188'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '88e353e1', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['TrIfEw-7ihz-B97c-b6a1-fBLo-GV7F-MIqFCx']}})
2026-02-08 06:13:49.358515 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_380fccde-fc16-4afd-8581-e221e230c62f', 'scsi-SQEMU_QEMU_HARDDISK_380fccde-fc16-4afd-8581-e221e230c62f'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '380fccde', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})
2026-02-08 06:13:49.358528 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-rTZdUO-b7Sr-jZr2-VH77-tbu4-uLLV-wifIFG', 'scsi-0QEMU_QEMU_HARDDISK_fd096023-3e18-4205-a743-fc49c7d9ed02', 'scsi-SQEMU_QEMU_HARDDISK_fd096023-3e18-4205-a743-fc49c7d9ed02'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'fd096023', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--7ad89cb8--326d--5a7d--8045--6e04c12be05a-osd--block--7ad89cb8--326d--5a7d--8045--6e04c12be05a']}})
2026-02-08 06:13:49.358540 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-08 06:13:49.358552 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-08 06:13:49.358570 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-08-02-32-42-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})
2026-02-08 06:13:49.358583 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-08 06:13:49.358594 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-UV2PuY-Xg8S-GZaH-izGQ-3g7w-DUil-CwJsOM', 'dm-uuid-CRYPT-LUKS2-acd8aeb3a91948899ba0cb5b1d4bdfb1-UV2PuY-Xg8S-GZaH-izGQ-3g7w-DUil-CwJsOM'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})
2026-02-08 06:13:49.358629 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-08 06:13:49.738292 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--7ad89cb8--326d--5a7d--8045--6e04c12be05a-osd--block--7ad89cb8--326d--5a7d--8045--6e04c12be05a', 'dm-uuid-LVM-rcT9E5PUfbc9LNMW28fCAUBBadlLkYxWUV2PuYXg8SGZaHizGQ3g7wDUilCwJsOM'], 'uuids': ['acd8aeb3-a919-4889-9ba0-cb5b1d4bdfb1'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'fd096023', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['UV2PuY-Xg8S-GZaH-izGQ-3g7w-DUil-CwJsOM']}})
2026-02-08 06:13:49.738414 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-0tVmcc-zy2m-2uFM-aZrn-CnbJ-B058-J5TU15', 'scsi-0QEMU_QEMU_HARDDISK_88e353e1-d5f5-455b-9174-972f0fde258a', 'scsi-SQEMU_QEMU_HARDDISK_88e353e1-d5f5-455b-9174-972f0fde258a'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '88e353e1', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--b3e05e81--e469--5668--9a53--5e8f92025307-osd--block--b3e05e81--e469--5668--9a53--5e8f92025307']}})
2026-02-08 06:13:49.738433 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-08 06:13:49.738471 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0b6d2541-fe07-44e8-aadf-a529695f9c1d', 'scsi-SQEMU_QEMU_HARDDISK_0b6d2541-fe07-44e8-aadf-a529695f9c1d'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '0b6d2541', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0b6d2541-fe07-44e8-aadf-a529695f9c1d-part16', 'scsi-SQEMU_QEMU_HARDDISK_0b6d2541-fe07-44e8-aadf-a529695f9c1d-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0b6d2541-fe07-44e8-aadf-a529695f9c1d-part14', 'scsi-SQEMU_QEMU_HARDDISK_0b6d2541-fe07-44e8-aadf-a529695f9c1d-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0b6d2541-fe07-44e8-aadf-a529695f9c1d-part15', 'scsi-SQEMU_QEMU_HARDDISK_0b6d2541-fe07-44e8-aadf-a529695f9c1d-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0b6d2541-fe07-44e8-aadf-a529695f9c1d-part1', 'scsi-SQEMU_QEMU_HARDDISK_0b6d2541-fe07-44e8-aadf-a529695f9c1d-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})
2026-02-08 06:13:49.738529 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-08 06:13:49.738543 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-08 06:13:49.738556 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-TrIfEw-7ihz-B97c-b6a1-fBLo-GV7F-MIqFCx', 'dm-uuid-CRYPT-LUKS2-1cd382bbe631457f8e237bd2a48df188-TrIfEw-7ihz-B97c-b6a1-fBLo-GV7F-MIqFCx'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})
2026-02-08 06:13:49.738569 | orchestrator | skipping: [testbed-node-5]
2026-02-08 06:13:49.738582 | orchestrator |
2026-02-08 06:13:49.738627 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] ***
2026-02-08 06:13:49.738640 | orchestrator | Sunday 08 February 2026 06:13:49 +0000 (0:00:00.360) 0:22:47.548 *******
2026-02-08 06:13:49.738653 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-08 06:13:49.738687 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--b3e05e81--e469--5668--9a53--5e8f92025307-osd--block--b3e05e81--e469--5668--9a53--5e8f92025307', 'dm-uuid-LVM-aSc4wxc22lxsBl3bsZYV01tD6GZC6hnhTrIfEw7ihzB97cb6a1fBLoGV7FMIqFCx'], 'uuids': ['1cd382bb-e631-457f-8e23-7bd2a48df188'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '88e353e1', 'removable': '0', 'support_discard':
'4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['TrIfEw-7ihz-B97c-b6a1-fBLo-GV7F-MIqFCx']}}, 'ansible_loop_var': 'item'})  2026-02-08 06:13:49.738709 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_380fccde-fc16-4afd-8581-e221e230c62f', 'scsi-SQEMU_QEMU_HARDDISK_380fccde-fc16-4afd-8581-e221e230c62f'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '380fccde', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-08 06:13:49.738731 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-rTZdUO-b7Sr-jZr2-VH77-tbu4-uLLV-wifIFG', 'scsi-0QEMU_QEMU_HARDDISK_fd096023-3e18-4205-a743-fc49c7d9ed02', 'scsi-SQEMU_QEMU_HARDDISK_fd096023-3e18-4205-a743-fc49c7d9ed02'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'fd096023', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--7ad89cb8--326d--5a7d--8045--6e04c12be05a-osd--block--7ad89cb8--326d--5a7d--8045--6e04c12be05a']}}, 'ansible_loop_var': 'item'})  2026-02-08 06:13:49.918483 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-08 06:13:49.918588 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-08 06:13:49.918624 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-08-02-32-42-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 
'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-08 06:13:49.918638 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-08 06:13:49.918678 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-UV2PuY-Xg8S-GZaH-izGQ-3g7w-DUil-CwJsOM', 'dm-uuid-CRYPT-LUKS2-acd8aeb3a91948899ba0cb5b1d4bdfb1-UV2PuY-Xg8S-GZaH-izGQ-3g7w-DUil-CwJsOM'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-08 06:13:49.918690 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 
'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-08 06:13:49.918722 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--7ad89cb8--326d--5a7d--8045--6e04c12be05a-osd--block--7ad89cb8--326d--5a7d--8045--6e04c12be05a', 'dm-uuid-LVM-rcT9E5PUfbc9LNMW28fCAUBBadlLkYxWUV2PuYXg8SGZaHizGQ3g7wDUilCwJsOM'], 'uuids': ['acd8aeb3-a919-4889-9ba0-cb5b1d4bdfb1'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'fd096023', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['UV2PuY-Xg8S-GZaH-izGQ-3g7w-DUil-CwJsOM']}}, 'ansible_loop_var': 'item'})  2026-02-08 06:13:49.918742 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-0tVmcc-zy2m-2uFM-aZrn-CnbJ-B058-J5TU15', 'scsi-0QEMU_QEMU_HARDDISK_88e353e1-d5f5-455b-9174-972f0fde258a', 'scsi-SQEMU_QEMU_HARDDISK_88e353e1-d5f5-455b-9174-972f0fde258a'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '88e353e1', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 
'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--b3e05e81--e469--5668--9a53--5e8f92025307-osd--block--b3e05e81--e469--5668--9a53--5e8f92025307']}}, 'ansible_loop_var': 'item'})  2026-02-08 06:13:49.918757 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-08 06:13:49.918787 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0b6d2541-fe07-44e8-aadf-a529695f9c1d', 'scsi-SQEMU_QEMU_HARDDISK_0b6d2541-fe07-44e8-aadf-a529695f9c1d'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '0b6d2541', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0b6d2541-fe07-44e8-aadf-a529695f9c1d-part16', 'scsi-SQEMU_QEMU_HARDDISK_0b6d2541-fe07-44e8-aadf-a529695f9c1d-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_0b6d2541-fe07-44e8-aadf-a529695f9c1d-part14', 'scsi-SQEMU_QEMU_HARDDISK_0b6d2541-fe07-44e8-aadf-a529695f9c1d-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0b6d2541-fe07-44e8-aadf-a529695f9c1d-part15', 'scsi-SQEMU_QEMU_HARDDISK_0b6d2541-fe07-44e8-aadf-a529695f9c1d-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0b6d2541-fe07-44e8-aadf-a529695f9c1d-part1', 'scsi-SQEMU_QEMU_HARDDISK_0b6d2541-fe07-44e8-aadf-a529695f9c1d-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-08 06:13:59.233807 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-08 06:13:59.233948 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-08 06:13:59.233962 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-TrIfEw-7ihz-B97c-b6a1-fBLo-GV7F-MIqFCx', 'dm-uuid-CRYPT-LUKS2-1cd382bbe631457f8e237bd2a48df188-TrIfEw-7ihz-B97c-b6a1-fBLo-GV7F-MIqFCx'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 
'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-08 06:13:59.233988 | orchestrator | skipping: [testbed-node-5]
2026-02-08 06:13:59.233997 | orchestrator |
2026-02-08 06:13:59.234006 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ******************************
2026-02-08 06:13:59.234055 | orchestrator | Sunday 08 February 2026 06:13:49 +0000 (0:00:00.494) 0:22:47.959 *******
2026-02-08 06:13:59.234063 | orchestrator | ok: [testbed-node-5]
2026-02-08 06:13:59.234072 | orchestrator |
2026-02-08 06:13:59.234079 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] ***************
2026-02-08 06:13:59.234088 | orchestrator | Sunday 08 February 2026 06:13:50 +0000 (0:00:00.131) 0:22:48.454 *******
2026-02-08 06:13:59.234092 | orchestrator | ok: [testbed-node-5]
2026-02-08 06:13:59.234099 | orchestrator |
2026-02-08 06:13:59.234106 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-02-08 06:13:59.234113 | orchestrator | Sunday 08 February 2026 06:13:50 +0000 (0:00:00.841) 0:22:48.585 *******
2026-02-08 06:13:59.234120 | orchestrator | ok: [testbed-node-5]
2026-02-08 06:13:59.234127 | orchestrator |
2026-02-08 06:13:59.234133 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-02-08 06:13:59.234138 | orchestrator | Sunday 08 February 2026 06:13:51 +0000 (0:00:00.158) 0:22:49.427 *******
2026-02-08 06:13:59.234142 | orchestrator | skipping: [testbed-node-5]
2026-02-08 06:13:59.234146 | orchestrator |
2026-02-08 06:13:59.234151 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-02-08 06:13:59.234155 | orchestrator | Sunday 08 February 2026 06:13:51 +0000 (0:00:00.250) 0:22:49.585 *******
2026-02-08 06:13:59.234160 | orchestrator | skipping: [testbed-node-5]
2026-02-08 06:13:59.234164 | orchestrator |
2026-02-08 06:13:59.234169 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-02-08 06:13:59.234173 | orchestrator | Sunday 08 February 2026 06:13:51 +0000 (0:00:00.250) 0:22:49.836 *******
2026-02-08 06:13:59.234177 | orchestrator | skipping: [testbed-node-5]
2026-02-08 06:13:59.234182 | orchestrator |
2026-02-08 06:13:59.234186 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] *************************
2026-02-08 06:13:59.234190 | orchestrator | Sunday 08 February 2026 06:13:51 +0000 (0:00:00.169) 0:22:50.006 *******
2026-02-08 06:13:59.234195 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0)
2026-02-08 06:13:59.234200 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1)
2026-02-08 06:13:59.234204 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2)
2026-02-08 06:13:59.234209 | orchestrator |
2026-02-08 06:13:59.234213 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] *************************
2026-02-08 06:13:59.234217 | orchestrator | Sunday 08 February 2026 06:13:52 +0000 (0:00:00.712) 0:22:50.718 *******
2026-02-08 06:13:59.234222 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2026-02-08 06:13:59.234227 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2026-02-08 06:13:59.234231 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2026-02-08 06:13:59.234235 | orchestrator | skipping: [testbed-node-5]
2026-02-08 06:13:59.234239 | orchestrator |
2026-02-08 06:13:59.234244 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] ***********************
2026-02-08 06:13:59.234248 | orchestrator | Sunday 08 February 2026 06:13:52 +0000 (0:00:00.177) 0:22:50.895 *******
2026-02-08 06:13:59.234265 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-5
2026-02-08 06:13:59.234276 | orchestrator |
2026-02-08 06:13:59.234281 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-02-08 06:13:59.234287 | orchestrator | Sunday 08 February 2026 06:13:53 +0000 (0:00:00.239) 0:22:51.135 *******
2026-02-08 06:13:59.234291 | orchestrator | skipping: [testbed-node-5]
2026-02-08 06:13:59.234295 | orchestrator |
2026-02-08 06:13:59.234300 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-02-08 06:13:59.234304 | orchestrator | Sunday 08 February 2026 06:13:53 +0000 (0:00:00.155) 0:22:51.290 *******
2026-02-08 06:13:59.234308 | orchestrator | skipping: [testbed-node-5]
2026-02-08 06:13:59.234313 | orchestrator |
2026-02-08 06:13:59.234382 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-02-08 06:13:59.234393 | orchestrator | Sunday 08 February 2026 06:13:53 +0000 (0:00:00.146) 0:22:51.437 *******
2026-02-08 06:13:59.234400 | orchestrator | skipping: [testbed-node-5]
2026-02-08 06:13:59.234407 | orchestrator |
2026-02-08 06:13:59.234414 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-02-08 06:13:59.234419 | orchestrator | Sunday 08 February 2026 06:13:53 +0000 (0:00:00.154) 0:22:51.592 *******
2026-02-08 06:13:59.234424 | orchestrator | ok: [testbed-node-5]
2026-02-08 06:13:59.234429 | orchestrator |
2026-02-08 06:13:59.234434 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-02-08 06:13:59.234439 | orchestrator | Sunday 08 February 2026 06:13:53 +0000 (0:00:00.253) 0:22:51.845 *******
2026-02-08 06:13:59.234447 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)
2026-02-08 06:13:59.234452 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)
2026-02-08 06:13:59.234457 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)
2026-02-08 06:13:59.234462 | orchestrator | skipping: [testbed-node-5]
2026-02-08 06:13:59.234467 | orchestrator |
2026-02-08 06:13:59.234472 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-02-08 06:13:59.234477 | orchestrator | Sunday 08 February 2026 06:13:54 +0000 (0:00:00.724) 0:22:52.569 *******
2026-02-08 06:13:59.234482 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)
2026-02-08 06:13:59.234487 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)
2026-02-08 06:13:59.234492 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)
2026-02-08 06:13:59.234497 | orchestrator | skipping: [testbed-node-5]
2026-02-08 06:13:59.234503 | orchestrator |
2026-02-08 06:13:59.234508 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-02-08 06:13:59.234513 | orchestrator | Sunday 08 February 2026 06:13:55 +0000 (0:00:01.145) 0:22:53.715 *******
2026-02-08 06:13:59.234517 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)
2026-02-08 06:13:59.234521 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)
2026-02-08 06:13:59.234526 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)
2026-02-08 06:13:59.234530 | orchestrator | skipping: [testbed-node-5]
2026-02-08 06:13:59.234534 | orchestrator |
2026-02-08 06:13:59.234538 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-02-08 06:13:59.234543 | orchestrator | Sunday 08 February 2026 06:13:56 +0000 (0:00:00.448) 0:22:54.164 *******
2026-02-08 06:13:59.234547 | orchestrator | ok: [testbed-node-5]
2026-02-08 06:13:59.234551 | orchestrator |
2026-02-08 06:13:59.234555 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-02-08 06:13:59.234560 | orchestrator | Sunday 08 February 2026 06:13:56 +0000 (0:00:00.173) 0:22:54.338 *******
2026-02-08 06:13:59.234564 | orchestrator | ok: [testbed-node-5] => (item=0)
2026-02-08 06:13:59.234568 | orchestrator |
2026-02-08 06:13:59.234572 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] **************************************
2026-02-08 06:13:59.234577 | orchestrator | Sunday 08 February 2026 06:13:56 +0000 (0:00:00.358) 0:22:54.696 *******
2026-02-08 06:13:59.234581 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-02-08 06:13:59.234590 | orchestrator | ok: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-08 06:13:59.234594 | orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-08 06:13:59.234598 | orchestrator | ok: [testbed-node-5 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
2026-02-08 06:13:59.234603 | orchestrator | ok: [testbed-node-5 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-02-08 06:13:59.234607 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-5)
2026-02-08 06:13:59.234611 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-02-08 06:13:59.234616 | orchestrator |
2026-02-08 06:13:59.234620 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ********************************
2026-02-08 06:13:59.234624 | orchestrator | Sunday 08 February 2026 06:13:57 +0000 (0:00:00.853) 0:22:55.549 *******
2026-02-08 06:13:59.234629 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-02-08 06:13:59.234633 | orchestrator | ok: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-08 06:13:59.234637 | orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-08 06:13:59.234641 | orchestrator | ok: [testbed-node-5 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
2026-02-08 06:13:59.234646 | orchestrator | ok: [testbed-node-5 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-02-08 06:13:59.234650 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-5)
2026-02-08 06:13:59.234654 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-02-08 06:13:59.234659 | orchestrator |
2026-02-08 06:13:59.234667 | orchestrator | TASK [Prevent restart from the packaging] **************************************
2026-02-08 06:14:10.826152 | orchestrator | Sunday 08 February 2026 06:13:59 +0000 (0:00:01.725) 0:22:57.275 *******
2026-02-08 06:14:10.826272 | orchestrator | skipping: [testbed-node-5]
2026-02-08 06:14:10.826293 | orchestrator |
2026-02-08 06:14:10.826310 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-02-08 06:14:10.826326 | orchestrator | Sunday 08 February 2026 06:13:59 +0000 (0:00:00.146) 0:22:57.421 *******
2026-02-08 06:14:10.826342 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-5
2026-02-08 06:14:10.826357 | orchestrator |
2026-02-08 06:14:10.826388 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-02-08 06:14:10.826402 | orchestrator | Sunday 08 February 2026 06:13:59 +0000 (0:00:00.203) 0:22:57.625 *******
2026-02-08 06:14:10.826428 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-5
2026-02-08 06:14:10.826444 | orchestrator |
2026-02-08 06:14:10.826459 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-02-08 06:14:10.826474 | orchestrator | Sunday 08 February 2026 06:13:59 +0000 (0:00:00.211) 0:22:57.837 *******
2026-02-08 06:14:10.826489 | orchestrator | skipping: [testbed-node-5]
2026-02-08 06:14:10.826504 | orchestrator |
2026-02-08 06:14:10.826519 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-02-08 06:14:10.826535 | orchestrator | Sunday 08 February 2026 06:13:59 +0000 (0:00:00.138) 0:22:57.975 *******
2026-02-08 06:14:10.826549 | orchestrator | ok: [testbed-node-5]
2026-02-08 06:14:10.826564 | orchestrator |
2026-02-08 06:14:10.826598 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-02-08 06:14:10.826614 | orchestrator | Sunday 08 February 2026 06:14:00 +0000 (0:00:00.871) 0:22:58.848 *******
2026-02-08 06:14:10.826630 | orchestrator | ok: [testbed-node-5]
2026-02-08 06:14:10.826647 | orchestrator |
2026-02-08 06:14:10.826661 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-02-08 06:14:10.826677 | orchestrator | Sunday 08 February 2026 06:14:01 +0000 (0:00:00.539) 0:22:59.387 *******
2026-02-08 06:14:10.826693 | orchestrator | ok: [testbed-node-5]
2026-02-08 06:14:10.826736 | orchestrator |
2026-02-08 06:14:10.826755 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-02-08 06:14:10.826770 | orchestrator | Sunday 08 February 2026 06:14:01 +0000 (0:00:00.552) 0:22:59.939 *******
2026-02-08 06:14:10.826786 | orchestrator | skipping: [testbed-node-5]
2026-02-08 06:14:10.826801 | orchestrator |
2026-02-08 06:14:10.826818 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-02-08 06:14:10.826833 | orchestrator | Sunday 08 February 2026 06:14:02 +0000 (0:00:00.150) 0:23:00.090 *******
2026-02-08 06:14:10.826848 | orchestrator | skipping: [testbed-node-5]
2026-02-08 06:14:10.826891 | orchestrator |
2026-02-08 06:14:10.826927 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-02-08 06:14:10.826944 | orchestrator | Sunday 08 February 2026 06:14:02 +0000 (0:00:00.154) 0:23:00.244 *******
2026-02-08 06:14:10.826959 | orchestrator | skipping: [testbed-node-5]
2026-02-08 06:14:10.826975 | orchestrator |
2026-02-08 06:14:10.826989 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-02-08 06:14:10.827003 | orchestrator | Sunday 08 February 2026 06:14:02 +0000 (0:00:00.130) 0:23:00.375 *******
2026-02-08 06:14:10.827018 | orchestrator | ok: [testbed-node-5]
2026-02-08 06:14:10.827031 | orchestrator |
2026-02-08 06:14:10.827045 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-02-08 06:14:10.827059 | orchestrator | Sunday 08 February 2026 06:14:02 +0000 (0:00:00.537) 0:23:00.913 *******
2026-02-08 06:14:10.827075 | orchestrator | ok: [testbed-node-5]
2026-02-08 06:14:10.827089 | orchestrator |
2026-02-08 06:14:10.827104 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-02-08 06:14:10.827118 | orchestrator | Sunday 08 February 2026 06:14:03 +0000 (0:00:00.535) 0:23:01.448 *******
2026-02-08 06:14:10.827132 | orchestrator | skipping: [testbed-node-5]
2026-02-08 06:14:10.827145 | orchestrator |
2026-02-08 06:14:10.827161 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-02-08 06:14:10.827176 | orchestrator | Sunday 08 February 2026 06:14:03 +0000 (0:00:00.138) 0:23:01.587 *******
2026-02-08 06:14:10.827190 | orchestrator | skipping: [testbed-node-5]
2026-02-08 06:14:10.827205 | orchestrator |
2026-02-08 06:14:10.827218 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-02-08 06:14:10.827232 | orchestrator | Sunday 08 February 2026 06:14:03 +0000 (0:00:00.137) 0:23:01.724 *******
2026-02-08 06:14:10.827246 | orchestrator | ok: [testbed-node-5]
2026-02-08 06:14:10.827261 | orchestrator |
2026-02-08 06:14:10.827275 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-02-08 06:14:10.827291 | orchestrator | Sunday 08 February 2026 06:14:03 +0000 (0:00:00.174) 0:23:01.898 *******
2026-02-08 06:14:10.827304 | orchestrator | ok: [testbed-node-5]
2026-02-08 06:14:10.827319 | orchestrator |
2026-02-08 06:14:10.827335 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-02-08 06:14:10.827350 | orchestrator | Sunday 08 February 2026 06:14:04 +0000 (0:00:00.156) 0:23:02.055 *******
2026-02-08 06:14:10.827363 | orchestrator | ok: [testbed-node-5]
2026-02-08 06:14:10.827377 | orchestrator |
2026-02-08 06:14:10.827392 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-02-08 06:14:10.827408 | orchestrator | Sunday 08 February 2026 06:14:04 +0000 (0:00:00.146) 0:23:02.202 *******
2026-02-08 06:14:10.827423 | orchestrator | skipping: [testbed-node-5]
2026-02-08 06:14:10.827436 | orchestrator |
2026-02-08 06:14:10.827452 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-02-08 06:14:10.827466 | orchestrator | Sunday 08 February 2026 06:14:04 +0000 (0:00:00.470) 0:23:02.673 *******
2026-02-08 06:14:10.827481 | orchestrator | skipping: [testbed-node-5]
2026-02-08 06:14:10.827495 | orchestrator |
2026-02-08 06:14:10.827510 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-02-08 06:14:10.827526 | orchestrator | Sunday 08 February 2026 06:14:04 +0000 (0:00:00.134) 0:23:02.807 *******
2026-02-08 06:14:10.827540 | orchestrator | skipping: [testbed-node-5]
2026-02-08 06:14:10.827567 | orchestrator |
2026-02-08 06:14:10.827607 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-02-08 06:14:10.827623 | orchestrator | Sunday 08 February 2026 06:14:04 +0000 (0:00:00.142) 0:23:02.950 *******
2026-02-08 06:14:10.827635 | orchestrator | ok: [testbed-node-5]
2026-02-08 06:14:10.827649 | orchestrator | 2026-02-08 
06:14:10.827662 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-02-08 06:14:10.827675 | orchestrator | Sunday 08 February 2026 06:14:05 +0000 (0:00:00.193) 0:23:03.143 ******* 2026-02-08 06:14:10.827687 | orchestrator | ok: [testbed-node-5] 2026-02-08 06:14:10.827700 | orchestrator | 2026-02-08 06:14:10.827712 | orchestrator | TASK [ceph-common : Include configure_repository.yml] ************************** 2026-02-08 06:14:10.827725 | orchestrator | Sunday 08 February 2026 06:14:05 +0000 (0:00:00.233) 0:23:03.376 ******* 2026-02-08 06:14:10.827738 | orchestrator | skipping: [testbed-node-5] 2026-02-08 06:14:10.827751 | orchestrator | 2026-02-08 06:14:10.827763 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] ************** 2026-02-08 06:14:10.827776 | orchestrator | Sunday 08 February 2026 06:14:05 +0000 (0:00:00.136) 0:23:03.513 ******* 2026-02-08 06:14:10.827789 | orchestrator | skipping: [testbed-node-5] 2026-02-08 06:14:10.827802 | orchestrator | 2026-02-08 06:14:10.827815 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] **************** 2026-02-08 06:14:10.827828 | orchestrator | Sunday 08 February 2026 06:14:05 +0000 (0:00:00.124) 0:23:03.638 ******* 2026-02-08 06:14:10.827840 | orchestrator | skipping: [testbed-node-5] 2026-02-08 06:14:10.827854 | orchestrator | 2026-02-08 06:14:10.827868 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ******************** 2026-02-08 06:14:10.827881 | orchestrator | Sunday 08 February 2026 06:14:05 +0000 (0:00:00.123) 0:23:03.761 ******* 2026-02-08 06:14:10.827938 | orchestrator | skipping: [testbed-node-5] 2026-02-08 06:14:10.827948 | orchestrator | 2026-02-08 06:14:10.827956 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] *************** 2026-02-08 06:14:10.827964 | orchestrator | Sunday 08 February 2026 06:14:05 +0000 (0:00:00.142) 
0:23:03.904 ******* 2026-02-08 06:14:10.827972 | orchestrator | skipping: [testbed-node-5] 2026-02-08 06:14:10.827980 | orchestrator | 2026-02-08 06:14:10.827988 | orchestrator | TASK [ceph-common : Get ceph version] ****************************************** 2026-02-08 06:14:10.827995 | orchestrator | Sunday 08 February 2026 06:14:05 +0000 (0:00:00.141) 0:23:04.046 ******* 2026-02-08 06:14:10.828003 | orchestrator | skipping: [testbed-node-5] 2026-02-08 06:14:10.828011 | orchestrator | 2026-02-08 06:14:10.828019 | orchestrator | TASK [ceph-common : Set_fact ceph_version] ************************************* 2026-02-08 06:14:10.828027 | orchestrator | Sunday 08 February 2026 06:14:06 +0000 (0:00:00.127) 0:23:04.173 ******* 2026-02-08 06:14:10.828034 | orchestrator | skipping: [testbed-node-5] 2026-02-08 06:14:10.828043 | orchestrator | 2026-02-08 06:14:10.828051 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] *** 2026-02-08 06:14:10.828060 | orchestrator | Sunday 08 February 2026 06:14:06 +0000 (0:00:00.130) 0:23:04.304 ******* 2026-02-08 06:14:10.828068 | orchestrator | skipping: [testbed-node-5] 2026-02-08 06:14:10.828076 | orchestrator | 2026-02-08 06:14:10.828083 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] ************************* 2026-02-08 06:14:10.828091 | orchestrator | Sunday 08 February 2026 06:14:06 +0000 (0:00:00.122) 0:23:04.427 ******* 2026-02-08 06:14:10.828099 | orchestrator | skipping: [testbed-node-5] 2026-02-08 06:14:10.828107 | orchestrator | 2026-02-08 06:14:10.828115 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************ 2026-02-08 06:14:10.828122 | orchestrator | Sunday 08 February 2026 06:14:06 +0000 (0:00:00.447) 0:23:04.874 ******* 2026-02-08 06:14:10.828130 | orchestrator | skipping: [testbed-node-5] 2026-02-08 06:14:10.828138 | orchestrator | 2026-02-08 06:14:10.828146 | orchestrator | TASK [ceph-common : 
Include configure_memory_allocator.yml] ******************** 2026-02-08 06:14:10.828154 | orchestrator | Sunday 08 February 2026 06:14:06 +0000 (0:00:00.145) 0:23:05.020 ******* 2026-02-08 06:14:10.828170 | orchestrator | skipping: [testbed-node-5] 2026-02-08 06:14:10.828178 | orchestrator | 2026-02-08 06:14:10.828186 | orchestrator | TASK [ceph-common : Include selinux.yml] *************************************** 2026-02-08 06:14:10.828194 | orchestrator | Sunday 08 February 2026 06:14:07 +0000 (0:00:00.125) 0:23:05.145 ******* 2026-02-08 06:14:10.828201 | orchestrator | skipping: [testbed-node-5] 2026-02-08 06:14:10.828209 | orchestrator | 2026-02-08 06:14:10.828217 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] *************** 2026-02-08 06:14:10.828225 | orchestrator | Sunday 08 February 2026 06:14:07 +0000 (0:00:00.212) 0:23:05.358 ******* 2026-02-08 06:14:10.828233 | orchestrator | ok: [testbed-node-5] 2026-02-08 06:14:10.828241 | orchestrator | 2026-02-08 06:14:10.828248 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ****************************** 2026-02-08 06:14:10.828256 | orchestrator | Sunday 08 February 2026 06:14:08 +0000 (0:00:00.992) 0:23:06.350 ******* 2026-02-08 06:14:10.828264 | orchestrator | ok: [testbed-node-5] 2026-02-08 06:14:10.828272 | orchestrator | 2026-02-08 06:14:10.828280 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] *********************** 2026-02-08 06:14:10.828288 | orchestrator | Sunday 08 February 2026 06:14:09 +0000 (0:00:01.206) 0:23:07.557 ******* 2026-02-08 06:14:10.828295 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-5 2026-02-08 06:14:10.828304 | orchestrator | 2026-02-08 06:14:10.828311 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************ 2026-02-08 06:14:10.828319 | orchestrator | Sunday 08 February 2026 06:14:09 +0000 (0:00:00.211) 
0:23:07.768 ******* 2026-02-08 06:14:10.828327 | orchestrator | skipping: [testbed-node-5] 2026-02-08 06:14:10.828335 | orchestrator | 2026-02-08 06:14:10.828343 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] **************** 2026-02-08 06:14:10.828350 | orchestrator | Sunday 08 February 2026 06:14:09 +0000 (0:00:00.132) 0:23:07.901 ******* 2026-02-08 06:14:10.828358 | orchestrator | skipping: [testbed-node-5] 2026-02-08 06:14:10.828366 | orchestrator | 2026-02-08 06:14:10.828374 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] ************************** 2026-02-08 06:14:10.828382 | orchestrator | Sunday 08 February 2026 06:14:09 +0000 (0:00:00.142) 0:23:08.043 ******* 2026-02-08 06:14:10.828390 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-02-08 06:14:10.828407 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-02-08 06:14:25.970997 | orchestrator | 2026-02-08 06:14:25.971142 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ******************** 2026-02-08 06:14:25.971170 | orchestrator | Sunday 08 February 2026 06:14:10 +0000 (0:00:00.820) 0:23:08.863 ******* 2026-02-08 06:14:25.971183 | orchestrator | ok: [testbed-node-5] 2026-02-08 06:14:25.971194 | orchestrator | 2026-02-08 06:14:25.971207 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************ 2026-02-08 06:14:25.971222 | orchestrator | Sunday 08 February 2026 06:14:11 +0000 (0:00:00.469) 0:23:09.333 ******* 2026-02-08 06:14:25.971240 | orchestrator | skipping: [testbed-node-5] 2026-02-08 06:14:25.971258 | orchestrator | 2026-02-08 06:14:25.971275 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ******************** 2026-02-08 06:14:25.971291 | orchestrator | Sunday 08 February 2026 06:14:11 +0000 (0:00:00.472) 0:23:09.805 ******* 2026-02-08 06:14:25.971315 | 
orchestrator | skipping: [testbed-node-5] 2026-02-08 06:14:25.971333 | orchestrator | 2026-02-08 06:14:25.971349 | orchestrator | TASK [ceph-container-common : Include registry.yml] **************************** 2026-02-08 06:14:25.971364 | orchestrator | Sunday 08 February 2026 06:14:11 +0000 (0:00:00.175) 0:23:09.981 ******* 2026-02-08 06:14:25.971381 | orchestrator | skipping: [testbed-node-5] 2026-02-08 06:14:25.971397 | orchestrator | 2026-02-08 06:14:25.971413 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] ************************* 2026-02-08 06:14:25.971429 | orchestrator | Sunday 08 February 2026 06:14:12 +0000 (0:00:00.150) 0:23:10.131 ******* 2026-02-08 06:14:25.971464 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-5 2026-02-08 06:14:25.971509 | orchestrator | 2026-02-08 06:14:25.971527 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ******************** 2026-02-08 06:14:25.971543 | orchestrator | Sunday 08 February 2026 06:14:12 +0000 (0:00:00.199) 0:23:10.331 ******* 2026-02-08 06:14:25.971559 | orchestrator | ok: [testbed-node-5] 2026-02-08 06:14:25.971576 | orchestrator | 2026-02-08 06:14:25.971594 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] *** 2026-02-08 06:14:25.971611 | orchestrator | Sunday 08 February 2026 06:14:12 +0000 (0:00:00.703) 0:23:11.034 ******* 2026-02-08 06:14:25.971627 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-02-08 06:14:25.971645 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/prometheus:v2.7.2)  2026-02-08 06:14:25.971661 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/grafana/grafana:6.7.4)  2026-02-08 06:14:25.971677 | orchestrator | skipping: [testbed-node-5] 2026-02-08 06:14:25.971695 | orchestrator | 2026-02-08 06:14:25.971712 | orchestrator | TASK [ceph-container-common 
: Pulling node-exporter container image] *********** 2026-02-08 06:14:25.971728 | orchestrator | Sunday 08 February 2026 06:14:13 +0000 (0:00:00.168) 0:23:11.203 ******* 2026-02-08 06:14:25.971745 | orchestrator | skipping: [testbed-node-5] 2026-02-08 06:14:25.971762 | orchestrator | 2026-02-08 06:14:25.971778 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] ********************* 2026-02-08 06:14:25.971793 | orchestrator | Sunday 08 February 2026 06:14:13 +0000 (0:00:00.142) 0:23:11.345 ******* 2026-02-08 06:14:25.971808 | orchestrator | skipping: [testbed-node-5] 2026-02-08 06:14:25.971824 | orchestrator | 2026-02-08 06:14:25.971838 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************ 2026-02-08 06:14:25.971851 | orchestrator | Sunday 08 February 2026 06:14:13 +0000 (0:00:00.167) 0:23:11.512 ******* 2026-02-08 06:14:25.971867 | orchestrator | skipping: [testbed-node-5] 2026-02-08 06:14:25.971925 | orchestrator | 2026-02-08 06:14:25.971942 | orchestrator | TASK [ceph-container-common : Load ceph dev image] ***************************** 2026-02-08 06:14:25.971958 | orchestrator | Sunday 08 February 2026 06:14:13 +0000 (0:00:00.154) 0:23:11.666 ******* 2026-02-08 06:14:25.971974 | orchestrator | skipping: [testbed-node-5] 2026-02-08 06:14:25.971991 | orchestrator | 2026-02-08 06:14:25.972008 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ****************** 2026-02-08 06:14:25.972024 | orchestrator | Sunday 08 February 2026 06:14:13 +0000 (0:00:00.170) 0:23:11.837 ******* 2026-02-08 06:14:25.972039 | orchestrator | skipping: [testbed-node-5] 2026-02-08 06:14:25.972055 | orchestrator | 2026-02-08 06:14:25.972071 | orchestrator | TASK [ceph-container-common : Get ceph version] ******************************** 2026-02-08 06:14:25.972086 | orchestrator | Sunday 08 February 2026 06:14:13 +0000 (0:00:00.165) 0:23:12.003 ******* 2026-02-08 06:14:25.972102 | orchestrator | 
ok: [testbed-node-5] 2026-02-08 06:14:25.972119 | orchestrator | 2026-02-08 06:14:25.972135 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] *** 2026-02-08 06:14:25.972151 | orchestrator | Sunday 08 February 2026 06:14:15 +0000 (0:00:01.459) 0:23:13.462 ******* 2026-02-08 06:14:25.972162 | orchestrator | ok: [testbed-node-5] 2026-02-08 06:14:25.972175 | orchestrator | 2026-02-08 06:14:25.972191 | orchestrator | TASK [ceph-container-common : Include release.yml] ***************************** 2026-02-08 06:14:25.972207 | orchestrator | Sunday 08 February 2026 06:14:15 +0000 (0:00:00.476) 0:23:13.938 ******* 2026-02-08 06:14:25.972224 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-5 2026-02-08 06:14:25.972241 | orchestrator | 2026-02-08 06:14:25.972258 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] ********************* 2026-02-08 06:14:25.972274 | orchestrator | Sunday 08 February 2026 06:14:16 +0000 (0:00:00.234) 0:23:14.173 ******* 2026-02-08 06:14:25.972290 | orchestrator | skipping: [testbed-node-5] 2026-02-08 06:14:25.972307 | orchestrator | 2026-02-08 06:14:25.972324 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ******************** 2026-02-08 06:14:25.972341 | orchestrator | Sunday 08 February 2026 06:14:16 +0000 (0:00:00.130) 0:23:14.304 ******* 2026-02-08 06:14:25.972378 | orchestrator | skipping: [testbed-node-5] 2026-02-08 06:14:25.972391 | orchestrator | 2026-02-08 06:14:25.972400 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ****************** 2026-02-08 06:14:25.972410 | orchestrator | Sunday 08 February 2026 06:14:16 +0000 (0:00:00.152) 0:23:14.456 ******* 2026-02-08 06:14:25.972419 | orchestrator | skipping: [testbed-node-5] 2026-02-08 06:14:25.972429 | orchestrator | 2026-02-08 06:14:25.972439 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release 
mimic] ********************* 2026-02-08 06:14:25.972472 | orchestrator | Sunday 08 February 2026 06:14:16 +0000 (0:00:00.135) 0:23:14.592 ******* 2026-02-08 06:14:25.972483 | orchestrator | skipping: [testbed-node-5] 2026-02-08 06:14:25.972493 | orchestrator | 2026-02-08 06:14:25.972503 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ****************** 2026-02-08 06:14:25.972512 | orchestrator | Sunday 08 February 2026 06:14:16 +0000 (0:00:00.147) 0:23:14.739 ******* 2026-02-08 06:14:25.972522 | orchestrator | skipping: [testbed-node-5] 2026-02-08 06:14:25.972532 | orchestrator | 2026-02-08 06:14:25.972542 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] ******************* 2026-02-08 06:14:25.972551 | orchestrator | Sunday 08 February 2026 06:14:16 +0000 (0:00:00.162) 0:23:14.902 ******* 2026-02-08 06:14:25.972561 | orchestrator | skipping: [testbed-node-5] 2026-02-08 06:14:25.972570 | orchestrator | 2026-02-08 06:14:25.972580 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] ******************* 2026-02-08 06:14:25.972596 | orchestrator | Sunday 08 February 2026 06:14:17 +0000 (0:00:00.198) 0:23:15.100 ******* 2026-02-08 06:14:25.972612 | orchestrator | skipping: [testbed-node-5] 2026-02-08 06:14:25.972628 | orchestrator | 2026-02-08 06:14:25.972643 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ******************** 2026-02-08 06:14:25.972658 | orchestrator | Sunday 08 February 2026 06:14:17 +0000 (0:00:00.157) 0:23:15.258 ******* 2026-02-08 06:14:25.972673 | orchestrator | skipping: [testbed-node-5] 2026-02-08 06:14:25.972690 | orchestrator | 2026-02-08 06:14:25.972705 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] ********************** 2026-02-08 06:14:25.972720 | orchestrator | Sunday 08 February 2026 06:14:17 +0000 (0:00:00.162) 0:23:15.420 ******* 2026-02-08 06:14:25.972747 | orchestrator | ok: [testbed-node-5] 
2026-02-08 06:14:25.972763 | orchestrator | 2026-02-08 06:14:25.972779 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] ********************** 2026-02-08 06:14:25.972794 | orchestrator | Sunday 08 February 2026 06:14:17 +0000 (0:00:00.230) 0:23:15.651 ******* 2026-02-08 06:14:25.972810 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-5 2026-02-08 06:14:25.972828 | orchestrator | 2026-02-08 06:14:25.972846 | orchestrator | TASK [ceph-config : Create ceph initial directories] *************************** 2026-02-08 06:14:25.972863 | orchestrator | Sunday 08 February 2026 06:14:18 +0000 (0:00:00.533) 0:23:16.184 ******* 2026-02-08 06:14:25.972880 | orchestrator | ok: [testbed-node-5] => (item=/etc/ceph) 2026-02-08 06:14:25.972896 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/) 2026-02-08 06:14:25.972955 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/mon) 2026-02-08 06:14:25.972970 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/osd) 2026-02-08 06:14:25.972985 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/mds) 2026-02-08 06:14:25.973001 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/tmp) 2026-02-08 06:14:25.973016 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/crash) 2026-02-08 06:14:25.973032 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/radosgw) 2026-02-08 06:14:25.973048 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rgw) 2026-02-08 06:14:25.973064 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mgr) 2026-02-08 06:14:25.973080 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds) 2026-02-08 06:14:25.973096 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd) 2026-02-08 06:14:25.973112 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd) 2026-02-08 06:14:25.973142 | 
orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-02-08 06:14:25.973158 | orchestrator | ok: [testbed-node-5] => (item=/var/run/ceph) 2026-02-08 06:14:25.973175 | orchestrator | ok: [testbed-node-5] => (item=/var/log/ceph) 2026-02-08 06:14:25.973192 | orchestrator | 2026-02-08 06:14:25.973209 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************ 2026-02-08 06:14:25.973226 | orchestrator | Sunday 08 February 2026 06:14:23 +0000 (0:00:05.538) 0:23:21.722 ******* 2026-02-08 06:14:25.973242 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-5 2026-02-08 06:14:25.973258 | orchestrator | 2026-02-08 06:14:25.973274 | orchestrator | TASK [ceph-config : Create rados gateway instance directories] ***************** 2026-02-08 06:14:25.973289 | orchestrator | Sunday 08 February 2026 06:14:23 +0000 (0:00:00.225) 0:23:21.947 ******* 2026-02-08 06:14:25.973304 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-02-08 06:14:25.973323 | orchestrator | 2026-02-08 06:14:25.973339 | orchestrator | TASK [ceph-config : Generate environment file] ********************************* 2026-02-08 06:14:25.973356 | orchestrator | Sunday 08 February 2026 06:14:24 +0000 (0:00:00.622) 0:23:22.569 ******* 2026-02-08 06:14:25.973373 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-02-08 06:14:25.973391 | orchestrator | 2026-02-08 06:14:25.973407 | orchestrator | TASK [ceph-config : Reset num_osds] ******************************************** 2026-02-08 06:14:25.973423 | orchestrator | Sunday 08 February 2026 06:14:25 +0000 (0:00:00.993) 0:23:23.563 ******* 2026-02-08 06:14:25.973439 | orchestrator | skipping: [testbed-node-5] 2026-02-08 06:14:25.973455 | orchestrator | 
2026-02-08 06:14:25.973472 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] ********************* 2026-02-08 06:14:25.973489 | orchestrator | Sunday 08 February 2026 06:14:25 +0000 (0:00:00.158) 0:23:23.722 ******* 2026-02-08 06:14:25.973505 | orchestrator | skipping: [testbed-node-5] 2026-02-08 06:14:25.973521 | orchestrator | 2026-02-08 06:14:25.973535 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ****************** 2026-02-08 06:14:25.973551 | orchestrator | Sunday 08 February 2026 06:14:25 +0000 (0:00:00.147) 0:23:23.870 ******* 2026-02-08 06:14:25.973568 | orchestrator | skipping: [testbed-node-5] 2026-02-08 06:14:25.973584 | orchestrator | 2026-02-08 06:14:25.973601 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] ********************************* 2026-02-08 06:14:25.973639 | orchestrator | Sunday 08 February 2026 06:14:25 +0000 (0:00:00.136) 0:23:24.006 ******* 2026-02-08 06:14:47.937438 | orchestrator | skipping: [testbed-node-5] 2026-02-08 06:14:47.937527 | orchestrator | 2026-02-08 06:14:47.937536 | orchestrator | TASK [ceph-config : Set_fact _devices] ***************************************** 2026-02-08 06:14:47.937544 | orchestrator | Sunday 08 February 2026 06:14:26 +0000 (0:00:00.144) 0:23:24.150 ******* 2026-02-08 06:14:47.937550 | orchestrator | skipping: [testbed-node-5] 2026-02-08 06:14:47.937556 | orchestrator | 2026-02-08 06:14:47.937562 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2026-02-08 06:14:47.937569 | orchestrator | Sunday 08 February 2026 06:14:26 +0000 (0:00:00.159) 0:23:24.309 ******* 2026-02-08 06:14:47.937575 | orchestrator | skipping: [testbed-node-5] 2026-02-08 06:14:47.937593 | orchestrator | 2026-02-08 06:14:47.937599 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2026-02-08 06:14:47.937605 | 
orchestrator | Sunday 08 February 2026 06:14:26 +0000 (0:00:00.153) 0:23:24.463 ******* 2026-02-08 06:14:47.937611 | orchestrator | skipping: [testbed-node-5] 2026-02-08 06:14:47.937617 | orchestrator | 2026-02-08 06:14:47.937630 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2026-02-08 06:14:47.937636 | orchestrator | Sunday 08 February 2026 06:14:26 +0000 (0:00:00.506) 0:23:24.970 ******* 2026-02-08 06:14:47.937658 | orchestrator | skipping: [testbed-node-5] 2026-02-08 06:14:47.937664 | orchestrator | 2026-02-08 06:14:47.937681 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] *** 2026-02-08 06:14:47.937688 | orchestrator | Sunday 08 February 2026 06:14:27 +0000 (0:00:00.165) 0:23:25.136 ******* 2026-02-08 06:14:47.937693 | orchestrator | skipping: [testbed-node-5] 2026-02-08 06:14:47.937699 | orchestrator | 2026-02-08 06:14:47.937705 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] ********************* 2026-02-08 06:14:47.937711 | orchestrator | Sunday 08 February 2026 06:14:27 +0000 (0:00:00.154) 0:23:25.290 ******* 2026-02-08 06:14:47.937717 | orchestrator | skipping: [testbed-node-5] 2026-02-08 06:14:47.937722 | orchestrator | 2026-02-08 06:14:47.937729 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] ******************************* 2026-02-08 06:14:47.937735 | orchestrator | Sunday 08 February 2026 06:14:27 +0000 (0:00:00.138) 0:23:25.429 ******* 2026-02-08 06:14:47.937741 | orchestrator | skipping: [testbed-node-5] 2026-02-08 06:14:47.937746 | orchestrator | 2026-02-08 06:14:47.937752 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] ************** 2026-02-08 06:14:47.937758 | orchestrator | Sunday 08 February 2026 06:14:27 +0000 (0:00:00.165) 0:23:25.594 ******* 2026-02-08 06:14:47.937764 | orchestrator | changed: [testbed-node-5 -> 
testbed-node-2(192.168.16.12)] 2026-02-08 06:14:47.937770 | orchestrator | 2026-02-08 06:14:47.937776 | orchestrator | TASK [ceph-config : Render rgw configs] **************************************** 2026-02-08 06:14:47.937781 | orchestrator | Sunday 08 February 2026 06:14:31 +0000 (0:00:03.680) 0:23:29.275 ******* 2026-02-08 06:14:47.937787 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-02-08 06:14:47.937794 | orchestrator | 2026-02-08 06:14:47.937800 | orchestrator | TASK [ceph-config : Set config to cluster] ************************************* 2026-02-08 06:14:47.937806 | orchestrator | Sunday 08 February 2026 06:14:31 +0000 (0:00:00.207) 0:23:29.483 ******* 2026-02-08 06:14:47.937814 | orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log'}]) 2026-02-08 06:14:47.937823 | orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.15:8081'}]) 2026-02-08 06:14:47.937830 | orchestrator | 2026-02-08 06:14:47.937836 | orchestrator | TASK [ceph-config : Set rgw configs to file] *********************************** 2026-02-08 06:14:47.937842 | orchestrator | Sunday 08 February 2026 06:14:35 +0000 (0:00:03.849) 0:23:33.332 ******* 2026-02-08 06:14:47.937847 | orchestrator | skipping: [testbed-node-5] 2026-02-08 06:14:47.937853 | orchestrator | 2026-02-08 06:14:47.937859 | orchestrator | TASK [ceph-config : Create ceph 
conf directory] ******************************** 2026-02-08 06:14:47.937865 | orchestrator | Sunday 08 February 2026 06:14:35 +0000 (0:00:00.153) 0:23:33.485 ******* 2026-02-08 06:14:47.937870 | orchestrator | skipping: [testbed-node-5] 2026-02-08 06:14:47.937876 | orchestrator | 2026-02-08 06:14:47.937882 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-02-08 06:14:47.937888 | orchestrator | Sunday 08 February 2026 06:14:35 +0000 (0:00:00.155) 0:23:33.641 ******* 2026-02-08 06:14:47.937894 | orchestrator | skipping: [testbed-node-5] 2026-02-08 06:14:47.937899 | orchestrator | 2026-02-08 06:14:47.937905 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-02-08 06:14:47.937945 | orchestrator | Sunday 08 February 2026 06:14:35 +0000 (0:00:00.146) 0:23:33.788 ******* 2026-02-08 06:14:47.938011 | orchestrator | skipping: [testbed-node-5] 2026-02-08 06:14:47.938069 | orchestrator | 2026-02-08 06:14:47.938077 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-02-08 06:14:47.938083 | orchestrator | Sunday 08 February 2026 06:14:35 +0000 (0:00:00.163) 0:23:33.951 ******* 2026-02-08 06:14:47.938090 | orchestrator | skipping: [testbed-node-5] 2026-02-08 06:14:47.938097 | orchestrator | 2026-02-08 06:14:47.938104 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-02-08 06:14:47.938124 | orchestrator | Sunday 08 February 2026 06:14:36 +0000 (0:00:00.208) 0:23:34.160 ******* 2026-02-08 06:14:47.938131 | orchestrator | ok: [testbed-node-5] 2026-02-08 06:14:47.938138 | orchestrator | 2026-02-08 06:14:47.938144 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-02-08 06:14:47.938150 | orchestrator | Sunday 08 February 2026 06:14:36 +0000 (0:00:00.229) 0:23:34.390 
******* 2026-02-08 06:14:47.938156 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2026-02-08 06:14:47.938162 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2026-02-08 06:14:47.938168 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2026-02-08 06:14:47.938174 | orchestrator | skipping: [testbed-node-5] 2026-02-08 06:14:47.938180 | orchestrator | 2026-02-08 06:14:47.938186 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-02-08 06:14:47.938192 | orchestrator | Sunday 08 February 2026 06:14:37 +0000 (0:00:01.221) 0:23:35.612 ******* 2026-02-08 06:14:47.938197 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2026-02-08 06:14:47.938203 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2026-02-08 06:14:47.938209 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2026-02-08 06:14:47.938215 | orchestrator | skipping: [testbed-node-5] 2026-02-08 06:14:47.938221 | orchestrator | 2026-02-08 06:14:47.938227 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-02-08 06:14:47.938237 | orchestrator | Sunday 08 February 2026 06:14:38 +0000 (0:00:00.478) 0:23:36.090 ******* 2026-02-08 06:14:47.938243 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2026-02-08 06:14:47.938249 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2026-02-08 06:14:47.938255 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2026-02-08 06:14:47.938261 | orchestrator | skipping: [testbed-node-5] 2026-02-08 06:14:47.938267 | orchestrator | 2026-02-08 06:14:47.938273 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-02-08 06:14:47.938279 | orchestrator | Sunday 08 February 2026 06:14:38 +0000 (0:00:00.492) 0:23:36.583 ******* 2026-02-08 06:14:47.938284 | orchestrator | ok: 
[testbed-node-5] 2026-02-08 06:14:47.938290 | orchestrator | 2026-02-08 06:14:47.938296 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-02-08 06:14:47.938302 | orchestrator | Sunday 08 February 2026 06:14:38 +0000 (0:00:00.230) 0:23:36.814 ******* 2026-02-08 06:14:47.938308 | orchestrator | ok: [testbed-node-5] => (item=0) 2026-02-08 06:14:47.938314 | orchestrator | 2026-02-08 06:14:47.938320 | orchestrator | TASK [ceph-config : Generate Ceph file] **************************************** 2026-02-08 06:14:47.938325 | orchestrator | Sunday 08 February 2026 06:14:39 +0000 (0:00:00.535) 0:23:37.350 ******* 2026-02-08 06:14:47.938331 | orchestrator | ok: [testbed-node-5] 2026-02-08 06:14:47.938337 | orchestrator | 2026-02-08 06:14:47.938343 | orchestrator | TASK [ceph-mds : Include create_mds_filesystems.yml] *************************** 2026-02-08 06:14:47.938349 | orchestrator | Sunday 08 February 2026 06:14:40 +0000 (0:00:00.884) 0:23:38.234 ******* 2026-02-08 06:14:47.938355 | orchestrator | skipping: [testbed-node-5] 2026-02-08 06:14:47.938361 | orchestrator | 2026-02-08 06:14:47.938366 | orchestrator | TASK [ceph-mds : Include common.yml] ******************************************* 2026-02-08 06:14:47.938372 | orchestrator | Sunday 08 February 2026 06:14:40 +0000 (0:00:00.151) 0:23:38.386 ******* 2026-02-08 06:14:47.938378 | orchestrator | included: /ansible/roles/ceph-mds/tasks/common.yml for testbed-node-5 2026-02-08 06:14:47.938384 | orchestrator | 2026-02-08 06:14:47.938394 | orchestrator | TASK [ceph-mds : Create bootstrap-mds and mds directories] ********************* 2026-02-08 06:14:47.938400 | orchestrator | Sunday 08 February 2026 06:14:40 +0000 (0:00:00.641) 0:23:39.028 ******* 2026-02-08 06:14:47.938406 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds/) 2026-02-08 06:14:47.938412 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/mds/ceph-testbed-node-5) 
2026-02-08 06:14:47.938418 | orchestrator |
2026-02-08 06:14:47.938424 | orchestrator | TASK [ceph-mds : Get keys from monitors] ***************************************
2026-02-08 06:14:47.938430 | orchestrator | Sunday 08 February 2026 06:14:41 +0000 (0:00:00.823) 0:23:39.851 *******
2026-02-08 06:14:47.938435 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-02-08 06:14:47.938442 | orchestrator | skipping: [testbed-node-5] => (item=None)
2026-02-08 06:14:47.938452 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}]
2026-02-08 06:14:47.938461 | orchestrator |
2026-02-08 06:14:47.938471 | orchestrator | TASK [ceph-mds : Copy ceph key(s) if needed] ***********************************
2026-02-08 06:14:47.938480 | orchestrator | Sunday 08 February 2026 06:14:44 +0000 (0:00:02.965) 0:23:42.817 *******
2026-02-08 06:14:47.938490 | orchestrator | ok: [testbed-node-5] => (item=None)
2026-02-08 06:14:47.938499 | orchestrator | skipping: [testbed-node-5] => (item=None)
2026-02-08 06:14:47.938508 | orchestrator | ok: [testbed-node-5]
2026-02-08 06:14:47.938517 | orchestrator |
2026-02-08 06:14:47.938526 | orchestrator | TASK [ceph-mds : Create mds keyring] *******************************************
2026-02-08 06:14:47.938535 | orchestrator | Sunday 08 February 2026 06:14:46 +0000 (0:00:01.242) 0:23:44.059 *******
2026-02-08 06:14:47.938545 | orchestrator | ok: [testbed-node-5]
2026-02-08 06:14:47.938554 | orchestrator |
2026-02-08 06:14:47.938564 | orchestrator | TASK [ceph-mds : Non_containerized.yml] ****************************************
2026-02-08 06:14:47.938573 | orchestrator | Sunday 08 February 2026 06:14:46 +0000 (0:00:00.495) 0:23:44.555 *******
2026-02-08 06:14:47.938582 | orchestrator | skipping: [testbed-node-5]
2026-02-08 06:14:47.938591 | orchestrator |
2026-02-08 06:14:47.938600 | orchestrator | TASK [ceph-mds : Containerized.yml] ********************************************
2026-02-08 06:14:47.938610 | orchestrator | Sunday 08 February 2026 06:14:46 +0000 (0:00:00.143) 0:23:44.698 *******
2026-02-08 06:14:47.938619 | orchestrator | included: /ansible/roles/ceph-mds/tasks/containerized.yml for testbed-node-5
2026-02-08 06:14:47.938630 | orchestrator |
2026-02-08 06:14:47.938640 | orchestrator | TASK [ceph-mds : Include_tasks systemd.yml] ************************************
2026-02-08 06:14:47.938650 | orchestrator | Sunday 08 February 2026 06:14:47 +0000 (0:00:00.626) 0:23:45.325 *******
2026-02-08 06:14:47.938664 | orchestrator | included: /ansible/roles/ceph-mds/tasks/systemd.yml for testbed-node-5
2026-02-08 06:15:11.779102 | orchestrator |
2026-02-08 06:15:11.779217 | orchestrator | TASK [ceph-mds : Generate systemd unit file] ***********************************
2026-02-08 06:15:11.779234 | orchestrator | Sunday 08 February 2026 06:14:47 +0000 (0:00:00.647) 0:23:45.972 *******
2026-02-08 06:15:11.779247 | orchestrator | ok: [testbed-node-5]
2026-02-08 06:15:11.779260 | orchestrator |
2026-02-08 06:15:11.779272 | orchestrator | TASK [ceph-mds : Generate systemd ceph-mds target file] ************************
2026-02-08 06:15:11.779283 | orchestrator | Sunday 08 February 2026 06:14:48 +0000 (0:00:01.036) 0:23:47.009 *******
2026-02-08 06:15:11.779294 | orchestrator | ok: [testbed-node-5]
2026-02-08 06:15:11.779304 | orchestrator |
2026-02-08 06:15:11.779315 | orchestrator | TASK [ceph-mds : Enable ceph-mds.target] ***************************************
2026-02-08 06:15:11.779326 | orchestrator | Sunday 08 February 2026 06:14:49 +0000 (0:00:00.904) 0:23:47.914 *******
2026-02-08 06:15:11.779337 | orchestrator | ok: [testbed-node-5]
2026-02-08 06:15:11.779348 | orchestrator |
2026-02-08 06:15:11.779359 | orchestrator | TASK [ceph-mds : Systemd start mds container] **********************************
2026-02-08 06:15:11.779369 | orchestrator | Sunday 08 February 2026 06:14:51 +0000 (0:00:01.274) 0:23:49.188 *******
2026-02-08 06:15:11.779380 | orchestrator | ok: [testbed-node-5]
2026-02-08 06:15:11.779391 | orchestrator |
2026-02-08 06:15:11.779424 | orchestrator | TASK [ceph-mds : Wait for mds socket to exist] *********************************
2026-02-08 06:15:11.779449 | orchestrator | Sunday 08 February 2026 06:14:52 +0000 (0:00:01.232) 0:23:50.421 *******
2026-02-08 06:15:11.779460 | orchestrator | ok: [testbed-node-5]
2026-02-08 06:15:11.779471 | orchestrator |
2026-02-08 06:15:11.779481 | orchestrator | TASK [Restart ceph mds] ********************************************************
2026-02-08 06:15:11.779492 | orchestrator | Sunday 08 February 2026 06:14:53 +0000 (0:00:00.780) 0:23:51.201 *******
2026-02-08 06:15:11.779503 | orchestrator | skipping: [testbed-node-5]
2026-02-08 06:15:11.779514 | orchestrator |
2026-02-08 06:15:11.779525 | orchestrator | TASK [Restart active mds] ******************************************************
2026-02-08 06:15:11.779535 | orchestrator | Sunday 08 February 2026 06:14:53 +0000 (0:00:00.453) 0:23:51.655 *******
2026-02-08 06:15:11.779546 | orchestrator | ok: [testbed-node-5]
2026-02-08 06:15:11.779557 | orchestrator |
2026-02-08 06:15:11.779567 | orchestrator | PLAY [Upgrade standbys ceph mdss cluster] **************************************
2026-02-08 06:15:11.779578 | orchestrator |
2026-02-08 06:15:11.779589 | orchestrator | TASK [ceph-facts : Include facts.yml] ******************************************
2026-02-08 06:15:11.779599 | orchestrator | Sunday 08 February 2026 06:15:02 +0000 (0:00:08.840) 0:24:00.496 *******
2026-02-08 06:15:11.779610 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3, testbed-node-4
2026-02-08 06:15:11.779621 | orchestrator |
2026-02-08 06:15:11.779633 | orchestrator | TASK [ceph-facts : Check if it is atomic host] *********************************
2026-02-08 06:15:11.779645 | orchestrator | Sunday 08 February 2026 06:15:02 +0000 (0:00:00.422) 0:24:00.919 *******
2026-02-08 06:15:11.779658 | orchestrator | ok: [testbed-node-3]
2026-02-08 06:15:11.779671 | orchestrator | ok: [testbed-node-4]
2026-02-08 06:15:11.779683 | orchestrator |
2026-02-08 06:15:11.779695 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] *****************************************
2026-02-08 06:15:11.779708 | orchestrator | Sunday 08 February 2026 06:15:03 +0000 (0:00:00.560) 0:24:01.479 *******
2026-02-08 06:15:11.779720 | orchestrator | ok: [testbed-node-3]
2026-02-08 06:15:11.779734 | orchestrator | ok: [testbed-node-4]
2026-02-08 06:15:11.779746 | orchestrator |
2026-02-08 06:15:11.779758 | orchestrator | TASK [ceph-facts : Check if podman binary is present] **************************
2026-02-08 06:15:11.779771 | orchestrator | Sunday 08 February 2026 06:15:03 +0000 (0:00:00.267) 0:24:01.747 *******
2026-02-08 06:15:11.779783 | orchestrator | ok: [testbed-node-3]
2026-02-08 06:15:11.779795 | orchestrator | ok: [testbed-node-4]
2026-02-08 06:15:11.779808 | orchestrator |
2026-02-08 06:15:11.779820 | orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
2026-02-08 06:15:11.779833 | orchestrator | Sunday 08 February 2026 06:15:04 +0000 (0:00:00.906) 0:24:02.653 *******
2026-02-08 06:15:11.779845 | orchestrator | ok: [testbed-node-3]
2026-02-08 06:15:11.779858 | orchestrator | ok: [testbed-node-4]
2026-02-08 06:15:11.779870 | orchestrator |
2026-02-08 06:15:11.779883 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ******************************************
2026-02-08 06:15:11.779896 | orchestrator | Sunday 08 February 2026 06:15:04 +0000 (0:00:00.259) 0:24:02.913 *******
2026-02-08 06:15:11.779909 | orchestrator | ok: [testbed-node-3]
2026-02-08 06:15:11.779942 | orchestrator | ok: [testbed-node-4]
2026-02-08 06:15:11.779955 | orchestrator |
2026-02-08 06:15:11.779967 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] *********************
2026-02-08 06:15:11.779980 | orchestrator | Sunday 08 February 2026 06:15:05 +0000 (0:00:00.253) 0:24:03.167 *******
2026-02-08 06:15:11.779992 | orchestrator | ok: [testbed-node-3]
2026-02-08 06:15:11.780003 | orchestrator | ok: [testbed-node-4]
2026-02-08 06:15:11.780014 | orchestrator |
2026-02-08 06:15:11.780025 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] ***
2026-02-08 06:15:11.780035 | orchestrator | Sunday 08 February 2026 06:15:05 +0000 (0:00:00.326) 0:24:03.493 *******
2026-02-08 06:15:11.780046 | orchestrator | skipping: [testbed-node-3]
2026-02-08 06:15:11.780057 | orchestrator | skipping: [testbed-node-4]
2026-02-08 06:15:11.780068 | orchestrator |
2026-02-08 06:15:11.780079 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ******************
2026-02-08 06:15:11.780097 | orchestrator | Sunday 08 February 2026 06:15:05 +0000 (0:00:00.256) 0:24:03.749 *******
2026-02-08 06:15:11.780108 | orchestrator | ok: [testbed-node-3]
2026-02-08 06:15:11.780119 | orchestrator | ok: [testbed-node-4]
2026-02-08 06:15:11.780129 | orchestrator |
2026-02-08 06:15:11.780140 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************
2026-02-08 06:15:11.780151 | orchestrator | Sunday 08 February 2026 06:15:05 +0000 (0:00:00.233) 0:24:03.983 *******
2026-02-08 06:15:11.780162 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-02-08 06:15:11.780172 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-08 06:15:11.780183 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-08 06:15:11.780194 | orchestrator |
2026-02-08 06:15:11.780205 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ********************************
2026-02-08 06:15:11.780233 | orchestrator | Sunday 08 February 2026 06:15:07 +0000 (0:00:01.068) 0:24:05.051 *******
2026-02-08 06:15:11.780245 | orchestrator | ok: [testbed-node-3]
2026-02-08 06:15:11.780256 | orchestrator | ok: [testbed-node-4]
2026-02-08 06:15:11.780266 | orchestrator |
2026-02-08 06:15:11.780278 | orchestrator | TASK [ceph-facts : Find a running mon container] *******************************
2026-02-08 06:15:11.780288 | orchestrator | Sunday 08 February 2026 06:15:08 +0000 (0:00:01.298) 0:24:06.350 *******
2026-02-08 06:15:11.780300 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-02-08 06:15:11.780311 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-08 06:15:11.780322 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-08 06:15:11.780332 | orchestrator |
2026-02-08 06:15:11.780343 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ********************************
2026-02-08 06:15:11.780354 | orchestrator | Sunday 08 February 2026 06:15:10 +0000 (0:00:01.918) 0:24:08.269 *******
2026-02-08 06:15:11.780365 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2026-02-08 06:15:11.780377 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2026-02-08 06:15:11.780388 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2026-02-08 06:15:11.780404 | orchestrator | skipping: [testbed-node-3]
2026-02-08 06:15:11.780415 | orchestrator |
2026-02-08 06:15:11.780426 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] *********************
2026-02-08 06:15:11.780437 | orchestrator | Sunday 08 February 2026 06:15:10 +0000 (0:00:00.481) 0:24:08.751 *******
2026-02-08 06:15:11.780449 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-02-08 06:15:11.780464 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-02-08 06:15:11.780475 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-02-08 06:15:11.780487 | orchestrator | skipping: [testbed-node-3]
2026-02-08 06:15:11.780498 | orchestrator |
2026-02-08 06:15:11.780509 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] ***********************
2026-02-08 06:15:11.780520 | orchestrator | Sunday 08 February 2026 06:15:11 +0000 (0:00:00.640) 0:24:09.391 *******
2026-02-08 06:15:11.780532 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-02-08 06:15:11.780554 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-02-08 06:15:11.780566 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-02-08 06:15:11.780578 | orchestrator | skipping: [testbed-node-3]
2026-02-08 06:15:11.780589 | orchestrator |
2026-02-08 06:15:11.780600 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] ***************************
2026-02-08 06:15:11.780611 | orchestrator | Sunday 08 February 2026 06:15:11 +0000 (0:00:00.192) 0:24:09.584 *******
2026-02-08 06:15:11.780632 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': 'd0204e5b0336', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-02-08 06:15:08.856474', 'end': '2026-02-08 06:15:08.906275', 'delta': '0:00:00.049801', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['d0204e5b0336'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-02-08 06:15:17.584986 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': 'c9ff2bec9773', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-02-08 06:15:09.452212', 'end': '2026-02-08 06:15:09.496577', 'delta': '0:00:00.044365', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['c9ff2bec9773'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-02-08 06:15:17.585115 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '26deff989c40', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-02-08 06:15:10.006109', 'end': '2026-02-08 06:15:10.066432', 'delta': '0:00:00.060323', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['26deff989c40'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-02-08 06:15:17.585134 | orchestrator |
2026-02-08 06:15:17.585147 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] *******************************
2026-02-08 06:15:17.585160 | orchestrator | Sunday 08 February 2026 06:15:11 +0000 (0:00:00.234) 0:24:09.818 *******
2026-02-08 06:15:17.585171 | orchestrator | ok: [testbed-node-3]
2026-02-08 06:15:17.585207 | orchestrator | ok: [testbed-node-4]
2026-02-08 06:15:17.585219 | orchestrator |
2026-02-08 06:15:17.585230 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] *************
2026-02-08 06:15:17.585242 | orchestrator | Sunday 08 February 2026 06:15:12 +0000 (0:00:00.362) 0:24:10.181 *******
2026-02-08 06:15:17.585253 | orchestrator | skipping: [testbed-node-3]
2026-02-08 06:15:17.585264 | orchestrator |
2026-02-08 06:15:17.585275 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] *********************************
2026-02-08 06:15:17.585286 | orchestrator | Sunday 08 February 2026 06:15:12 +0000 (0:00:00.249) 0:24:10.430 *******
2026-02-08 06:15:17.585296 | orchestrator | ok: [testbed-node-3]
2026-02-08 06:15:17.585307 | orchestrator | ok: [testbed-node-4]
2026-02-08 06:15:17.585318 | orchestrator |
2026-02-08 06:15:17.585329 | orchestrator | TASK [ceph-facts : Get current fsid] *******************************************
2026-02-08 06:15:17.585339 | orchestrator | Sunday 08 February 2026 06:15:12 +0000 (0:00:00.249) 0:24:10.679 *******
2026-02-08 06:15:17.585350 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-02-08 06:15:17.585362 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)]
2026-02-08 06:15:17.585372 | orchestrator |
2026-02-08 06:15:17.585384 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-02-08 06:15:17.585395 | orchestrator | Sunday 08 February 2026 06:15:14 +0000 (0:00:01.497) 0:24:12.177 *******
2026-02-08 06:15:17.585406 | orchestrator | ok: [testbed-node-3]
2026-02-08 06:15:17.585416 | orchestrator | ok: [testbed-node-4]
2026-02-08 06:15:17.585427 | orchestrator |
2026-02-08 06:15:17.585438 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] ****************************
2026-02-08 06:15:17.585448 | orchestrator | Sunday 08 February 2026 06:15:14 +0000 (0:00:00.664) 0:24:12.842 *******
2026-02-08 06:15:17.585459 | orchestrator | skipping: [testbed-node-3]
2026-02-08 06:15:17.585470 | orchestrator |
2026-02-08 06:15:17.585481 | orchestrator | TASK [ceph-facts : Generate cluster fsid] **************************************
2026-02-08 06:15:17.585492 | orchestrator | Sunday 08 February 2026 06:15:14 +0000 (0:00:00.164) 0:24:13.007 *******
2026-02-08 06:15:17.585502 | orchestrator | skipping: [testbed-node-3]
2026-02-08 06:15:17.585513 | orchestrator |
2026-02-08 06:15:17.585524 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-02-08 06:15:17.585535 | orchestrator | Sunday 08 February 2026 06:15:15 +0000 (0:00:00.252) 0:24:13.260 *******
2026-02-08 06:15:17.585546 | orchestrator | skipping: [testbed-node-3]
2026-02-08 06:15:17.585557 | orchestrator | skipping: [testbed-node-4]
2026-02-08 06:15:17.585567 | orchestrator |
2026-02-08 06:15:17.585578 | orchestrator | TASK [ceph-facts : Resolve device link(s)] *************************************
2026-02-08 06:15:17.585589 | orchestrator | Sunday 08 February 2026 06:15:15 +0000 (0:00:00.267) 0:24:13.527 *******
2026-02-08 06:15:17.585600 | orchestrator | skipping: [testbed-node-3]
2026-02-08 06:15:17.585611 | orchestrator | skipping: [testbed-node-4]
2026-02-08 06:15:17.585621 | orchestrator |
2026-02-08 06:15:17.585632 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] **************
2026-02-08 06:15:17.585643 | orchestrator | Sunday 08 February 2026 06:15:15 +0000 (0:00:00.241) 0:24:13.769 *******
2026-02-08 06:15:17.585654 | orchestrator | ok: [testbed-node-3]
2026-02-08 06:15:17.585665 | orchestrator | ok: [testbed-node-4]
2026-02-08 06:15:17.585675 | orchestrator |
2026-02-08 06:15:17.585686 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] ***************************
2026-02-08 06:15:17.585697 | orchestrator | Sunday 08 February 2026 06:15:16 +0000 (0:00:00.294) 0:24:14.064 *******
2026-02-08 06:15:17.585708 | orchestrator | skipping: [testbed-node-3]
2026-02-08 06:15:17.585719 | orchestrator | skipping: [testbed-node-4]
2026-02-08 06:15:17.585730 | orchestrator |
2026-02-08 06:15:17.585759 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] ****
2026-02-08 06:15:17.585771 | orchestrator | Sunday 08 February 2026 06:15:16 +0000 (0:00:00.252) 0:24:14.316 *******
2026-02-08 06:15:17.585782 | orchestrator | ok: [testbed-node-3]
2026-02-08 06:15:17.585793 | orchestrator | ok: [testbed-node-4]
2026-02-08 06:15:17.585811 | orchestrator |
2026-02-08 06:15:17.585823 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] ***********************
2026-02-08 06:15:17.585834 | orchestrator | Sunday 08 February 2026 06:15:16 +0000 (0:00:00.245) 0:24:14.562 *******
2026-02-08 06:15:17.585844 | orchestrator | skipping: [testbed-node-3]
2026-02-08 06:15:17.585855 | orchestrator | skipping: [testbed-node-4]
2026-02-08 06:15:17.585867 | orchestrator |
2026-02-08 06:15:17.585877 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] ***
2026-02-08 06:15:17.585889 | orchestrator | Sunday 08 February 2026 06:15:17 +0000 (0:00:00.567) 0:24:15.129 *******
2026-02-08 06:15:17.585900 | orchestrator | ok: [testbed-node-3]
2026-02-08 06:15:17.585911 | orchestrator | ok: [testbed-node-4]
2026-02-08 06:15:17.585952 | orchestrator |
2026-02-08 06:15:17.585963 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************
2026-02-08 06:15:17.585980 | orchestrator | Sunday 08 February 2026 06:15:17 +0000 (0:00:00.277) 0:24:15.406 *******
2026-02-08 06:15:17.585993 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-08 06:15:17.586007 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--edf9913e--48af--595a--836b--515c584cb757-osd--block--edf9913e--48af--595a--836b--515c584cb757', 'dm-uuid-LVM-L5RzS25dNAwEfaxQtT2dCZejWp1FAHTXCd6gt7yIXasPp3uR45a3LLQBdDhLIQVH'], 'uuids': ['f792a4bc-313c-4001-af18-36958a398c99'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'f64e84f9', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['Cd6gt7-yIXa-sPp3-uR45-a3LL-QBdD-hLIQVH']}})
2026-02-08 06:15:17.586071 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1b3b2ead-9b22-4b4d-a30d-f81b3b57c055', 'scsi-SQEMU_QEMU_HARDDISK_1b3b2ead-9b22-4b4d-a30d-f81b3b57c055'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '1b3b2ead', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})
2026-02-08 06:15:17.586085 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-a30P2b-igV6-wMzf-UnqL-xNhF-mfyy-hPRy0q', 'scsi-0QEMU_QEMU_HARDDISK_f936cccd-0c4c-4cd7-b507-1bacbfb024c1', 'scsi-SQEMU_QEMU_HARDDISK_f936cccd-0c4c-4cd7-b507-1bacbfb024c1'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'f936cccd', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--658e9559--2696--538a--a0a4--811fe95d0be4-osd--block--658e9559--2696--538a--a0a4--811fe95d0be4']}})
2026-02-08 06:15:17.586097 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-08 06:15:17.586126 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-08 06:15:17.681593 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-08-02-32-43-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})
2026-02-08 06:15:17.681703 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-08 06:15:17.681721 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-08 06:15:17.681734 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-cwGJpv-XAPY-9den-pvHR-scKi-cxVH-CDOUqc', 'dm-uuid-CRYPT-LUKS2-b54d8ff93c624789820c72a73c143a76-cwGJpv-XAPY-9den-pvHR-scKi-cxVH-CDOUqc'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})
2026-02-08 06:15:17.681746 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--98a4cb59--dd7a--5ec9--b94d--174a40339046-osd--block--98a4cb59--dd7a--5ec9--b94d--174a40339046', 'dm-uuid-LVM-yHxwYWjXM5rCMy03I7W1d35MN3jq2cawRH7KXMBBm3D5PVB6eMaUeZw4tW6dlYUm'], 'uuids': ['4e6e3c31-d8d4-42a4-8244-dd9f1ae0f37a'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'e630d271', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['RH7KXM-BBm3-D5PV-B6eM-aUeZ-w4tW-6dlYUm']}})
2026-02-08 06:15:17.681759 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-08 06:15:17.681771 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--658e9559--2696--538a--a0a4--811fe95d0be4-osd--block--658e9559--2696--538a--a0a4--811fe95d0be4', 'dm-uuid-LVM-0Xks5ejJzKHIGgn8kHKR683njlf26z09cwGJpvXAPY9denpvHRscKicxVHCDOUqc'], 'uuids': ['b54d8ff9-3c62-4789-820c-72a73c143a76'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'f936cccd', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['cwGJpv-XAPY-9den-pvHR-scKi-cxVH-CDOUqc']}})
2026-02-08 06:15:17.681832 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_33bf36ec-77e2-4563-8915-2d028f665133', 'scsi-SQEMU_QEMU_HARDDISK_33bf36ec-77e2-4563-8915-2d028f665133'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '33bf36ec', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})
2026-02-08 06:15:17.681854 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-loHc10-mddm-V6c9-PmX5-I7TN-7pE7-Bu9UYi', 'scsi-0QEMU_QEMU_HARDDISK_f64e84f9-05a0-4abf-b38a-86e604a2541e', 'scsi-SQEMU_QEMU_HARDDISK_f64e84f9-05a0-4abf-b38a-86e604a2541e'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'f64e84f9', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--edf9913e--48af--595a--836b--515c584cb757-osd--block--edf9913e--48af--595a--836b--515c584cb757']}})
2026-02-08 06:15:17.681869 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-98ZUfd-U8I1-l7ve-H02s-xtrR-VP1N-Q4DCna', 'scsi-0QEMU_QEMU_HARDDISK_2c937877-c8d8-449b-a5f6-0239aca924e2', 'scsi-SQEMU_QEMU_HARDDISK_2c937877-c8d8-449b-a5f6-0239aca924e2'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '2c937877', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--1f36c880--548c--5a66--856f--2c4e799d94fc-osd--block--1f36c880--548c--5a66--856f--2c4e799d94fc']}})
2026-02-08 06:15:17.681881 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-08 06:15:17.681904 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8eb95c7e-79eb-481c-a9c3-b8351915337f', 'scsi-SQEMU_QEMU_HARDDISK_8eb95c7e-79eb-481c-a9c3-b8351915337f'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '8eb95c7e', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8eb95c7e-79eb-481c-a9c3-b8351915337f-part16', 'scsi-SQEMU_QEMU_HARDDISK_8eb95c7e-79eb-481c-a9c3-b8351915337f-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8eb95c7e-79eb-481c-a9c3-b8351915337f-part14', 'scsi-SQEMU_QEMU_HARDDISK_8eb95c7e-79eb-481c-a9c3-b8351915337f-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8eb95c7e-79eb-481c-a9c3-b8351915337f-part15', 'scsi-SQEMU_QEMU_HARDDISK_8eb95c7e-79eb-481c-a9c3-b8351915337f-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8eb95c7e-79eb-481c-a9c3-b8351915337f-part1', 'scsi-SQEMU_QEMU_HARDDISK_8eb95c7e-79eb-481c-a9c3-b8351915337f-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})
2026-02-08 06:15:17.854861 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-08 06:15:17.854991 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-08 06:15:17.855007 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {},
'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-08 06:15:17.855016 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-08 06:15:17.855026 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-Cd6gt7-yIXa-sPp3-uR45-a3LL-QBdD-hLIQVH', 'dm-uuid-CRYPT-LUKS2-f792a4bc313c4001af1836958a398c99-Cd6gt7-yIXa-sPp3-uR45-a3LL-QBdD-hLIQVH'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-02-08 06:15:17.855037 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-08-02-32-46-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-02-08 06:15:17.855065 | orchestrator | skipping: [testbed-node-3] 2026-02-08 06:15:17.855076 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'virtual': 1, 
'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-08 06:15:17.855084 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-4yMbT2-42rf-LMHw-z72r-UJ4I-0Vm8-FBkphH', 'dm-uuid-CRYPT-LUKS2-d4dc753534094cc38b724c68e8894d37-4yMbT2-42rf-LMHw-z72r-UJ4I-0Vm8-FBkphH'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-02-08 06:15:17.855113 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-08 06:15:17.855124 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--1f36c880--548c--5a66--856f--2c4e799d94fc-osd--block--1f36c880--548c--5a66--856f--2c4e799d94fc', 'dm-uuid-LVM-EUBg1b33eC5p7VqB18wXpct56RBS9v7i4yMbT242rfLMHwz72rUJ4I0Vm8FBkphH'], 'uuids': ['d4dc7535-3409-4cc3-8b72-4c68e8894d37'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '2c937877', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 
'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['4yMbT2-42rf-LMHw-z72r-UJ4I-0Vm8-FBkphH']}})  2026-02-08 06:15:17.855133 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-HUiqZb-DVo1-dI2F-bvgQ-kR6m-4ift-nZXZfU', 'scsi-0QEMU_QEMU_HARDDISK_e630d271-3aac-4ce5-a41f-fdcd87f60fea', 'scsi-SQEMU_QEMU_HARDDISK_e630d271-3aac-4ce5-a41f-fdcd87f60fea'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'e630d271', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--98a4cb59--dd7a--5ec9--b94d--174a40339046-osd--block--98a4cb59--dd7a--5ec9--b94d--174a40339046']}})  2026-02-08 06:15:17.855142 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-08 06:15:17.855159 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6152c601-f22c-4ab1-825c-0b7a8c2f9bf8', 'scsi-SQEMU_QEMU_HARDDISK_6152c601-f22c-4ab1-825c-0b7a8c2f9bf8'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '6152c601', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_6152c601-f22c-4ab1-825c-0b7a8c2f9bf8-part16', 'scsi-SQEMU_QEMU_HARDDISK_6152c601-f22c-4ab1-825c-0b7a8c2f9bf8-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6152c601-f22c-4ab1-825c-0b7a8c2f9bf8-part14', 'scsi-SQEMU_QEMU_HARDDISK_6152c601-f22c-4ab1-825c-0b7a8c2f9bf8-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6152c601-f22c-4ab1-825c-0b7a8c2f9bf8-part15', 'scsi-SQEMU_QEMU_HARDDISK_6152c601-f22c-4ab1-825c-0b7a8c2f9bf8-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6152c601-f22c-4ab1-825c-0b7a8c2f9bf8-part1', 'scsi-SQEMU_QEMU_HARDDISK_6152c601-f22c-4ab1-825c-0b7a8c2f9bf8-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}})  2026-02-08 06:15:18.086180 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-08 06:15:18.086258 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-08 06:15:18.086268 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-RH7KXM-BBm3-D5PV-B6eM-aUeZ-w4tW-6dlYUm', 'dm-uuid-CRYPT-LUKS2-4e6e3c31d8d442a48244dd9f1ae0f37a-RH7KXM-BBm3-D5PV-B6eM-aUeZ-w4tW-6dlYUm'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-02-08 06:15:18.086276 | orchestrator | skipping: [testbed-node-4] 2026-02-08 06:15:18.086284 | orchestrator | 2026-02-08 06:15:18.086291 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-02-08 06:15:18.086297 | orchestrator | Sunday 08 February 2026 06:15:17 +0000 (0:00:00.487) 0:24:15.894 ******* 2026-02-08 06:15:18.086304 | orchestrator | skipping: [testbed-node-3] => 
(item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-08 06:15:18.086334 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--edf9913e--48af--595a--836b--515c584cb757-osd--block--edf9913e--48af--595a--836b--515c584cb757', 'dm-uuid-LVM-L5RzS25dNAwEfaxQtT2dCZejWp1FAHTXCd6gt7yIXasPp3uR45a3LLQBdDhLIQVH'], 'uuids': ['f792a4bc-313c-4001-af18-36958a398c99'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'f64e84f9', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['Cd6gt7-yIXa-sPp3-uR45-a3LL-QBdD-hLIQVH']}}, 'ansible_loop_var': 'item'})  2026-02-08 06:15:18.086342 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1b3b2ead-9b22-4b4d-a30d-f81b3b57c055', 'scsi-SQEMU_QEMU_HARDDISK_1b3b2ead-9b22-4b4d-a30d-f81b3b57c055'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 
'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '1b3b2ead', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-08 06:15:18.086372 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-a30P2b-igV6-wMzf-UnqL-xNhF-mfyy-hPRy0q', 'scsi-0QEMU_QEMU_HARDDISK_f936cccd-0c4c-4cd7-b507-1bacbfb024c1', 'scsi-SQEMU_QEMU_HARDDISK_f936cccd-0c4c-4cd7-b507-1bacbfb024c1'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'f936cccd', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--658e9559--2696--538a--a0a4--811fe95d0be4-osd--block--658e9559--2696--538a--a0a4--811fe95d0be4']}}, 'ansible_loop_var': 'item'})  2026-02-08 06:15:18.086381 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-08 06:15:18.086388 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-08 06:15:18.086399 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-08-02-32-43-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 
'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-08 06:15:18.086406 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-08 06:15:18.086418 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-cwGJpv-XAPY-9den-pvHR-scKi-cxVH-CDOUqc', 'dm-uuid-CRYPT-LUKS2-b54d8ff93c624789820c72a73c143a76-cwGJpv-XAPY-9den-pvHR-scKi-cxVH-CDOUqc'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-08 06:15:18.155152 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 
'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-08 06:15:18.155269 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-08 06:15:18.155296 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--658e9559--2696--538a--a0a4--811fe95d0be4-osd--block--658e9559--2696--538a--a0a4--811fe95d0be4', 'dm-uuid-LVM-0Xks5ejJzKHIGgn8kHKR683njlf26z09cwGJpvXAPY9denpvHRscKicxVHCDOUqc'], 'uuids': ['b54d8ff9-3c62-4789-820c-72a73c143a76'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'f936cccd', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['cwGJpv-XAPY-9den-pvHR-scKi-cxVH-CDOUqc']}}, 'ansible_loop_var': 'item'})  2026-02-08 06:15:18.155360 | orchestrator | skipping: [testbed-node-4] => (item={'changed': 
False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--98a4cb59--dd7a--5ec9--b94d--174a40339046-osd--block--98a4cb59--dd7a--5ec9--b94d--174a40339046', 'dm-uuid-LVM-yHxwYWjXM5rCMy03I7W1d35MN3jq2cawRH7KXMBBm3D5PVB6eMaUeZw4tW6dlYUm'], 'uuids': ['4e6e3c31-d8d4-42a4-8244-dd9f1ae0f37a'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'e630d271', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['RH7KXM-BBm3-D5PV-B6eM-aUeZ-w4tW-6dlYUm']}}, 'ansible_loop_var': 'item'})  2026-02-08 06:15:18.155390 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-loHc10-mddm-V6c9-PmX5-I7TN-7pE7-Bu9UYi', 'scsi-0QEMU_QEMU_HARDDISK_f64e84f9-05a0-4abf-b38a-86e604a2541e', 'scsi-SQEMU_QEMU_HARDDISK_f64e84f9-05a0-4abf-b38a-86e604a2541e'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'f64e84f9', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--edf9913e--48af--595a--836b--515c584cb757-osd--block--edf9913e--48af--595a--836b--515c584cb757']}}, 'ansible_loop_var': 'item'})  2026-02-08 06:15:18.155425 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_33bf36ec-77e2-4563-8915-2d028f665133', 'scsi-SQEMU_QEMU_HARDDISK_33bf36ec-77e2-4563-8915-2d028f665133'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '33bf36ec', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-08 06:15:18.155439 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-08 06:15:18.155458 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': 
['lvm-pv-uuid-98ZUfd-U8I1-l7ve-H02s-xtrR-VP1N-Q4DCna', 'scsi-0QEMU_QEMU_HARDDISK_2c937877-c8d8-449b-a5f6-0239aca924e2', 'scsi-SQEMU_QEMU_HARDDISK_2c937877-c8d8-449b-a5f6-0239aca924e2'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '2c937877', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--1f36c880--548c--5a66--856f--2c4e799d94fc-osd--block--1f36c880--548c--5a66--856f--2c4e799d94fc']}}, 'ansible_loop_var': 'item'})  2026-02-08 06:15:18.155485 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8eb95c7e-79eb-481c-a9c3-b8351915337f', 'scsi-SQEMU_QEMU_HARDDISK_8eb95c7e-79eb-481c-a9c3-b8351915337f'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '8eb95c7e', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8eb95c7e-79eb-481c-a9c3-b8351915337f-part16', 'scsi-SQEMU_QEMU_HARDDISK_8eb95c7e-79eb-481c-a9c3-b8351915337f-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8eb95c7e-79eb-481c-a9c3-b8351915337f-part14', 'scsi-SQEMU_QEMU_HARDDISK_8eb95c7e-79eb-481c-a9c3-b8351915337f-part14'], 'uuids': [], 'labels': [], 'masters': []}, 
'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8eb95c7e-79eb-481c-a9c3-b8351915337f-part15', 'scsi-SQEMU_QEMU_HARDDISK_8eb95c7e-79eb-481c-a9c3-b8351915337f-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8eb95c7e-79eb-481c-a9c3-b8351915337f-part1', 'scsi-SQEMU_QEMU_HARDDISK_8eb95c7e-79eb-481c-a9c3-b8351915337f-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-08 06:15:18.270251 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-08 06:15:18.270346 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-08 06:15:18.270384 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-08 06:15:18.270407 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-08 06:15:18.270416 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-08-02-32-46-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-08 06:15:18.270448 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-Cd6gt7-yIXa-sPp3-uR45-a3LL-QBdD-hLIQVH', 'dm-uuid-CRYPT-LUKS2-f792a4bc313c4001af1836958a398c99-Cd6gt7-yIXa-sPp3-uR45-a3LL-QBdD-hLIQVH'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-08 06:15:18.270482 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-08 06:15:18.270506 | orchestrator | skipping: [testbed-node-3]
2026-02-08 06:15:18.270516 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-4yMbT2-42rf-LMHw-z72r-UJ4I-0Vm8-FBkphH', 'dm-uuid-CRYPT-LUKS2-d4dc753534094cc38b724c68e8894d37-4yMbT2-42rf-LMHw-z72r-UJ4I-0Vm8-FBkphH'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-08 06:15:18.270525 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-08 06:15:18.270534 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--1f36c880--548c--5a66--856f--2c4e799d94fc-osd--block--1f36c880--548c--5a66--856f--2c4e799d94fc', 'dm-uuid-LVM-EUBg1b33eC5p7VqB18wXpct56RBS9v7i4yMbT242rfLMHwz72rUJ4I0Vm8FBkphH'], 'uuids': ['d4dc7535-3409-4cc3-8b72-4c68e8894d37'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '2c937877', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['4yMbT2-42rf-LMHw-z72r-UJ4I-0Vm8-FBkphH']}}, 'ansible_loop_var': 'item'})
2026-02-08 06:15:18.270548 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-HUiqZb-DVo1-dI2F-bvgQ-kR6m-4ift-nZXZfU', 'scsi-0QEMU_QEMU_HARDDISK_e630d271-3aac-4ce5-a41f-fdcd87f60fea', 'scsi-SQEMU_QEMU_HARDDISK_e630d271-3aac-4ce5-a41f-fdcd87f60fea'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'e630d271', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--98a4cb59--dd7a--5ec9--b94d--174a40339046-osd--block--98a4cb59--dd7a--5ec9--b94d--174a40339046']}}, 'ansible_loop_var': 'item'})
2026-02-08 06:15:18.270566 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-08 06:15:24.450476 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6152c601-f22c-4ab1-825c-0b7a8c2f9bf8', 'scsi-SQEMU_QEMU_HARDDISK_6152c601-f22c-4ab1-825c-0b7a8c2f9bf8'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '6152c601', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6152c601-f22c-4ab1-825c-0b7a8c2f9bf8-part16', 'scsi-SQEMU_QEMU_HARDDISK_6152c601-f22c-4ab1-825c-0b7a8c2f9bf8-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6152c601-f22c-4ab1-825c-0b7a8c2f9bf8-part14', 'scsi-SQEMU_QEMU_HARDDISK_6152c601-f22c-4ab1-825c-0b7a8c2f9bf8-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6152c601-f22c-4ab1-825c-0b7a8c2f9bf8-part15', 'scsi-SQEMU_QEMU_HARDDISK_6152c601-f22c-4ab1-825c-0b7a8c2f9bf8-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6152c601-f22c-4ab1-825c-0b7a8c2f9bf8-part1', 'scsi-SQEMU_QEMU_HARDDISK_6152c601-f22c-4ab1-825c-0b7a8c2f9bf8-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-08 06:15:24.450620 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-08 06:15:24.450655 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-08 06:15:24.450686 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-RH7KXM-BBm3-D5PV-B6eM-aUeZ-w4tW-6dlYUm', 'dm-uuid-CRYPT-LUKS2-4e6e3c31d8d442a48244dd9f1ae0f37a-RH7KXM-BBm3-D5PV-B6eM-aUeZ-w4tW-6dlYUm'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-08 06:15:24.450708 | orchestrator | skipping: [testbed-node-4]
2026-02-08 06:15:24.450723 | orchestrator |
2026-02-08 06:15:24.450735 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ******************************
2026-02-08 06:15:24.450747 | orchestrator | Sunday 08 February 2026 06:15:18 +0000 (0:00:00.565) 0:24:16.459 *******
2026-02-08 06:15:24.450759 | orchestrator | ok: [testbed-node-3]
2026-02-08 06:15:24.450771 | orchestrator | ok: [testbed-node-4]
2026-02-08 06:15:24.450781 | orchestrator |
2026-02-08 06:15:24.450793 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] ***************
2026-02-08 06:15:24.450804 | orchestrator | Sunday 08 February 2026 06:15:19 +0000 (0:00:00.661) 0:24:17.121 *******
2026-02-08 06:15:24.450815 | orchestrator | ok: [testbed-node-3]
2026-02-08 06:15:24.450826 | orchestrator | ok: [testbed-node-4]
2026-02-08 06:15:24.450837 | orchestrator |
2026-02-08 06:15:24.450848 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-02-08 06:15:24.450859 | orchestrator | Sunday 08 February 2026 06:15:19 +0000 (0:00:00.233) 0:24:17.355 *******
2026-02-08 06:15:24.450869 | orchestrator | ok: [testbed-node-3]
2026-02-08 06:15:24.450880 | orchestrator | ok: [testbed-node-4]
2026-02-08 06:15:24.450891 | orchestrator |
2026-02-08 06:15:24.450902 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-02-08 06:15:24.450912 | orchestrator | Sunday 08 February 2026 06:15:20 +0000 (0:00:00.906) 0:24:18.261 *******
2026-02-08 06:15:24.450961 | orchestrator | skipping: [testbed-node-3]
2026-02-08 06:15:24.450975 | orchestrator | skipping: [testbed-node-4]
2026-02-08 06:15:24.450986 | orchestrator |
2026-02-08 06:15:24.450997 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-02-08 06:15:24.451008 | orchestrator | Sunday 08 February 2026 06:15:20 +0000 (0:00:00.239) 0:24:18.500 *******
2026-02-08 06:15:24.451019 | orchestrator | skipping: [testbed-node-3]
2026-02-08 06:15:24.451030 | orchestrator | skipping: [testbed-node-4]
2026-02-08 06:15:24.451041 | orchestrator |
2026-02-08 06:15:24.451052 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-02-08 06:15:24.451062 | orchestrator | Sunday 08 February 2026 06:15:20 +0000 (0:00:00.370) 0:24:18.871 *******
2026-02-08 06:15:24.451073 | orchestrator | skipping: [testbed-node-3]
2026-02-08 06:15:24.451084 | orchestrator | skipping: [testbed-node-4]
2026-02-08 06:15:24.451095 | orchestrator |
2026-02-08 06:15:24.451106 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] *************************
2026-02-08 06:15:24.451117 | orchestrator | Sunday 08 February 2026 06:15:21 +0000 (0:00:00.288) 0:24:19.159 *******
2026-02-08 06:15:24.451128 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0)
2026-02-08 06:15:24.451139 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0)
2026-02-08 06:15:24.451150 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1)
2026-02-08 06:15:24.451161 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1)
2026-02-08 06:15:24.451171 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2)
2026-02-08 06:15:24.451182 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2)
2026-02-08 06:15:24.451193 | orchestrator |
2026-02-08 06:15:24.451204 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] *************************
2026-02-08 06:15:24.451214 | orchestrator | Sunday 08 February 2026 06:15:22 +0000 (0:00:01.115) 0:24:20.275 *******
2026-02-08 06:15:24.451226 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2026-02-08 06:15:24.451237 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2026-02-08 06:15:24.451248 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2026-02-08 06:15:24.451259 | orchestrator | skipping: [testbed-node-3]
2026-02-08 06:15:24.451269 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2026-02-08 06:15:24.451289 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2026-02-08 06:15:24.451300 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2026-02-08 06:15:24.451311 | orchestrator | skipping: [testbed-node-4]
2026-02-08 06:15:24.451321 | orchestrator |
2026-02-08 06:15:24.451332 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] ***********************
2026-02-08 06:15:24.451343 | orchestrator | Sunday 08 February 2026 06:15:22 +0000 (0:00:00.283) 0:24:20.558 *******
2026-02-08 06:15:24.451355 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4
2026-02-08 06:15:24.451366 | orchestrator |
2026-02-08 06:15:24.451384 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-02-08 06:15:24.451396 | orchestrator | Sunday 08 February 2026 06:15:23 +0000 (0:00:00.748) 0:24:21.307 *******
2026-02-08 06:15:24.451407 | orchestrator | skipping: [testbed-node-3]
2026-02-08 06:15:24.451418 | orchestrator | skipping: [testbed-node-4]
2026-02-08 06:15:24.451429 | orchestrator |
2026-02-08 06:15:24.451440 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-02-08 06:15:24.451451 | orchestrator | Sunday 08 February 2026 06:15:23 +0000 (0:00:00.316) 0:24:21.623 *******
2026-02-08 06:15:24.451462 | orchestrator | skipping: [testbed-node-3]
2026-02-08 06:15:24.451472 | orchestrator | skipping: [testbed-node-4]
2026-02-08 06:15:24.451483 | orchestrator |
2026-02-08 06:15:24.451494 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-02-08 06:15:24.451505 | orchestrator | Sunday 08 February 2026 06:15:23 +0000 (0:00:00.250) 0:24:21.874 *******
2026-02-08 06:15:24.451516 | orchestrator | skipping: [testbed-node-3]
2026-02-08 06:15:24.451527 | orchestrator | skipping: [testbed-node-4]
2026-02-08 06:15:24.451537 | orchestrator |
2026-02-08 06:15:24.451550 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-02-08 06:15:24.451569 | orchestrator | Sunday 08 February 2026 06:15:24 +0000 (0:00:00.267) 0:24:22.142 *******
2026-02-08 06:15:24.451586 | orchestrator | ok: [testbed-node-3]
2026-02-08 06:15:24.451606 | orchestrator | ok: [testbed-node-4]
2026-02-08 06:15:24.451622 | orchestrator |
2026-02-08 06:15:24.451650 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-02-08 06:15:40.332889 | orchestrator | Sunday 08 February 2026 06:15:24 +0000 (0:00:00.350) 0:24:22.493 *******
2026-02-08 06:15:40.333082 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-02-08 06:15:40.333100 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-02-08 06:15:40.333111 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-02-08 06:15:40.333121 | orchestrator | skipping: [testbed-node-3]
2026-02-08 06:15:40.333132 | orchestrator |
2026-02-08 06:15:40.333143 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-02-08 06:15:40.333154 | orchestrator | Sunday 08 February 2026 06:15:24 +0000 (0:00:00.390) 0:24:22.883 *******
2026-02-08 06:15:40.333164 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-02-08 06:15:40.333174 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-02-08 06:15:40.333184 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-02-08 06:15:40.333194 | orchestrator | skipping: [testbed-node-3]
2026-02-08 06:15:40.333204 | orchestrator |
2026-02-08 06:15:40.333214 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-02-08 06:15:40.333224 | orchestrator | Sunday 08 February 2026 06:15:25 +0000 (0:00:00.832) 0:24:23.715 *******
2026-02-08 06:15:40.333234 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-02-08 06:15:40.333244 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-02-08 06:15:40.333253 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-02-08 06:15:40.333263 | orchestrator | skipping: [testbed-node-3]
2026-02-08 06:15:40.333272 | orchestrator |
2026-02-08 06:15:40.333282 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-02-08 06:15:40.333324 | orchestrator | Sunday 08 February 2026 06:15:26 +0000 (0:00:00.766) 0:24:24.481 *******
2026-02-08 06:15:40.333335 | orchestrator | ok: [testbed-node-3]
2026-02-08 06:15:40.333346 | orchestrator | ok: [testbed-node-4]
2026-02-08 06:15:40.333355 | orchestrator |
2026-02-08 06:15:40.333365 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-02-08 06:15:40.333375 | orchestrator | Sunday 08 February 2026 06:15:27 +0000 (0:00:00.623) 0:24:25.105 *******
2026-02-08 06:15:40.333385 | orchestrator | ok: [testbed-node-3] => (item=0)
2026-02-08 06:15:40.333397 | orchestrator | ok: [testbed-node-4] => (item=0)
2026-02-08 06:15:40.333408 | orchestrator |
2026-02-08 06:15:40.333419 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] **************************************
2026-02-08 06:15:40.333430 | orchestrator | Sunday 08 February 2026 06:15:27 +0000 (0:00:00.487) 0:24:25.593 *******
2026-02-08 06:15:40.333441 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-02-08 06:15:40.333453 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-08 06:15:40.333464 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-08 06:15:40.333475 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3)
2026-02-08 06:15:40.333486 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-02-08 06:15:40.333498 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-02-08 06:15:40.333510 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-02-08 06:15:40.333521 | orchestrator |
2026-02-08 06:15:40.333533 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ********************************
2026-02-08 06:15:40.333545 | orchestrator | Sunday 08 February 2026 06:15:28 +0000 (0:00:00.894) 0:24:26.487 *******
2026-02-08 06:15:40.333555 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-02-08 06:15:40.333567 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-08 06:15:40.333578 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-08 06:15:40.333589 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3)
2026-02-08 06:15:40.333600 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-02-08 06:15:40.333611 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-02-08 06:15:40.333640 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-02-08 06:15:40.333652 | orchestrator |
2026-02-08 06:15:40.333664 | orchestrator | TASK [Prevent restarts from the packaging] *************************************
2026-02-08 06:15:40.333675 | orchestrator | Sunday 08 February 2026 06:15:30 +0000 (0:00:01.846) 0:24:28.334 *******
2026-02-08 06:15:40.333687 | orchestrator | skipping: [testbed-node-3]
2026-02-08 06:15:40.333698 | orchestrator | skipping: [testbed-node-4]
2026-02-08 06:15:40.333709 | orchestrator |
2026-02-08 06:15:40.333720 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-02-08 06:15:40.333731 | orchestrator | Sunday 08 February 2026 06:15:30 +0000 (0:00:00.246) 0:24:28.581 *******
2026-02-08 06:15:40.333743 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4
2026-02-08 06:15:40.333753 | orchestrator |
2026-02-08 06:15:40.333762 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-02-08 06:15:40.333772 | orchestrator | Sunday 08 February 2026 06:15:30 +0000 (0:00:00.386) 0:24:28.968 *******
2026-02-08 06:15:40.333782 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4
2026-02-08 06:15:40.333792 | orchestrator |
2026-02-08 06:15:40.333802 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-02-08 06:15:40.333821 | orchestrator | Sunday 08 February 2026 06:15:31 +0000 (0:00:00.732) 0:24:29.700 *******
2026-02-08 06:15:40.333849 | orchestrator | skipping: [testbed-node-3]
2026-02-08 06:15:40.333860 | orchestrator | skipping: [testbed-node-4]
2026-02-08 06:15:40.333870 | orchestrator |
2026-02-08 06:15:40.333880 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-02-08 06:15:40.333890 | orchestrator | Sunday 08 February 2026 06:15:31 +0000 (0:00:00.235) 0:24:29.936 *******
2026-02-08 06:15:40.333900 | orchestrator | ok: [testbed-node-3]
2026-02-08 06:15:40.333909 | orchestrator | ok: [testbed-node-4]
2026-02-08 06:15:40.333919 | orchestrator |
2026-02-08 06:15:40.333949 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-02-08 06:15:40.333959 | orchestrator | Sunday 08 February 2026 06:15:32 +0000 (0:00:00.648) 0:24:30.584 *******
2026-02-08 06:15:40.333969 | orchestrator | ok: [testbed-node-3]
2026-02-08 06:15:40.333978 | orchestrator | ok: [testbed-node-4]
2026-02-08 06:15:40.333988 | orchestrator |
2026-02-08 06:15:40.333998 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-02-08 06:15:40.334008 | orchestrator | Sunday 08 February 2026 06:15:33 +0000 (0:00:00.655) 0:24:31.240 *******
2026-02-08 06:15:40.334090 | orchestrator | ok: [testbed-node-3]
2026-02-08 06:15:40.334101 | orchestrator | ok: [testbed-node-4]
2026-02-08 06:15:40.334111 | orchestrator |
2026-02-08 06:15:40.334121 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-02-08 06:15:40.334131 | orchestrator | Sunday 08 February 2026 06:15:33 +0000 (0:00:00.609) 0:24:31.849 *******
2026-02-08 06:15:40.334141 | orchestrator | skipping: [testbed-node-3]
2026-02-08 06:15:40.334150 | orchestrator | skipping: [testbed-node-4]
2026-02-08 06:15:40.334160 | orchestrator |
2026-02-08 06:15:40.334170 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-02-08 06:15:40.334179 | orchestrator | Sunday 08 February 2026 06:15:34 +0000 (0:00:00.581) 0:24:32.430 *******
2026-02-08 06:15:40.334189 | orchestrator | skipping: [testbed-node-3]
2026-02-08 06:15:40.334199 | orchestrator | skipping: [testbed-node-4]
2026-02-08 06:15:40.334208 | orchestrator |
2026-02-08 06:15:40.334218 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-02-08 06:15:40.334228 | orchestrator | Sunday 08 February 2026 06:15:34 +0000 (0:00:00.262) 0:24:32.693 *******
2026-02-08 06:15:40.334237 | orchestrator | skipping: [testbed-node-3]
2026-02-08 06:15:40.334247 | orchestrator | skipping: [testbed-node-4]
2026-02-08 06:15:40.334257 | orchestrator |
2026-02-08 06:15:40.334266 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-02-08 06:15:40.334276 | orchestrator | Sunday 08 February 2026 06:15:34 +0000 (0:00:00.256) 0:24:32.949 *******
2026-02-08 06:15:40.334285 | orchestrator | ok: [testbed-node-3]
2026-02-08 06:15:40.334295 | orchestrator | ok: [testbed-node-4]
2026-02-08 06:15:40.334305 | orchestrator |
2026-02-08 06:15:40.334314 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-02-08 06:15:40.334324 | orchestrator | Sunday 08 February 2026 06:15:36 +0000 (0:00:01.691) 0:24:34.641 *******
2026-02-08 06:15:40.334333 | orchestrator | ok: [testbed-node-3]
2026-02-08 06:15:40.334343 | orchestrator | ok: [testbed-node-4]
2026-02-08 06:15:40.334352 | orchestrator |
2026-02-08 06:15:40.334362 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-02-08 06:15:40.334371 | orchestrator | Sunday 08 February 2026 06:15:37 +0000 (0:00:00.656) 0:24:35.297 *******
2026-02-08 06:15:40.334381 | orchestrator | skipping: [testbed-node-3]
2026-02-08 06:15:40.334391 | orchestrator | skipping: [testbed-node-4]
2026-02-08 06:15:40.334401 | orchestrator |
2026-02-08 06:15:40.334410 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-02-08 06:15:40.334420 | orchestrator | Sunday 08 February 2026 06:15:37 +0000 (0:00:00.243) 0:24:35.541 *******
2026-02-08 06:15:40.334429 | orchestrator | skipping: [testbed-node-3]
2026-02-08 06:15:40.334439 | orchestrator | skipping: [testbed-node-4]
2026-02-08 06:15:40.334448 | orchestrator |
2026-02-08 06:15:40.334458 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-02-08 06:15:40.334476 | orchestrator | Sunday 08 February 2026 06:15:37 +0000 (0:00:00.243) 0:24:35.784 *******
2026-02-08 06:15:40.334485 | orchestrator | ok: [testbed-node-3]
2026-02-08 06:15:40.334495 | orchestrator | ok: [testbed-node-4]
2026-02-08 06:15:40.334505 | orchestrator |
2026-02-08 06:15:40.334514 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-02-08 06:15:40.334524 | orchestrator | Sunday 08 February 2026 06:15:38 +0000 (0:00:00.622) 0:24:36.406 *******
2026-02-08 06:15:40.334533 | orchestrator | ok: [testbed-node-3]
2026-02-08 06:15:40.334543 | orchestrator | ok: [testbed-node-4]
2026-02-08 06:15:40.334553 | orchestrator |
2026-02-08 06:15:40.334562 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-02-08 06:15:40.334572 | orchestrator | Sunday 08 February 2026 06:15:38 +0000 (0:00:00.280) 0:24:36.686 *******
2026-02-08 06:15:40.334581 | orchestrator | ok: [testbed-node-3]
2026-02-08 06:15:40.334597 | orchestrator | ok: [testbed-node-4]
2026-02-08 06:15:40.334607 | orchestrator |
2026-02-08 06:15:40.334616 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-02-08 06:15:40.334626 | orchestrator | Sunday 08 February 2026 06:15:38 +0000 (0:00:00.302) 0:24:36.989 *******
2026-02-08 06:15:40.334636 | orchestrator | skipping: [testbed-node-3]
2026-02-08 06:15:40.334645 | orchestrator | skipping: [testbed-node-4]
2026-02-08 06:15:40.334655 | orchestrator |
2026-02-08 06:15:40.334665 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-02-08 06:15:40.334675 | orchestrator | Sunday 08 February 2026 06:15:39 +0000 (0:00:00.264) 0:24:37.254 *******
2026-02-08 06:15:40.334684 | orchestrator | skipping: [testbed-node-3]
2026-02-08 06:15:40.334694 | orchestrator | skipping: [testbed-node-4]
2026-02-08 06:15:40.334704 | orchestrator |
2026-02-08 06:15:40.334713 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-02-08 06:15:40.334723 | orchestrator | Sunday 08 February 2026 06:15:39 +0000 (0:00:00.274) 0:24:37.528 *******
2026-02-08 06:15:40.334732 | orchestrator | skipping: [testbed-node-3]
2026-02-08 06:15:40.334742 | orchestrator | skipping: [testbed-node-4]
2026-02-08 06:15:40.334751 | orchestrator |
2026-02-08 06:15:40.334761 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-02-08 06:15:40.334771 | orchestrator | Sunday 08 February 2026 06:15:39 +0000 (0:00:00.235) 0:24:37.763 *******
2026-02-08 06:15:40.334780 | orchestrator | ok: [testbed-node-3]
2026-02-08 06:15:40.334790 | orchestrator | ok: [testbed-node-4]
2026-02-08 06:15:40.334799 | orchestrator |
2026-02-08 06:15:40.334809 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-02-08 06:15:40.334826 | orchestrator | Sunday 08 February 2026 06:15:40 +0000 (0:00:00.601) 0:24:38.364 *******
2026-02-08 06:15:55.207615 | orchestrator | ok: [testbed-node-3]
2026-02-08 06:15:55.207773 | orchestrator | ok: [testbed-node-4]
2026-02-08 06:15:55.207796 | orchestrator |
2026-02-08 06:15:55.207815 | orchestrator | TASK [ceph-common : Include configure_repository.yml] **************************
2026-02-08 06:15:55.207831 | orchestrator | Sunday 08 February 2026 06:15:40 +0000 (0:00:00.419) 0:24:38.784 *******
2026-02-08 06:15:55.207847 | orchestrator | skipping: [testbed-node-3]
2026-02-08 06:15:55.207865 | orchestrator | skipping: [testbed-node-4]
2026-02-08 06:15:55.207881 | orchestrator |
2026-02-08 06:15:55.207899 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] **************
2026-02-08 06:15:55.207915 | orchestrator | Sunday 08 February 2026 06:15:40 +0000 (0:00:00.242) 0:24:39.027 *******
2026-02-08 06:15:55.208009 | orchestrator | skipping: [testbed-node-3]
2026-02-08 06:15:55.208027 | orchestrator | skipping: [testbed-node-4]
2026-02-08 06:15:55.208064 | orchestrator |
2026-02-08 06:15:55.208081 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] ****************
2026-02-08 06:15:55.208115 | orchestrator | Sunday 08 February 2026 06:15:41 +0000 (0:00:00.269) 0:24:39.297 *******
2026-02-08 06:15:55.208133 | orchestrator | skipping: [testbed-node-3]
2026-02-08 06:15:55.208151 | orchestrator | skipping: [testbed-node-4]
2026-02-08 06:15:55.208202 | orchestrator |
2026-02-08 06:15:55.208219 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ********************
2026-02-08 06:15:55.208238 | orchestrator | Sunday 08 February 2026 06:15:41 +0000 (0:00:00.236) 0:24:39.534 *******
2026-02-08 06:15:55.208256 | orchestrator | skipping: [testbed-node-3]
2026-02-08 06:15:55.208275 | orchestrator | skipping: [testbed-node-4]
2026-02-08 06:15:55.208292 | orchestrator |
2026-02-08 06:15:55.208308 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] ***************
2026-02-08 06:15:55.208325 | orchestrator | Sunday 08 February 2026 06:15:41 +0000 (0:00:00.237) 0:24:39.771 *******
2026-02-08 06:15:55.208341 | orchestrator | skipping: [testbed-node-3]
2026-02-08 06:15:55.208356 | orchestrator | skipping: [testbed-node-4]
2026-02-08 06:15:55.208372 | orchestrator |
2026-02-08 06:15:55.208388 | orchestrator | TASK [ceph-common : Get ceph version] ******************************************
2026-02-08 06:15:55.208402 | orchestrator | Sunday 08 February 2026 06:15:42 +0000 (0:00:00.537) 0:24:40.308 *******
2026-02-08 06:15:55.208417 | orchestrator | skipping: [testbed-node-3]
2026-02-08 06:15:55.208432 | orchestrator | skipping: [testbed-node-4]
2026-02-08 06:15:55.208448 | orchestrator |
2026-02-08 06:15:55.208466 | orchestrator | TASK [ceph-common : Set_fact ceph_version] *************************************
2026-02-08 06:15:55.208483 | orchestrator | Sunday 08 February 2026 06:15:42 +0000 (0:00:00.249) 0:24:40.558 *******
2026-02-08 06:15:55.208499 | orchestrator | skipping: [testbed-node-3]
2026-02-08 06:15:55.208510 | orchestrator | skipping: [testbed-node-4]
2026-02-08 06:15:55.208519 | orchestrator |
2026-02-08 06:15:55.208530 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] ***
2026-02-08 06:15:55.208541 | orchestrator | Sunday 08 February 2026 06:15:42 +0000 (0:00:00.231) 0:24:40.789 *******
2026-02-08 06:15:55.208550 | orchestrator | skipping: [testbed-node-3]
2026-02-08 06:15:55.208560 | orchestrator | skipping: [testbed-node-4]
2026-02-08 06:15:55.208570 | orchestrator |
2026-02-08 06:15:55.208579 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] *************************
2026-02-08 06:15:55.208589 | orchestrator | Sunday 08 February 2026 06:15:42 +0000 (0:00:00.229) 0:24:41.018 *******
2026-02-08 06:15:55.208598 | orchestrator | skipping: [testbed-node-3]
2026-02-08 06:15:55.208608 | orchestrator | skipping: [testbed-node-4]
2026-02-08 06:15:55.208618 | orchestrator |
2026-02-08 06:15:55.208627 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************
2026-02-08 06:15:55.208637 | orchestrator | Sunday 08 February 2026 06:15:43 +0000 (0:00:00.254) 0:24:41.273 *******
2026-02-08 06:15:55.208647 | orchestrator | skipping: [testbed-node-3]
2026-02-08 06:15:55.208657 | orchestrator | skipping: [testbed-node-4]
2026-02-08 06:15:55.208666 | orchestrator |
2026-02-08 06:15:55.208676 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ********************
2026-02-08 06:15:55.208685 | orchestrator | Sunday 08 February 2026 06:15:43 +0000 (0:00:00.218) 0:24:41.491 *******
2026-02-08 06:15:55.208695 | orchestrator | skipping: [testbed-node-3]
2026-02-08 06:15:55.208704 | orchestrator | skipping: [testbed-node-4]
2026-02-08 06:15:55.208714 | orchestrator |
2026-02-08 06:15:55.208724 | orchestrator | TASK [ceph-common : Include selinux.yml] ***************************************
2026-02-08 06:15:55.208733 | orchestrator | Sunday 08 February 2026 06:15:43 +0000 (0:00:00.219) 0:24:41.710 *******
2026-02-08 06:15:55.208743 | orchestrator | skipping: [testbed-node-3]
2026-02-08 06:15:55.208752 | orchestrator | skipping: [testbed-node-4]
2026-02-08 06:15:55.208762 | orchestrator |
2026-02-08 06:15:55.208787 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] ***************
2026-02-08 06:15:55.208798 | orchestrator | Sunday 08 February 2026 06:15:44 +0000 (0:00:00.734) 0:24:42.444 *******
2026-02-08 06:15:55.208807 | orchestrator | ok: [testbed-node-3]
2026-02-08 06:15:55.208817 | orchestrator | ok: [testbed-node-4]
2026-02-08 06:15:55.208827 | orchestrator |
2026-02-08 06:15:55.208836 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ******************************
2026-02-08 06:15:55.208846 | orchestrator | Sunday 08 February 2026 06:15:45 +0000 (0:00:01.060) 0:24:43.505 *******
2026-02-08 06:15:55.208867 | orchestrator | ok: [testbed-node-3]
2026-02-08 06:15:55.208877 | orchestrator | ok: [testbed-node-4]
2026-02-08 06:15:55.208886 | orchestrator |
2026-02-08 06:15:55.208896 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] ***********************
2026-02-08 06:15:55.208906 | orchestrator | Sunday 08 February 2026 06:15:46 +0000 (0:00:01.300) 0:24:44.806 *******
2026-02-08 06:15:55.208915 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-3, testbed-node-4
2026-02-08 06:15:55.208950 | orchestrator |
2026-02-08 06:15:55.208969 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************
2026-02-08 06:15:55.208981 | orchestrator | Sunday 08 February 2026 06:15:47 +0000 (0:00:00.381) 0:24:45.188 *******
2026-02-08 06:15:55.208990 | orchestrator | skipping: [testbed-node-3]
2026-02-08 06:15:55.209000 | orchestrator | skipping: [testbed-node-4]
2026-02-08 06:15:55.209010 | orchestrator | 2026-02-08 06:15:55.209020 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] **************** 2026-02-08 06:15:55.209029 | orchestrator | Sunday 08 February 2026 06:15:47 +0000 (0:00:00.578) 0:24:45.767 ******* 2026-02-08 06:15:55.209062 | orchestrator | skipping: [testbed-node-3] 2026-02-08 06:15:55.209073 | orchestrator | skipping: [testbed-node-4] 2026-02-08 06:15:55.209082 | orchestrator | 2026-02-08 06:15:55.209092 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] ************************** 2026-02-08 06:15:55.209101 | orchestrator | Sunday 08 February 2026 06:15:47 +0000 (0:00:00.231) 0:24:45.998 ******* 2026-02-08 06:15:55.209111 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-02-08 06:15:55.209121 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-02-08 06:15:55.209130 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-02-08 06:15:55.209140 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-02-08 06:15:55.209150 | orchestrator | 2026-02-08 06:15:55.209159 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ******************** 2026-02-08 06:15:55.209169 | orchestrator | Sunday 08 February 2026 06:15:48 +0000 (0:00:00.939) 0:24:46.938 ******* 2026-02-08 06:15:55.209179 | orchestrator | ok: [testbed-node-3] 2026-02-08 06:15:55.209189 | orchestrator | ok: [testbed-node-4] 2026-02-08 06:15:55.209198 | orchestrator | 2026-02-08 06:15:55.209208 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************ 2026-02-08 06:15:55.209217 | orchestrator | Sunday 08 February 2026 06:15:49 +0000 (0:00:00.596) 0:24:47.535 ******* 2026-02-08 06:15:55.209227 | orchestrator | skipping: [testbed-node-3] 2026-02-08 06:15:55.209236 | 
orchestrator | skipping: [testbed-node-4] 2026-02-08 06:15:55.209246 | orchestrator | 2026-02-08 06:15:55.209256 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ******************** 2026-02-08 06:15:55.209265 | orchestrator | Sunday 08 February 2026 06:15:49 +0000 (0:00:00.272) 0:24:47.807 ******* 2026-02-08 06:15:55.209275 | orchestrator | skipping: [testbed-node-3] 2026-02-08 06:15:55.209284 | orchestrator | skipping: [testbed-node-4] 2026-02-08 06:15:55.209294 | orchestrator | 2026-02-08 06:15:55.209304 | orchestrator | TASK [ceph-container-common : Include registry.yml] **************************** 2026-02-08 06:15:55.209313 | orchestrator | Sunday 08 February 2026 06:15:50 +0000 (0:00:00.264) 0:24:48.072 ******* 2026-02-08 06:15:55.209323 | orchestrator | skipping: [testbed-node-3] 2026-02-08 06:15:55.209333 | orchestrator | skipping: [testbed-node-4] 2026-02-08 06:15:55.209342 | orchestrator | 2026-02-08 06:15:55.209352 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] ************************* 2026-02-08 06:15:55.209361 | orchestrator | Sunday 08 February 2026 06:15:50 +0000 (0:00:00.232) 0:24:48.304 ******* 2026-02-08 06:15:55.209371 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-3, testbed-node-4 2026-02-08 06:15:55.209381 | orchestrator | 2026-02-08 06:15:55.209390 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ******************** 2026-02-08 06:15:55.209400 | orchestrator | Sunday 08 February 2026 06:15:51 +0000 (0:00:00.767) 0:24:49.072 ******* 2026-02-08 06:15:55.209417 | orchestrator | ok: [testbed-node-3] 2026-02-08 06:15:55.209426 | orchestrator | ok: [testbed-node-4] 2026-02-08 06:15:55.209436 | orchestrator | 2026-02-08 06:15:55.209446 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] *** 2026-02-08 06:15:55.209455 | orchestrator | Sunday 08 February 2026 
06:15:51 +0000 (0:00:00.775) 0:24:49.847 ******* 2026-02-08 06:15:55.209465 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-02-08 06:15:55.209474 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/prometheus:v2.7.2)  2026-02-08 06:15:55.209484 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/grafana/grafana:6.7.4)  2026-02-08 06:15:55.209494 | orchestrator | skipping: [testbed-node-3] 2026-02-08 06:15:55.209504 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-02-08 06:15:55.209513 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/prometheus:v2.7.2)  2026-02-08 06:15:55.209523 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/grafana/grafana:6.7.4)  2026-02-08 06:15:55.209532 | orchestrator | skipping: [testbed-node-4] 2026-02-08 06:15:55.209542 | orchestrator | 2026-02-08 06:15:55.209552 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] *********** 2026-02-08 06:15:55.209561 | orchestrator | Sunday 08 February 2026 06:15:52 +0000 (0:00:00.245) 0:24:50.093 ******* 2026-02-08 06:15:55.209576 | orchestrator | skipping: [testbed-node-3] 2026-02-08 06:15:55.209587 | orchestrator | skipping: [testbed-node-4] 2026-02-08 06:15:55.209596 | orchestrator | 2026-02-08 06:15:55.209606 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] ********************* 2026-02-08 06:15:55.209615 | orchestrator | Sunday 08 February 2026 06:15:52 +0000 (0:00:00.270) 0:24:50.363 ******* 2026-02-08 06:15:55.209625 | orchestrator | skipping: [testbed-node-3] 2026-02-08 06:15:55.209634 | orchestrator | 2026-02-08 06:15:55.209644 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************ 2026-02-08 06:15:55.209654 | orchestrator | Sunday 08 February 2026 06:15:52 +0000 (0:00:00.183) 0:24:50.546 ******* 2026-02-08 06:15:55.209663 | orchestrator | 
skipping: [testbed-node-3] 2026-02-08 06:15:55.209673 | orchestrator | skipping: [testbed-node-4] 2026-02-08 06:15:55.209683 | orchestrator | 2026-02-08 06:15:55.209692 | orchestrator | TASK [ceph-container-common : Load ceph dev image] ***************************** 2026-02-08 06:15:55.209702 | orchestrator | Sunday 08 February 2026 06:15:52 +0000 (0:00:00.275) 0:24:50.822 ******* 2026-02-08 06:15:55.209712 | orchestrator | skipping: [testbed-node-3] 2026-02-08 06:15:55.209722 | orchestrator | skipping: [testbed-node-4] 2026-02-08 06:15:55.209731 | orchestrator | 2026-02-08 06:15:55.209741 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ****************** 2026-02-08 06:15:55.209751 | orchestrator | Sunday 08 February 2026 06:15:53 +0000 (0:00:00.600) 0:24:51.423 ******* 2026-02-08 06:15:55.209760 | orchestrator | skipping: [testbed-node-3] 2026-02-08 06:15:55.209770 | orchestrator | skipping: [testbed-node-4] 2026-02-08 06:15:55.209780 | orchestrator | 2026-02-08 06:15:55.209789 | orchestrator | TASK [ceph-container-common : Get ceph version] ******************************** 2026-02-08 06:15:55.209799 | orchestrator | Sunday 08 February 2026 06:15:53 +0000 (0:00:00.287) 0:24:51.711 ******* 2026-02-08 06:15:55.209814 | orchestrator | ok: [testbed-node-3] 2026-02-08 06:16:09.974422 | orchestrator | ok: [testbed-node-4] 2026-02-08 06:16:09.974549 | orchestrator | 2026-02-08 06:16:09.974563 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] *** 2026-02-08 06:16:09.974574 | orchestrator | Sunday 08 February 2026 06:15:55 +0000 (0:00:01.532) 0:24:53.243 ******* 2026-02-08 06:16:09.974583 | orchestrator | ok: [testbed-node-3] 2026-02-08 06:16:09.974592 | orchestrator | ok: [testbed-node-4] 2026-02-08 06:16:09.974600 | orchestrator | 2026-02-08 06:16:09.974608 | orchestrator | TASK [ceph-container-common : Include release.yml] ***************************** 2026-02-08 06:16:09.974617 | orchestrator 
| Sunday 08 February 2026 06:15:55 +0000 (0:00:00.286) 0:24:53.530 ******* 2026-02-08 06:16:09.974669 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-3, testbed-node-4 2026-02-08 06:16:09.974681 | orchestrator | 2026-02-08 06:16:09.974690 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] ********************* 2026-02-08 06:16:09.974698 | orchestrator | Sunday 08 February 2026 06:15:55 +0000 (0:00:00.457) 0:24:53.987 ******* 2026-02-08 06:16:09.974706 | orchestrator | skipping: [testbed-node-3] 2026-02-08 06:16:09.974717 | orchestrator | skipping: [testbed-node-4] 2026-02-08 06:16:09.974731 | orchestrator | 2026-02-08 06:16:09.974744 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ******************** 2026-02-08 06:16:09.974757 | orchestrator | Sunday 08 February 2026 06:15:56 +0000 (0:00:00.261) 0:24:54.249 ******* 2026-02-08 06:16:09.974770 | orchestrator | skipping: [testbed-node-3] 2026-02-08 06:16:09.974784 | orchestrator | skipping: [testbed-node-4] 2026-02-08 06:16:09.974796 | orchestrator | 2026-02-08 06:16:09.974810 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ****************** 2026-02-08 06:16:09.974823 | orchestrator | Sunday 08 February 2026 06:15:56 +0000 (0:00:00.629) 0:24:54.878 ******* 2026-02-08 06:16:09.974838 | orchestrator | skipping: [testbed-node-3] 2026-02-08 06:16:09.974851 | orchestrator | skipping: [testbed-node-4] 2026-02-08 06:16:09.974865 | orchestrator | 2026-02-08 06:16:09.974922 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] ********************* 2026-02-08 06:16:09.974959 | orchestrator | Sunday 08 February 2026 06:15:57 +0000 (0:00:00.245) 0:24:55.124 ******* 2026-02-08 06:16:09.974973 | orchestrator | skipping: [testbed-node-3] 2026-02-08 06:16:09.974987 | orchestrator | skipping: [testbed-node-4] 2026-02-08 06:16:09.974998 | orchestrator | 2026-02-08 
06:16:09.975009 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ****************** 2026-02-08 06:16:09.975021 | orchestrator | Sunday 08 February 2026 06:15:57 +0000 (0:00:00.282) 0:24:55.406 ******* 2026-02-08 06:16:09.975035 | orchestrator | skipping: [testbed-node-3] 2026-02-08 06:16:09.975049 | orchestrator | skipping: [testbed-node-4] 2026-02-08 06:16:09.975060 | orchestrator | 2026-02-08 06:16:09.975071 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] ******************* 2026-02-08 06:16:09.975083 | orchestrator | Sunday 08 February 2026 06:15:57 +0000 (0:00:00.266) 0:24:55.672 ******* 2026-02-08 06:16:09.975095 | orchestrator | skipping: [testbed-node-3] 2026-02-08 06:16:09.975108 | orchestrator | skipping: [testbed-node-4] 2026-02-08 06:16:09.975121 | orchestrator | 2026-02-08 06:16:09.975134 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] ******************* 2026-02-08 06:16:09.975148 | orchestrator | Sunday 08 February 2026 06:15:57 +0000 (0:00:00.259) 0:24:55.932 ******* 2026-02-08 06:16:09.975161 | orchestrator | skipping: [testbed-node-3] 2026-02-08 06:16:09.975175 | orchestrator | skipping: [testbed-node-4] 2026-02-08 06:16:09.975189 | orchestrator | 2026-02-08 06:16:09.975202 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ******************** 2026-02-08 06:16:09.975215 | orchestrator | Sunday 08 February 2026 06:15:58 +0000 (0:00:00.288) 0:24:56.221 ******* 2026-02-08 06:16:09.975228 | orchestrator | skipping: [testbed-node-3] 2026-02-08 06:16:09.975242 | orchestrator | skipping: [testbed-node-4] 2026-02-08 06:16:09.975256 | orchestrator | 2026-02-08 06:16:09.975270 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] ********************** 2026-02-08 06:16:09.975284 | orchestrator | Sunday 08 February 2026 06:15:58 +0000 (0:00:00.589) 0:24:56.811 ******* 2026-02-08 06:16:09.975298 | orchestrator | ok: 
[testbed-node-3] 2026-02-08 06:16:09.975312 | orchestrator | ok: [testbed-node-4] 2026-02-08 06:16:09.975321 | orchestrator | 2026-02-08 06:16:09.975329 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] ********************** 2026-02-08 06:16:09.975337 | orchestrator | Sunday 08 February 2026 06:15:59 +0000 (0:00:00.407) 0:24:57.219 ******* 2026-02-08 06:16:09.975361 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-3, testbed-node-4 2026-02-08 06:16:09.975375 | orchestrator | 2026-02-08 06:16:09.975388 | orchestrator | TASK [ceph-config : Create ceph initial directories] *************************** 2026-02-08 06:16:09.975415 | orchestrator | Sunday 08 February 2026 06:15:59 +0000 (0:00:00.406) 0:24:57.625 ******* 2026-02-08 06:16:09.975428 | orchestrator | ok: [testbed-node-3] => (item=/etc/ceph) 2026-02-08 06:16:09.975442 | orchestrator | ok: [testbed-node-4] => (item=/etc/ceph) 2026-02-08 06:16:09.975456 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/) 2026-02-08 06:16:09.975470 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/) 2026-02-08 06:16:09.975484 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/mon) 2026-02-08 06:16:09.975497 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/mon) 2026-02-08 06:16:09.975509 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/osd) 2026-02-08 06:16:09.975517 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/osd) 2026-02-08 06:16:09.975525 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/mds) 2026-02-08 06:16:09.975536 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/mds) 2026-02-08 06:16:09.975550 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/tmp) 2026-02-08 06:16:09.975563 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/tmp) 2026-02-08 06:16:09.975575 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/crash) 
2026-02-08 06:16:09.975588 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/crash) 2026-02-08 06:16:09.975601 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/radosgw) 2026-02-08 06:16:09.975614 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/radosgw) 2026-02-08 06:16:09.975649 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rgw) 2026-02-08 06:16:09.975662 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rgw) 2026-02-08 06:16:09.975674 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mgr) 2026-02-08 06:16:09.975687 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mgr) 2026-02-08 06:16:09.975701 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds) 2026-02-08 06:16:09.975714 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds) 2026-02-08 06:16:09.975726 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd) 2026-02-08 06:16:09.975738 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd) 2026-02-08 06:16:09.975751 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd) 2026-02-08 06:16:09.975764 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd) 2026-02-08 06:16:09.975781 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-02-08 06:16:09.975794 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-02-08 06:16:09.975807 | orchestrator | ok: [testbed-node-3] => (item=/var/run/ceph) 2026-02-08 06:16:09.975821 | orchestrator | ok: [testbed-node-3] => (item=/var/log/ceph) 2026-02-08 06:16:09.975835 | orchestrator | ok: [testbed-node-4] => (item=/var/run/ceph) 2026-02-08 06:16:09.975848 | orchestrator | ok: [testbed-node-4] => (item=/var/log/ceph) 2026-02-08 06:16:09.975861 | orchestrator | 2026-02-08 06:16:09.975875 | orchestrator | TASK 
[ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************ 2026-02-08 06:16:09.975889 | orchestrator | Sunday 08 February 2026 06:16:05 +0000 (0:00:05.760) 0:25:03.386 ******* 2026-02-08 06:16:09.975902 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-3, testbed-node-4 2026-02-08 06:16:09.975913 | orchestrator | 2026-02-08 06:16:09.975922 | orchestrator | TASK [ceph-config : Create rados gateway instance directories] ***************** 2026-02-08 06:16:09.975969 | orchestrator | Sunday 08 February 2026 06:16:06 +0000 (0:00:00.760) 0:25:04.146 ******* 2026-02-08 06:16:09.975981 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-02-08 06:16:09.975990 | orchestrator | ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-02-08 06:16:09.975999 | orchestrator | 2026-02-08 06:16:09.976017 | orchestrator | TASK [ceph-config : Generate environment file] ********************************* 2026-02-08 06:16:09.976025 | orchestrator | Sunday 08 February 2026 06:16:06 +0000 (0:00:00.624) 0:25:04.771 ******* 2026-02-08 06:16:09.976033 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-02-08 06:16:09.976105 | orchestrator | ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-02-08 06:16:09.976114 | orchestrator | 2026-02-08 06:16:09.976122 | orchestrator | TASK [ceph-config : Reset num_osds] ******************************************** 2026-02-08 06:16:09.976135 | orchestrator | Sunday 08 February 2026 06:16:07 +0000 (0:00:01.091) 0:25:05.863 ******* 2026-02-08 06:16:09.976148 | orchestrator | skipping: [testbed-node-3] 2026-02-08 06:16:09.976161 | orchestrator | 
skipping: [testbed-node-4] 2026-02-08 06:16:09.976174 | orchestrator | 2026-02-08 06:16:09.976187 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] ********************* 2026-02-08 06:16:09.976202 | orchestrator | Sunday 08 February 2026 06:16:08 +0000 (0:00:00.247) 0:25:06.111 ******* 2026-02-08 06:16:09.976216 | orchestrator | skipping: [testbed-node-3] 2026-02-08 06:16:09.976231 | orchestrator | skipping: [testbed-node-4] 2026-02-08 06:16:09.976244 | orchestrator | 2026-02-08 06:16:09.976256 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ****************** 2026-02-08 06:16:09.976265 | orchestrator | Sunday 08 February 2026 06:16:08 +0000 (0:00:00.260) 0:25:06.371 ******* 2026-02-08 06:16:09.976273 | orchestrator | skipping: [testbed-node-3] 2026-02-08 06:16:09.976281 | orchestrator | skipping: [testbed-node-4] 2026-02-08 06:16:09.976288 | orchestrator | 2026-02-08 06:16:09.976304 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] ********************************* 2026-02-08 06:16:09.976312 | orchestrator | Sunday 08 February 2026 06:16:08 +0000 (0:00:00.248) 0:25:06.619 ******* 2026-02-08 06:16:09.976320 | orchestrator | skipping: [testbed-node-3] 2026-02-08 06:16:09.976328 | orchestrator | skipping: [testbed-node-4] 2026-02-08 06:16:09.976336 | orchestrator | 2026-02-08 06:16:09.976344 | orchestrator | TASK [ceph-config : Set_fact _devices] ***************************************** 2026-02-08 06:16:09.976352 | orchestrator | Sunday 08 February 2026 06:16:08 +0000 (0:00:00.246) 0:25:06.866 ******* 2026-02-08 06:16:09.976360 | orchestrator | skipping: [testbed-node-3] 2026-02-08 06:16:09.976368 | orchestrator | skipping: [testbed-node-4] 2026-02-08 06:16:09.976376 | orchestrator | 2026-02-08 06:16:09.976383 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2026-02-08 06:16:09.976392 | orchestrator | Sunday 08 February 2026 
06:16:09 +0000 (0:00:00.640) 0:25:07.506 ******* 2026-02-08 06:16:09.976400 | orchestrator | skipping: [testbed-node-3] 2026-02-08 06:16:09.976408 | orchestrator | skipping: [testbed-node-4] 2026-02-08 06:16:09.976416 | orchestrator | 2026-02-08 06:16:09.976424 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2026-02-08 06:16:09.976432 | orchestrator | Sunday 08 February 2026 06:16:09 +0000 (0:00:00.250) 0:25:07.757 ******* 2026-02-08 06:16:09.976440 | orchestrator | skipping: [testbed-node-3] 2026-02-08 06:16:09.976447 | orchestrator | skipping: [testbed-node-4] 2026-02-08 06:16:09.976457 | orchestrator | 2026-02-08 06:16:09.976470 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2026-02-08 06:16:09.976504 | orchestrator | Sunday 08 February 2026 06:16:09 +0000 (0:00:00.248) 0:25:08.005 ******* 2026-02-08 06:16:30.424303 | orchestrator | skipping: [testbed-node-3] 2026-02-08 06:16:30.424401 | orchestrator | skipping: [testbed-node-4] 2026-02-08 06:16:30.424413 | orchestrator | 2026-02-08 06:16:30.424424 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] *** 2026-02-08 06:16:30.424433 | orchestrator | Sunday 08 February 2026 06:16:10 +0000 (0:00:00.256) 0:25:08.262 ******* 2026-02-08 06:16:30.424442 | orchestrator | skipping: [testbed-node-3] 2026-02-08 06:16:30.424450 | orchestrator | skipping: [testbed-node-4] 2026-02-08 06:16:30.424478 | orchestrator | 2026-02-08 06:16:30.424486 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] ********************* 2026-02-08 06:16:30.424495 | orchestrator | Sunday 08 February 2026 06:16:10 +0000 (0:00:00.230) 0:25:08.492 ******* 2026-02-08 06:16:30.424502 | orchestrator | skipping: [testbed-node-3] 2026-02-08 06:16:30.424510 | orchestrator | skipping: [testbed-node-4] 2026-02-08 
06:16:30.424518 | orchestrator | 2026-02-08 06:16:30.424526 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] ******************************* 2026-02-08 06:16:30.424534 | orchestrator | Sunday 08 February 2026 06:16:10 +0000 (0:00:00.257) 0:25:08.750 ******* 2026-02-08 06:16:30.424543 | orchestrator | skipping: [testbed-node-3] 2026-02-08 06:16:30.424551 | orchestrator | skipping: [testbed-node-4] 2026-02-08 06:16:30.424560 | orchestrator | 2026-02-08 06:16:30.424568 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] ************** 2026-02-08 06:16:30.424577 | orchestrator | Sunday 08 February 2026 06:16:11 +0000 (0:00:00.570) 0:25:09.321 ******* 2026-02-08 06:16:30.424586 | orchestrator | changed: [testbed-node-3 -> testbed-node-2(192.168.16.12)] 2026-02-08 06:16:30.424595 | orchestrator | changed: [testbed-node-4 -> testbed-node-2(192.168.16.12)] 2026-02-08 06:16:30.424603 | orchestrator | 2026-02-08 06:16:30.424612 | orchestrator | TASK [ceph-config : Render rgw configs] **************************************** 2026-02-08 06:16:30.424621 | orchestrator | Sunday 08 February 2026 06:16:14 +0000 (0:00:03.445) 0:25:12.766 ******* 2026-02-08 06:16:30.424629 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-02-08 06:16:30.424640 | orchestrator | ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-02-08 06:16:30.424648 | orchestrator | 2026-02-08 06:16:30.424657 | orchestrator | TASK [ceph-config : Set config to cluster] ************************************* 2026-02-08 06:16:30.424665 | orchestrator | Sunday 08 February 2026 06:16:15 +0000 (0:00:00.301) 0:25:13.068 ******* 2026-02-08 06:16:30.424676 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': 
'/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log'}]) 2026-02-08 06:16:30.424687 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log'}]) 2026-02-08 06:16:30.424697 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.13:8081'}]) 2026-02-08 06:16:30.424719 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.14:8081'}]) 2026-02-08 06:16:30.424728 | orchestrator | 2026-02-08 06:16:30.424748 | orchestrator | TASK [ceph-config : Set rgw configs to file] *********************************** 2026-02-08 06:16:30.424757 | orchestrator | Sunday 08 February 2026 06:16:18 +0000 (0:00:03.854) 0:25:16.922 ******* 2026-02-08 06:16:30.424766 | orchestrator | skipping: [testbed-node-3] 2026-02-08 06:16:30.424775 | orchestrator | skipping: [testbed-node-4] 2026-02-08 06:16:30.424783 | orchestrator | 2026-02-08 06:16:30.424792 | orchestrator | TASK [ceph-config : Create ceph conf directory] ******************************** 2026-02-08 06:16:30.424801 | orchestrator | Sunday 08 February 2026 06:16:19 +0000 
(0:00:00.279) 0:25:17.202 ******* 2026-02-08 06:16:30.424816 | orchestrator | skipping: [testbed-node-3] 2026-02-08 06:16:30.424825 | orchestrator | skipping: [testbed-node-4] 2026-02-08 06:16:30.424834 | orchestrator | 2026-02-08 06:16:30.424843 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-02-08 06:16:30.424853 | orchestrator | Sunday 08 February 2026 06:16:19 +0000 (0:00:00.216) 0:25:17.418 ******* 2026-02-08 06:16:30.424863 | orchestrator | skipping: [testbed-node-3] 2026-02-08 06:16:30.424873 | orchestrator | skipping: [testbed-node-4] 2026-02-08 06:16:30.424883 | orchestrator | 2026-02-08 06:16:30.424893 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-02-08 06:16:30.424903 | orchestrator | Sunday 08 February 2026 06:16:19 +0000 (0:00:00.278) 0:25:17.697 ******* 2026-02-08 06:16:30.424913 | orchestrator | skipping: [testbed-node-3] 2026-02-08 06:16:30.424923 | orchestrator | skipping: [testbed-node-4] 2026-02-08 06:16:30.424934 | orchestrator | 2026-02-08 06:16:30.425027 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-02-08 06:16:30.425039 | orchestrator | Sunday 08 February 2026 06:16:20 +0000 (0:00:00.613) 0:25:18.310 ******* 2026-02-08 06:16:30.425050 | orchestrator | skipping: [testbed-node-3] 2026-02-08 06:16:30.425061 | orchestrator | skipping: [testbed-node-4] 2026-02-08 06:16:30.425072 | orchestrator | 2026-02-08 06:16:30.425082 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-02-08 06:16:30.425092 | orchestrator | Sunday 08 February 2026 06:16:20 +0000 (0:00:00.264) 0:25:18.575 ******* 2026-02-08 06:16:30.425102 | orchestrator | ok: [testbed-node-3] 2026-02-08 06:16:30.425113 | orchestrator | ok: [testbed-node-4] 2026-02-08 06:16:30.425123 | orchestrator | 2026-02-08 
06:16:30.425133 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-02-08 06:16:30.425143 | orchestrator | Sunday 08 February 2026 06:16:20 +0000 (0:00:00.358) 0:25:18.934 ******* 2026-02-08 06:16:30.425154 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-02-08 06:16:30.425164 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-02-08 06:16:30.425175 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-02-08 06:16:30.425185 | orchestrator | skipping: [testbed-node-3] 2026-02-08 06:16:30.425195 | orchestrator | 2026-02-08 06:16:30.425205 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-02-08 06:16:30.425215 | orchestrator | Sunday 08 February 2026 06:16:21 +0000 (0:00:00.454) 0:25:19.388 ******* 2026-02-08 06:16:30.425224 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-02-08 06:16:30.425232 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-02-08 06:16:30.425241 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-02-08 06:16:30.425250 | orchestrator | skipping: [testbed-node-3] 2026-02-08 06:16:30.425258 | orchestrator | 2026-02-08 06:16:30.425267 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-02-08 06:16:30.425275 | orchestrator | Sunday 08 February 2026 06:16:21 +0000 (0:00:00.435) 0:25:19.824 ******* 2026-02-08 06:16:30.425284 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-02-08 06:16:30.425292 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-02-08 06:16:30.425301 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-02-08 06:16:30.425309 | orchestrator | skipping: [testbed-node-3] 2026-02-08 06:16:30.425318 | orchestrator | 2026-02-08 06:16:30.425326 | orchestrator | TASK [ceph-facts : 
Reset rgw_instances (workaround)] *************************** 2026-02-08 06:16:30.425335 | orchestrator | Sunday 08 February 2026 06:16:22 +0000 (0:00:00.473) 0:25:20.298 ******* 2026-02-08 06:16:30.425343 | orchestrator | ok: [testbed-node-3] 2026-02-08 06:16:30.425352 | orchestrator | ok: [testbed-node-4] 2026-02-08 06:16:30.425360 | orchestrator | 2026-02-08 06:16:30.425369 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-02-08 06:16:30.425384 | orchestrator | Sunday 08 February 2026 06:16:22 +0000 (0:00:00.278) 0:25:20.577 ******* 2026-02-08 06:16:30.425395 | orchestrator | ok: [testbed-node-3] => (item=0) 2026-02-08 06:16:30.425409 | orchestrator | ok: [testbed-node-4] => (item=0) 2026-02-08 06:16:30.425422 | orchestrator | 2026-02-08 06:16:30.425431 | orchestrator | TASK [ceph-config : Generate Ceph file] **************************************** 2026-02-08 06:16:30.425439 | orchestrator | Sunday 08 February 2026 06:16:23 +0000 (0:00:01.299) 0:25:21.876 ******* 2026-02-08 06:16:30.425448 | orchestrator | ok: [testbed-node-3] 2026-02-08 06:16:30.425457 | orchestrator | ok: [testbed-node-4] 2026-02-08 06:16:30.425465 | orchestrator | 2026-02-08 06:16:30.425474 | orchestrator | TASK [ceph-mds : Include create_mds_filesystems.yml] *************************** 2026-02-08 06:16:30.425482 | orchestrator | Sunday 08 February 2026 06:16:24 +0000 (0:00:01.011) 0:25:22.888 ******* 2026-02-08 06:16:30.425491 | orchestrator | skipping: [testbed-node-3] 2026-02-08 06:16:30.425499 | orchestrator | skipping: [testbed-node-4] 2026-02-08 06:16:30.425508 | orchestrator | 2026-02-08 06:16:30.425517 | orchestrator | TASK [ceph-mds : Include common.yml] ******************************************* 2026-02-08 06:16:30.425525 | orchestrator | Sunday 08 February 2026 06:16:25 +0000 (0:00:00.228) 0:25:23.117 ******* 2026-02-08 06:16:30.425534 | orchestrator | included: /ansible/roles/ceph-mds/tasks/common.yml for testbed-node-3, 
testbed-node-4 2026-02-08 06:16:30.425543 | orchestrator | 2026-02-08 06:16:30.425557 | orchestrator | TASK [ceph-mds : Create bootstrap-mds and mds directories] ********************* 2026-02-08 06:16:30.425566 | orchestrator | Sunday 08 February 2026 06:16:25 +0000 (0:00:00.406) 0:25:23.524 ******* 2026-02-08 06:16:30.425574 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds/) 2026-02-08 06:16:30.425583 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds/) 2026-02-08 06:16:30.425591 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/mds/ceph-testbed-node-3) 2026-02-08 06:16:30.425600 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/mds/ceph-testbed-node-4) 2026-02-08 06:16:30.425608 | orchestrator | 2026-02-08 06:16:30.425617 | orchestrator | TASK [ceph-mds : Get keys from monitors] *************************************** 2026-02-08 06:16:30.425625 | orchestrator | Sunday 08 February 2026 06:16:26 +0000 (0:00:00.919) 0:25:24.443 ******* 2026-02-08 06:16:30.425634 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-08 06:16:30.425642 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-02-08 06:16:30.425651 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2026-02-08 06:16:30.425660 | orchestrator | 2026-02-08 06:16:30.425668 | orchestrator | TASK [ceph-mds : Copy ceph key(s) if needed] *********************************** 2026-02-08 06:16:30.425677 | orchestrator | Sunday 08 February 2026 06:16:29 +0000 (0:00:02.998) 0:25:27.441 ******* 2026-02-08 06:16:30.425685 | orchestrator | ok: [testbed-node-3] => (item=None) 2026-02-08 06:16:30.425694 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-02-08 06:16:30.425703 | orchestrator | ok: [testbed-node-3] 2026-02-08 06:16:30.425712 | orchestrator | ok: [testbed-node-4] => (item=None) 2026-02-08 06:16:30.425720 | orchestrator | skipping: [testbed-node-4] => 
(item=None)  2026-02-08 06:16:30.425742 | orchestrator | ok: [testbed-node-4] 2026-02-08 06:16:48.941142 | orchestrator | 2026-02-08 06:16:48.941281 | orchestrator | TASK [ceph-mds : Create mds keyring] ******************************************* 2026-02-08 06:16:48.941306 | orchestrator | Sunday 08 February 2026 06:16:30 +0000 (0:00:01.011) 0:25:28.453 ******* 2026-02-08 06:16:48.941323 | orchestrator | ok: [testbed-node-3] 2026-02-08 06:16:48.941361 | orchestrator | ok: [testbed-node-4] 2026-02-08 06:16:48.941379 | orchestrator | 2026-02-08 06:16:48.941396 | orchestrator | TASK [ceph-mds : Non_containerized.yml] **************************************** 2026-02-08 06:16:48.941412 | orchestrator | Sunday 08 February 2026 06:16:31 +0000 (0:00:00.644) 0:25:29.097 ******* 2026-02-08 06:16:48.941429 | orchestrator | skipping: [testbed-node-3] 2026-02-08 06:16:48.941446 | orchestrator | skipping: [testbed-node-4] 2026-02-08 06:16:48.941463 | orchestrator | 2026-02-08 06:16:48.941480 | orchestrator | TASK [ceph-mds : Containerized.yml] ******************************************** 2026-02-08 06:16:48.941528 | orchestrator | Sunday 08 February 2026 06:16:31 +0000 (0:00:00.248) 0:25:29.345 ******* 2026-02-08 06:16:48.941545 | orchestrator | included: /ansible/roles/ceph-mds/tasks/containerized.yml for testbed-node-3, testbed-node-4 2026-02-08 06:16:48.941563 | orchestrator | 2026-02-08 06:16:48.941579 | orchestrator | TASK [ceph-mds : Include_tasks systemd.yml] ************************************ 2026-02-08 06:16:48.941595 | orchestrator | Sunday 08 February 2026 06:16:31 +0000 (0:00:00.394) 0:25:29.740 ******* 2026-02-08 06:16:48.941611 | orchestrator | included: /ansible/roles/ceph-mds/tasks/systemd.yml for testbed-node-3, testbed-node-4 2026-02-08 06:16:48.941627 | orchestrator | 2026-02-08 06:16:48.941643 | orchestrator | TASK [ceph-mds : Generate systemd unit file] *********************************** 2026-02-08 06:16:48.941660 | orchestrator | Sunday 08 February 2026 
06:16:32 +0000 (0:00:00.706) 0:25:30.447 ******* 2026-02-08 06:16:48.941677 | orchestrator | ok: [testbed-node-3] 2026-02-08 06:16:48.941695 | orchestrator | ok: [testbed-node-4] 2026-02-08 06:16:48.941711 | orchestrator | 2026-02-08 06:16:48.941728 | orchestrator | TASK [ceph-mds : Generate systemd ceph-mds target file] ************************ 2026-02-08 06:16:48.941742 | orchestrator | Sunday 08 February 2026 06:16:33 +0000 (0:00:01.114) 0:25:31.561 ******* 2026-02-08 06:16:48.941754 | orchestrator | ok: [testbed-node-3] 2026-02-08 06:16:48.941765 | orchestrator | ok: [testbed-node-4] 2026-02-08 06:16:48.941776 | orchestrator | 2026-02-08 06:16:48.941787 | orchestrator | TASK [ceph-mds : Enable ceph-mds.target] *************************************** 2026-02-08 06:16:48.941799 | orchestrator | Sunday 08 February 2026 06:16:34 +0000 (0:00:01.093) 0:25:32.655 ******* 2026-02-08 06:16:48.941810 | orchestrator | ok: [testbed-node-3] 2026-02-08 06:16:48.941821 | orchestrator | ok: [testbed-node-4] 2026-02-08 06:16:48.941832 | orchestrator | 2026-02-08 06:16:48.941844 | orchestrator | TASK [ceph-mds : Systemd start mds container] ********************************** 2026-02-08 06:16:48.941855 | orchestrator | Sunday 08 February 2026 06:16:35 +0000 (0:00:01.296) 0:25:33.951 ******* 2026-02-08 06:16:48.941866 | orchestrator | changed: [testbed-node-3] 2026-02-08 06:16:48.941877 | orchestrator | changed: [testbed-node-4] 2026-02-08 06:16:48.941889 | orchestrator | 2026-02-08 06:16:48.941900 | orchestrator | TASK [ceph-mds : Wait for mds socket to exist] ********************************* 2026-02-08 06:16:48.941912 | orchestrator | Sunday 08 February 2026 06:16:38 +0000 (0:00:02.376) 0:25:36.328 ******* 2026-02-08 06:16:48.941923 | orchestrator | ok: [testbed-node-3] 2026-02-08 06:16:48.941935 | orchestrator | ok: [testbed-node-4] 2026-02-08 06:16:48.941973 | orchestrator | 2026-02-08 06:16:48.941984 | orchestrator | TASK [Set max_mds] 
************************************************************* 2026-02-08 06:16:48.941993 | orchestrator | Sunday 08 February 2026 06:16:39 +0000 (0:00:01.329) 0:25:37.657 ******* 2026-02-08 06:16:48.942003 | orchestrator | skipping: [testbed-node-3] 2026-02-08 06:16:48.942013 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] 2026-02-08 06:16:48.942090 | orchestrator | 2026-02-08 06:16:48.942100 | orchestrator | PLAY [Upgrade ceph rgws cluster] *********************************************** 2026-02-08 06:16:48.942110 | orchestrator | 2026-02-08 06:16:48.942120 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-02-08 06:16:48.942129 | orchestrator | Sunday 08 February 2026 06:16:42 +0000 (0:00:02.745) 0:25:40.402 ******* 2026-02-08 06:16:48.942139 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3 2026-02-08 06:16:48.942148 | orchestrator | 2026-02-08 06:16:48.942158 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-02-08 06:16:48.942167 | orchestrator | Sunday 08 February 2026 06:16:42 +0000 (0:00:00.252) 0:25:40.654 ******* 2026-02-08 06:16:48.942191 | orchestrator | ok: [testbed-node-3] 2026-02-08 06:16:48.942201 | orchestrator | 2026-02-08 06:16:48.942211 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-02-08 06:16:48.942220 | orchestrator | Sunday 08 February 2026 06:16:43 +0000 (0:00:00.450) 0:25:41.105 ******* 2026-02-08 06:16:48.942230 | orchestrator | ok: [testbed-node-3] 2026-02-08 06:16:48.942254 | orchestrator | 2026-02-08 06:16:48.942264 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-02-08 06:16:48.942274 | orchestrator | Sunday 08 February 2026 06:16:43 +0000 (0:00:00.131) 0:25:41.237 ******* 2026-02-08 06:16:48.942283 | orchestrator | ok: [testbed-node-3] 2026-02-08 06:16:48.942293 | 
orchestrator | 2026-02-08 06:16:48.942302 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-02-08 06:16:48.942311 | orchestrator | Sunday 08 February 2026 06:16:43 +0000 (0:00:00.463) 0:25:41.700 ******* 2026-02-08 06:16:48.942321 | orchestrator | ok: [testbed-node-3] 2026-02-08 06:16:48.942330 | orchestrator | 2026-02-08 06:16:48.942339 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-02-08 06:16:48.942349 | orchestrator | Sunday 08 February 2026 06:16:43 +0000 (0:00:00.142) 0:25:41.842 ******* 2026-02-08 06:16:48.942358 | orchestrator | ok: [testbed-node-3] 2026-02-08 06:16:48.942368 | orchestrator | 2026-02-08 06:16:48.942377 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-02-08 06:16:48.942387 | orchestrator | Sunday 08 February 2026 06:16:43 +0000 (0:00:00.153) 0:25:41.996 ******* 2026-02-08 06:16:48.942396 | orchestrator | ok: [testbed-node-3] 2026-02-08 06:16:48.942407 | orchestrator | 2026-02-08 06:16:48.942416 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-02-08 06:16:48.942426 | orchestrator | Sunday 08 February 2026 06:16:44 +0000 (0:00:00.153) 0:25:42.149 ******* 2026-02-08 06:16:48.942457 | orchestrator | skipping: [testbed-node-3] 2026-02-08 06:16:48.942468 | orchestrator | 2026-02-08 06:16:48.942478 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2026-02-08 06:16:48.942487 | orchestrator | Sunday 08 February 2026 06:16:44 +0000 (0:00:00.164) 0:25:42.314 ******* 2026-02-08 06:16:48.942497 | orchestrator | ok: [testbed-node-3] 2026-02-08 06:16:48.942506 | orchestrator | 2026-02-08 06:16:48.942516 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2026-02-08 06:16:48.942526 | orchestrator | Sunday 08 February 2026 06:16:44 +0000 
(0:00:00.483) 0:25:42.797 ******* 2026-02-08 06:16:48.942536 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-08 06:16:48.942545 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-08 06:16:48.942555 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-08 06:16:48.942565 | orchestrator | 2026-02-08 06:16:48.942574 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2026-02-08 06:16:48.942584 | orchestrator | Sunday 08 February 2026 06:16:45 +0000 (0:00:00.716) 0:25:43.514 ******* 2026-02-08 06:16:48.942593 | orchestrator | ok: [testbed-node-3] 2026-02-08 06:16:48.942603 | orchestrator | 2026-02-08 06:16:48.942612 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-02-08 06:16:48.942622 | orchestrator | Sunday 08 February 2026 06:16:45 +0000 (0:00:00.271) 0:25:43.785 ******* 2026-02-08 06:16:48.942631 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-08 06:16:48.942641 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-08 06:16:48.942650 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-08 06:16:48.942660 | orchestrator | 2026-02-08 06:16:48.942669 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-02-08 06:16:48.942679 | orchestrator | Sunday 08 February 2026 06:16:47 +0000 (0:00:01.916) 0:25:45.701 ******* 2026-02-08 06:16:48.942688 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2026-02-08 06:16:48.942699 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2026-02-08 06:16:48.942708 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2026-02-08 
06:16:48.942718 | orchestrator | skipping: [testbed-node-3] 2026-02-08 06:16:48.942727 | orchestrator | 2026-02-08 06:16:48.942737 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-02-08 06:16:48.942753 | orchestrator | Sunday 08 February 2026 06:16:48 +0000 (0:00:00.445) 0:25:46.146 ******* 2026-02-08 06:16:48.942765 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-02-08 06:16:48.942778 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-02-08 06:16:48.942788 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-02-08 06:16:48.942798 | orchestrator | skipping: [testbed-node-3] 2026-02-08 06:16:48.942808 | orchestrator | 2026-02-08 06:16:48.942817 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-02-08 06:16:48.942827 | orchestrator | Sunday 08 February 2026 06:16:48 +0000 (0:00:00.656) 0:25:46.803 ******* 2026-02-08 06:16:48.942843 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-08 
06:16:48.942856 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-08 06:16:48.942867 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-08 06:16:48.942877 | orchestrator | skipping: [testbed-node-3] 2026-02-08 06:16:48.942886 | orchestrator | 2026-02-08 06:16:48.942903 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-02-08 06:16:53.372145 | orchestrator | Sunday 08 February 2026 06:16:48 +0000 (0:00:00.171) 0:25:46.975 ******* 2026-02-08 06:16:53.372248 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': 'd0204e5b0336', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-02-08 06:16:46.314932', 'end': '2026-02-08 06:16:46.373755', 'delta': '0:00:00.058823', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['d0204e5b0336'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 
'testbed-node-0', 'ansible_loop_var': 'item'}) 2026-02-08 06:16:53.372269 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': 'c9ff2bec9773', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-02-08 06:16:46.904783', 'end': '2026-02-08 06:16:46.951325', 'delta': '0:00:00.046542', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['c9ff2bec9773'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2026-02-08 06:16:53.372307 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '26deff989c40', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-02-08 06:16:47.468357', 'end': '2026-02-08 06:16:47.513073', 'delta': '0:00:00.044716', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['26deff989c40'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2026-02-08 06:16:53.372320 | orchestrator | 2026-02-08 06:16:53.372333 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-02-08 06:16:53.372345 | orchestrator | Sunday 08 February 2026 06:16:49 +0000 (0:00:00.212) 0:25:47.187 ******* 2026-02-08 06:16:53.372360 | orchestrator | ok: [testbed-node-3] 2026-02-08 
06:16:53.372380 | orchestrator | 2026-02-08 06:16:53.372393 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-02-08 06:16:53.372404 | orchestrator | Sunday 08 February 2026 06:16:49 +0000 (0:00:00.264) 0:25:47.452 ******* 2026-02-08 06:16:53.372414 | orchestrator | skipping: [testbed-node-3] 2026-02-08 06:16:53.372426 | orchestrator | 2026-02-08 06:16:53.372437 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2026-02-08 06:16:53.372462 | orchestrator | Sunday 08 February 2026 06:16:49 +0000 (0:00:00.260) 0:25:47.713 ******* 2026-02-08 06:16:53.372473 | orchestrator | ok: [testbed-node-3] 2026-02-08 06:16:53.372484 | orchestrator | 2026-02-08 06:16:53.372495 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2026-02-08 06:16:53.372505 | orchestrator | Sunday 08 February 2026 06:16:49 +0000 (0:00:00.168) 0:25:47.881 ******* 2026-02-08 06:16:53.372516 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-02-08 06:16:53.372527 | orchestrator | 2026-02-08 06:16:53.372538 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-02-08 06:16:53.372548 | orchestrator | Sunday 08 February 2026 06:16:50 +0000 (0:00:00.975) 0:25:48.856 ******* 2026-02-08 06:16:53.372561 | orchestrator | ok: [testbed-node-3] 2026-02-08 06:16:53.372574 | orchestrator | 2026-02-08 06:16:53.372586 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-02-08 06:16:53.372598 | orchestrator | Sunday 08 February 2026 06:16:50 +0000 (0:00:00.157) 0:25:49.013 ******* 2026-02-08 06:16:53.372611 | orchestrator | skipping: [testbed-node-3] 2026-02-08 06:16:53.372623 | orchestrator | 2026-02-08 06:16:53.372635 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-02-08 06:16:53.372648 | orchestrator 
| Sunday 08 February 2026 06:16:51 +0000 (0:00:00.130) 0:25:49.144 ******* 2026-02-08 06:16:53.372660 | orchestrator | skipping: [testbed-node-3] 2026-02-08 06:16:53.372672 | orchestrator | 2026-02-08 06:16:53.372684 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-02-08 06:16:53.372697 | orchestrator | Sunday 08 February 2026 06:16:52 +0000 (0:00:00.978) 0:25:50.123 ******* 2026-02-08 06:16:53.372710 | orchestrator | skipping: [testbed-node-3] 2026-02-08 06:16:53.372722 | orchestrator | 2026-02-08 06:16:53.372735 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-02-08 06:16:53.372767 | orchestrator | Sunday 08 February 2026 06:16:52 +0000 (0:00:00.138) 0:25:50.261 ******* 2026-02-08 06:16:53.372781 | orchestrator | skipping: [testbed-node-3] 2026-02-08 06:16:53.372812 | orchestrator | 2026-02-08 06:16:53.372824 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-02-08 06:16:53.372837 | orchestrator | Sunday 08 February 2026 06:16:52 +0000 (0:00:00.136) 0:25:50.398 ******* 2026-02-08 06:16:53.372850 | orchestrator | ok: [testbed-node-3] 2026-02-08 06:16:53.372863 | orchestrator | 2026-02-08 06:16:53.372875 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-02-08 06:16:53.372889 | orchestrator | Sunday 08 February 2026 06:16:52 +0000 (0:00:00.160) 0:25:50.559 ******* 2026-02-08 06:16:53.372902 | orchestrator | skipping: [testbed-node-3] 2026-02-08 06:16:53.372913 | orchestrator | 2026-02-08 06:16:53.372924 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-02-08 06:16:53.372935 | orchestrator | Sunday 08 February 2026 06:16:52 +0000 (0:00:00.116) 0:25:50.675 ******* 2026-02-08 06:16:53.373012 | orchestrator | ok: [testbed-node-3] 2026-02-08 06:16:53.373025 | orchestrator | 2026-02-08 06:16:53.373036 | 
orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-02-08 06:16:53.373047 | orchestrator | Sunday 08 February 2026 06:16:52 +0000 (0:00:00.168) 0:25:50.843 ******* 2026-02-08 06:16:53.373057 | orchestrator | skipping: [testbed-node-3] 2026-02-08 06:16:53.373068 | orchestrator | 2026-02-08 06:16:53.373079 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-02-08 06:16:53.373091 | orchestrator | Sunday 08 February 2026 06:16:52 +0000 (0:00:00.119) 0:25:50.963 ******* 2026-02-08 06:16:53.373101 | orchestrator | ok: [testbed-node-3] 2026-02-08 06:16:53.373112 | orchestrator | 2026-02-08 06:16:53.373123 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-02-08 06:16:53.373134 | orchestrator | Sunday 08 February 2026 06:16:53 +0000 (0:00:00.235) 0:25:51.198 ******* 2026-02-08 06:16:53.373146 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-08 06:16:53.373160 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--edf9913e--48af--595a--836b--515c584cb757-osd--block--edf9913e--48af--595a--836b--515c584cb757', 'dm-uuid-LVM-L5RzS25dNAwEfaxQtT2dCZejWp1FAHTXCd6gt7yIXasPp3uR45a3LLQBdDhLIQVH'], 'uuids': ['f792a4bc-313c-4001-af18-36958a398c99'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'f64e84f9', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 
'', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['Cd6gt7-yIXa-sPp3-uR45-a3LL-QBdD-hLIQVH']}})  2026-02-08 06:16:53.373177 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1b3b2ead-9b22-4b4d-a30d-f81b3b57c055', 'scsi-SQEMU_QEMU_HARDDISK_1b3b2ead-9b22-4b4d-a30d-f81b3b57c055'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '1b3b2ead', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-02-08 06:16:53.373190 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-a30P2b-igV6-wMzf-UnqL-xNhF-mfyy-hPRy0q', 'scsi-0QEMU_QEMU_HARDDISK_f936cccd-0c4c-4cd7-b507-1bacbfb024c1', 'scsi-SQEMU_QEMU_HARDDISK_f936cccd-0c4c-4cd7-b507-1bacbfb024c1'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'f936cccd', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--658e9559--2696--538a--a0a4--811fe95d0be4-osd--block--658e9559--2696--538a--a0a4--811fe95d0be4']}})  2026-02-08 06:16:53.373216 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-08 06:16:53.717055 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-08 06:16:53.717184 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-08-02-32-43-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-02-08 06:16:53.717218 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 
'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-08 06:16:53.717239 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-cwGJpv-XAPY-9den-pvHR-scKi-cxVH-CDOUqc', 'dm-uuid-CRYPT-LUKS2-b54d8ff93c624789820c72a73c143a76-cwGJpv-XAPY-9den-pvHR-scKi-cxVH-CDOUqc'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-02-08 06:16:53.717259 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-08 06:16:53.717303 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--658e9559--2696--538a--a0a4--811fe95d0be4-osd--block--658e9559--2696--538a--a0a4--811fe95d0be4', 'dm-uuid-LVM-0Xks5ejJzKHIGgn8kHKR683njlf26z09cwGJpvXAPY9denpvHRscKicxVHCDOUqc'], 'uuids': ['b54d8ff9-3c62-4789-820c-72a73c143a76'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'f936cccd', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['cwGJpv-XAPY-9den-pvHR-scKi-cxVH-CDOUqc']}})  2026-02-08 06:16:53.717357 | orchestrator | skipping: 
[testbed-node-3] => (item={'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-loHc10-mddm-V6c9-PmX5-I7TN-7pE7-Bu9UYi', 'scsi-0QEMU_QEMU_HARDDISK_f64e84f9-05a0-4abf-b38a-86e604a2541e', 'scsi-SQEMU_QEMU_HARDDISK_f64e84f9-05a0-4abf-b38a-86e604a2541e'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'f64e84f9', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--edf9913e--48af--595a--836b--515c584cb757-osd--block--edf9913e--48af--595a--836b--515c584cb757']}})  2026-02-08 06:16:53.717404 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-08 06:16:53.717421 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8eb95c7e-79eb-481c-a9c3-b8351915337f', 'scsi-SQEMU_QEMU_HARDDISK_8eb95c7e-79eb-481c-a9c3-b8351915337f'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '8eb95c7e', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8eb95c7e-79eb-481c-a9c3-b8351915337f-part16', 'scsi-SQEMU_QEMU_HARDDISK_8eb95c7e-79eb-481c-a9c3-b8351915337f-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': 
'227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8eb95c7e-79eb-481c-a9c3-b8351915337f-part14', 'scsi-SQEMU_QEMU_HARDDISK_8eb95c7e-79eb-481c-a9c3-b8351915337f-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8eb95c7e-79eb-481c-a9c3-b8351915337f-part15', 'scsi-SQEMU_QEMU_HARDDISK_8eb95c7e-79eb-481c-a9c3-b8351915337f-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8eb95c7e-79eb-481c-a9c3-b8351915337f-part1', 'scsi-SQEMU_QEMU_HARDDISK_8eb95c7e-79eb-481c-a9c3-b8351915337f-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}})
2026-02-08 06:16:53.717441 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-08 06:16:53.717454 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-08 06:16:53.717475 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-Cd6gt7-yIXa-sPp3-uR45-a3LL-QBdD-hLIQVH', 'dm-uuid-CRYPT-LUKS2-f792a4bc313c4001af1836958a398c99-Cd6gt7-yIXa-sPp3-uR45-a3LL-QBdD-hLIQVH'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})
2026-02-08 06:16:53.717490 | orchestrator | skipping: [testbed-node-3]
2026-02-08 06:16:53.717505 | orchestrator |
2026-02-08 06:16:53.717519 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] ***
2026-02-08 06:16:53.717533 | orchestrator | Sunday 08 February 2026 06:16:53 +0000 (0:00:00.350) 0:25:51.548 *******
2026-02-08 06:16:53.717556 | orchestrator | skipping: [testbed-node-3] =>
(item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-08 06:16:53.916530 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--edf9913e--48af--595a--836b--515c584cb757-osd--block--edf9913e--48af--595a--836b--515c584cb757', 'dm-uuid-LVM-L5RzS25dNAwEfaxQtT2dCZejWp1FAHTXCd6gt7yIXasPp3uR45a3LLQBdDhLIQVH'], 'uuids': ['f792a4bc-313c-4001-af18-36958a398c99'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'f64e84f9', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['Cd6gt7-yIXa-sPp3-uR45-a3LL-QBdD-hLIQVH']}}, 'ansible_loop_var': 'item'})  2026-02-08 06:16:53.916625 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1b3b2ead-9b22-4b4d-a30d-f81b3b57c055', 'scsi-SQEMU_QEMU_HARDDISK_1b3b2ead-9b22-4b4d-a30d-f81b3b57c055'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 
'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '1b3b2ead', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-08 06:16:53.916659 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-a30P2b-igV6-wMzf-UnqL-xNhF-mfyy-hPRy0q', 'scsi-0QEMU_QEMU_HARDDISK_f936cccd-0c4c-4cd7-b507-1bacbfb024c1', 'scsi-SQEMU_QEMU_HARDDISK_f936cccd-0c4c-4cd7-b507-1bacbfb024c1'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'f936cccd', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--658e9559--2696--538a--a0a4--811fe95d0be4-osd--block--658e9559--2696--538a--a0a4--811fe95d0be4']}}, 'ansible_loop_var': 'item'})  2026-02-08 06:16:53.916695 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-08 06:16:53.916707 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-08 06:16:53.916735 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-08-02-32-43-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 
'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-08 06:16:53.916747 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-08 06:16:53.916757 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-cwGJpv-XAPY-9den-pvHR-scKi-cxVH-CDOUqc', 'dm-uuid-CRYPT-LUKS2-b54d8ff93c624789820c72a73c143a76-cwGJpv-XAPY-9den-pvHR-scKi-cxVH-CDOUqc'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-08 06:16:53.916780 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 
'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-08 06:16:53.916791 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--658e9559--2696--538a--a0a4--811fe95d0be4-osd--block--658e9559--2696--538a--a0a4--811fe95d0be4', 'dm-uuid-LVM-0Xks5ejJzKHIGgn8kHKR683njlf26z09cwGJpvXAPY9denpvHRscKicxVHCDOUqc'], 'uuids': ['b54d8ff9-3c62-4789-820c-72a73c143a76'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'f936cccd', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['cwGJpv-XAPY-9den-pvHR-scKi-cxVH-CDOUqc']}}, 'ansible_loop_var': 'item'})  2026-02-08 06:16:53.916809 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-loHc10-mddm-V6c9-PmX5-I7TN-7pE7-Bu9UYi', 'scsi-0QEMU_QEMU_HARDDISK_f64e84f9-05a0-4abf-b38a-86e604a2541e', 'scsi-SQEMU_QEMU_HARDDISK_f64e84f9-05a0-4abf-b38a-86e604a2541e'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'f64e84f9', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 
'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--edf9913e--48af--595a--836b--515c584cb757-osd--block--edf9913e--48af--595a--836b--515c584cb757']}}, 'ansible_loop_var': 'item'})  2026-02-08 06:16:58.573539 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-08 06:16:58.573703 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8eb95c7e-79eb-481c-a9c3-b8351915337f', 'scsi-SQEMU_QEMU_HARDDISK_8eb95c7e-79eb-481c-a9c3-b8351915337f'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '8eb95c7e', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8eb95c7e-79eb-481c-a9c3-b8351915337f-part16', 'scsi-SQEMU_QEMU_HARDDISK_8eb95c7e-79eb-481c-a9c3-b8351915337f-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_8eb95c7e-79eb-481c-a9c3-b8351915337f-part14', 'scsi-SQEMU_QEMU_HARDDISK_8eb95c7e-79eb-481c-a9c3-b8351915337f-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8eb95c7e-79eb-481c-a9c3-b8351915337f-part15', 'scsi-SQEMU_QEMU_HARDDISK_8eb95c7e-79eb-481c-a9c3-b8351915337f-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8eb95c7e-79eb-481c-a9c3-b8351915337f-part1', 'scsi-SQEMU_QEMU_HARDDISK_8eb95c7e-79eb-481c-a9c3-b8351915337f-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-08 06:16:58.573752 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-08 06:16:58.573785 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-08 06:16:58.573799 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-Cd6gt7-yIXa-sPp3-uR45-a3LL-QBdD-hLIQVH', 'dm-uuid-CRYPT-LUKS2-f792a4bc313c4001af1836958a398c99-Cd6gt7-yIXa-sPp3-uR45-a3LL-QBdD-hLIQVH'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 
'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-08 06:16:58.573813 | orchestrator | skipping: [testbed-node-3]
2026-02-08 06:16:58.573827 | orchestrator |
2026-02-08 06:16:58.573840 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ******************************
2026-02-08 06:16:58.573852 | orchestrator | Sunday 08 February 2026 06:16:53 +0000 (0:00:00.473) 0:25:51.960 *******
2026-02-08 06:16:58.573863 | orchestrator | ok: [testbed-node-3]
2026-02-08 06:16:58.573874 | orchestrator |
2026-02-08 06:16:58.573886 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] ***************
2026-02-08 06:16:58.573904 | orchestrator | Sunday 08 February 2026 06:16:54 +0000 (0:00:00.134) 0:25:52.433 *******
2026-02-08 06:16:58.573930 | orchestrator | ok: [testbed-node-3]
2026-02-08 06:16:58.573941 | orchestrator |
2026-02-08 06:16:58.573982 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-02-08 06:16:58.573995 | orchestrator | Sunday 08 February 2026 06:16:54 +0000 (0:00:00.502) 0:25:52.567 *******
2026-02-08 06:16:58.574005 | orchestrator | ok: [testbed-node-3]
2026-02-08 06:16:58.574071 | orchestrator |
2026-02-08 06:16:58.574087 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-02-08 06:16:58.574099 | orchestrator | Sunday 08 February 2026 06:16:55 +0000 (0:00:00.521) 0:25:53.070 *******
2026-02-08 06:16:58.574112 | orchestrator | skipping: [testbed-node-3]
2026-02-08 06:16:58.574124 | orchestrator |
2026-02-08 06:16:58.574136 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-02-08 06:16:58.574156 | orchestrator | Sunday 08 February 2026 06:16:55 +0000 (0:00:00.266) 0:25:53.592 *******
2026-02-08 06:16:58.574179 | orchestrator | skipping: [testbed-node-3]
2026-02-08 06:16:58.574269 | orchestrator |
2026-02-08 06:16:58.574285 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-02-08 06:16:58.574297 | orchestrator | Sunday 08 February 2026 06:16:55 +0000 (0:00:00.151) 0:25:53.858 *******
2026-02-08 06:16:58.574309 | orchestrator | skipping: [testbed-node-3]
2026-02-08 06:16:58.574320 | orchestrator |
2026-02-08 06:16:58.574331 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] *************************
2026-02-08 06:16:58.574342 | orchestrator | Sunday 08 February 2026 06:16:55 +0000 (0:00:00.726) 0:25:54.009 *******
2026-02-08 06:16:58.574352 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0)
2026-02-08 06:16:58.574364 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1)
2026-02-08 06:16:58.574375 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2)
2026-02-08 06:16:58.574385 | orchestrator |
2026-02-08 06:16:58.574396 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] *************************
2026-02-08 06:16:58.574407 | orchestrator | Sunday 08 February 2026 06:16:56 +0000 (0:00:00.168) 0:25:54.736 *******
2026-02-08 06:16:58.574418 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2026-02-08 06:16:58.574430 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2026-02-08 06:16:58.574440 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2026-02-08 06:16:58.574451 | orchestrator | skipping: [testbed-node-3]
2026-02-08 06:16:58.574462 | orchestrator |
2026-02-08 06:16:58.574473 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] ***********************
2026-02-08 06:16:58.574484 | orchestrator | Sunday 08 February 2026 06:16:56 +0000 (0:00:00.215) 0:25:54.904 *******
2026-02-08 06:16:58.574495 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3
2026-02-08 06:16:58.574506 | orchestrator |
2026-02-08 06:16:58.574518 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-02-08 06:16:58.574531 | orchestrator | Sunday 08 February 2026 06:16:57 +0000 (0:00:00.215) 0:25:55.120 *******
2026-02-08 06:16:58.574542 | orchestrator | skipping: [testbed-node-3]
2026-02-08 06:16:58.574553 | orchestrator |
2026-02-08 06:16:58.574563 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-02-08 06:16:58.574574 | orchestrator | Sunday 08 February 2026 06:16:57 +0000 (0:00:00.148) 0:25:55.268 *******
2026-02-08 06:16:58.574585 | orchestrator | skipping: [testbed-node-3]
2026-02-08 06:16:58.574596 | orchestrator |
2026-02-08 06:16:58.574606 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-02-08 06:16:58.574617 | orchestrator | Sunday 08 February 2026 06:16:57 +0000 (0:00:00.151) 0:25:55.420 *******
2026-02-08 06:16:58.574628 | orchestrator | skipping: [testbed-node-3]
2026-02-08 06:16:58.574639 | orchestrator |
2026-02-08 06:16:58.574650 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-02-08 06:16:58.574661 | orchestrator | Sunday 08 February 2026 06:16:57 +0000 (0:00:00.168) 0:25:55.588 *******
2026-02-08 06:16:58.574681 | orchestrator | ok: [testbed-node-3]
2026-02-08 06:16:58.574692 | orchestrator |
2026-02-08 06:16:58.574703 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-02-08 06:16:58.574713 | orchestrator | Sunday 08 February 2026 06:16:57 +0000 (0:00:00.245) 0:25:55.834 *******
2026-02-08 06:16:58.574735 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-02-08 06:17:14.083590 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-02-08 06:17:14.083701 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-02-08 06:17:14.083715 | orchestrator | skipping: [testbed-node-3]
2026-02-08 06:17:14.083726 | orchestrator |
2026-02-08 06:17:14.083738 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-02-08 06:17:14.083750 | orchestrator | Sunday 08 February 2026 06:16:58 +0000 (0:00:00.781) 0:25:56.616 *******
2026-02-08 06:17:14.083760 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-02-08 06:17:14.083771 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-02-08 06:17:14.083789 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-02-08 06:17:14.083806 | orchestrator | skipping: [testbed-node-3]
2026-02-08 06:17:14.083823 | orchestrator |
2026-02-08 06:17:14.083841 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-02-08 06:17:14.083853 | orchestrator | Sunday 08 February 2026 06:16:59 +0000 (0:00:00.792) 0:25:57.408 *******
2026-02-08 06:17:14.083862 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-02-08 06:17:14.083872 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-02-08 06:17:14.083882 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-02-08 06:17:14.083892 | orchestrator | skipping: [testbed-node-3]
2026-02-08 06:17:14.083901 | orchestrator |
2026-02-08 06:17:14.083911 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-02-08 06:17:14.083921 | orchestrator | Sunday 08 February 2026 06:17:00 +0000 (0:00:01.119) 0:25:58.528 *******
2026-02-08 06:17:14.083931 | orchestrator | ok: [testbed-node-3]
2026-02-08 06:17:14.083942 | orchestrator |
2026-02-08 06:17:14.083997 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-02-08 06:17:14.084007 | orchestrator | Sunday 08 February 2026 06:17:00 +0000 (0:00:00.167) 0:25:58.696 *******
2026-02-08 06:17:14.084017 | orchestrator | ok: [testbed-node-3] => (item=0)
2026-02-08 06:17:14.084027 | orchestrator |
2026-02-08 06:17:14.084037 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] **************************************
2026-02-08 06:17:14.084046 | orchestrator | Sunday 08 February 2026 06:17:00 +0000 (0:00:00.339) 0:25:59.035 *******
2026-02-08 06:17:14.084056 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-02-08 06:17:14.084067 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-08 06:17:14.084091 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-08 06:17:14.084101 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3)
2026-02-08 06:17:14.084111 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-02-08 06:17:14.084121 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-02-08 06:17:14.084132 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-02-08 06:17:14.084144 | orchestrator |
2026-02-08 06:17:14.084156 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ********************************
2026-02-08 06:17:14.084168 | orchestrator | Sunday 08 February 2026 06:17:01 +0000 (0:00:00.832) 0:25:59.868 *******
2026-02-08 06:17:14.084179 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-02-08 06:17:14.084191 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-08 06:17:14.084222 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-08 06:17:14.084234 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3)
2026-02-08 06:17:14.084246 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-02-08 06:17:14.084258 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-02-08 06:17:14.084269 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-02-08 06:17:14.084280 | orchestrator |
2026-02-08 06:17:14.084289 | orchestrator | TASK [Stop ceph rgw when upgrading from stable-3.2] ****************************
2026-02-08 06:17:14.084299 | orchestrator | Sunday 08 February 2026 06:17:03 +0000 (0:00:01.649) 0:26:01.517 *******
2026-02-08 06:17:14.084308 | orchestrator | changed: [testbed-node-3]
2026-02-08 06:17:14.084318 | orchestrator |
2026-02-08 06:17:14.084327 | orchestrator | TASK [Stop ceph rgw (pt. 1)] ***************************************************
2026-02-08 06:17:14.084337 | orchestrator | Sunday 08 February 2026 06:17:04 +0000 (0:00:01.261) 0:26:02.778 *******
2026-02-08 06:17:14.084347 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2026-02-08 06:17:14.084358 | orchestrator |
2026-02-08 06:17:14.084367 | orchestrator | TASK [Stop ceph rgw (pt. 2)] ***************************************************
2026-02-08 06:17:14.084377 | orchestrator | Sunday 08 February 2026 06:17:06 +0000 (0:00:01.981) 0:26:04.759 *******
2026-02-08 06:17:14.084386 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2026-02-08 06:17:14.084396 | orchestrator |
2026-02-08 06:17:14.084406 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-02-08 06:17:14.084415 | orchestrator | Sunday 08 February 2026 06:17:07 +0000 (0:00:01.241) 0:26:06.001 *******
2026-02-08 06:17:14.084425 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3
2026-02-08 06:17:14.084434 | orchestrator |
2026-02-08 06:17:14.084444 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-02-08 06:17:14.084471 | orchestrator | Sunday 08 February 2026 06:17:08 +0000 (0:00:00.205) 0:26:06.206 *******
2026-02-08 06:17:14.084481 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3
2026-02-08 06:17:14.084491 | orchestrator |
2026-02-08 06:17:14.084501 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-02-08 06:17:14.084511 | orchestrator | Sunday 08 February 2026 06:17:08 +0000 (0:00:00.215) 0:26:06.422 *******
2026-02-08 06:17:14.084520 | orchestrator | skipping: [testbed-node-3]
2026-02-08 06:17:14.084530 | orchestrator |
2026-02-08 06:17:14.084539 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-02-08 06:17:14.084549 | orchestrator | Sunday 08 February 2026 06:17:08 +0000 (0:00:00.459) 0:26:06.881 *******
2026-02-08 06:17:14.084559 | orchestrator | ok: [testbed-node-3]
2026-02-08 06:17:14.084568 | orchestrator |
2026-02-08 06:17:14.084578 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-02-08 06:17:14.084587 | orchestrator | Sunday 08 February 2026 06:17:09 +0000 (0:00:00.519) 0:26:07.401 *******
2026-02-08 06:17:14.084597 | orchestrator | ok: [testbed-node-3]
2026-02-08 06:17:14.084607 | orchestrator |
2026-02-08 06:17:14.084616 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-02-08 06:17:14.084625 | orchestrator | Sunday 08 February 2026 06:17:09 +0000 (0:00:00.537) 0:26:07.939 *******
2026-02-08 06:17:14.084635 | orchestrator | ok: [testbed-node-3]
2026-02-08 06:17:14.084644 | orchestrator |
2026-02-08 06:17:14.084654 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-02-08 06:17:14.084663 | orchestrator | Sunday 08 February 2026 06:17:10 +0000 (0:00:00.546) 0:26:08.485 *******
2026-02-08 06:17:14.084673 | orchestrator | skipping: [testbed-node-3]
2026-02-08 06:17:14.084683 | orchestrator |
2026-02-08 06:17:14.084708 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-02-08 06:17:14.084717 | orchestrator | Sunday 08 February 2026 06:17:10 +0000 (0:00:00.146) 0:26:08.632 *******
2026-02-08 06:17:14.084727 | orchestrator | skipping: [testbed-node-3]
2026-02-08 06:17:14.084736 | orchestrator |
2026-02-08 06:17:14.084746 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-02-08 06:17:14.084755 | orchestrator | Sunday 08 February 2026 06:17:10 +0000 (0:00:00.153) 0:26:08.802 *******
2026-02-08 06:17:14.084765 | orchestrator | skipping: [testbed-node-3]
2026-02-08 06:17:14.084775 | orchestrator |
2026-02-08 06:17:14.084784 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-02-08 06:17:14.084794 | orchestrator | Sunday 08 February 2026 06:17:10 +0000 (0:00:00.554) 0:26:08.955 *******
2026-02-08 06:17:14.084803 | orchestrator | ok: [testbed-node-3]
2026-02-08 06:17:14.084813 | orchestrator |
2026-02-08 06:17:14.084828 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-02-08 06:17:14.084838 | orchestrator | Sunday 08 February 2026 06:17:11 +0000 (0:00:00.555) 0:26:09.510 *******
2026-02-08 06:17:14.084848 | orchestrator | ok: [testbed-node-3]
2026-02-08 06:17:14.084857 | orchestrator |
2026-02-08 06:17:14.084867 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-02-08 06:17:14.084877 | orchestrator | Sunday 08 February 2026 06:17:12 +0000 (0:00:00.555) 0:26:10.066 *******
2026-02-08 06:17:14.084886 | orchestrator | skipping: [testbed-node-3]
2026-02-08 06:17:14.084896 | orchestrator |
2026-02-08 06:17:14.084905 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-02-08 06:17:14.084915 | orchestrator | Sunday 08 February 2026 06:17:12 +0000 (0:00:00.143) 0:26:10.209 *******
2026-02-08 06:17:14.084924 | orchestrator | skipping: [testbed-node-3]
2026-02-08 06:17:14.084934 | orchestrator |
2026-02-08 06:17:14.084944 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-02-08 06:17:14.084986 | orchestrator | Sunday 08 February 2026 06:17:12 +0000 (0:00:00.130) 0:26:10.340 *******
2026-02-08 06:17:14.084996 | orchestrator | ok: [testbed-node-3]
2026-02-08 06:17:14.085005 | orchestrator |
2026-02-08 06:17:14.085015 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-02-08 06:17:14.085025 | orchestrator | Sunday 08 February 2026 06:17:12 +0000 (0:00:00.167) 0:26:10.508 *******
2026-02-08 06:17:14.085034 | orchestrator | ok: [testbed-node-3]
2026-02-08 06:17:14.085044 | orchestrator |
2026-02-08 06:17:14.085053 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-02-08 06:17:14.085063
| orchestrator | Sunday 08 February 2026 06:17:12 +0000 (0:00:00.151) 0:26:10.659 ******* 2026-02-08 06:17:14.085072 | orchestrator | ok: [testbed-node-3] 2026-02-08 06:17:14.085082 | orchestrator | 2026-02-08 06:17:14.085091 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-02-08 06:17:14.085101 | orchestrator | Sunday 08 February 2026 06:17:13 +0000 (0:00:00.489) 0:26:11.148 ******* 2026-02-08 06:17:14.085110 | orchestrator | skipping: [testbed-node-3] 2026-02-08 06:17:14.085120 | orchestrator | 2026-02-08 06:17:14.085130 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-02-08 06:17:14.085140 | orchestrator | Sunday 08 February 2026 06:17:13 +0000 (0:00:00.139) 0:26:11.288 ******* 2026-02-08 06:17:14.085149 | orchestrator | skipping: [testbed-node-3] 2026-02-08 06:17:14.085159 | orchestrator | 2026-02-08 06:17:14.085168 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-02-08 06:17:14.085178 | orchestrator | Sunday 08 February 2026 06:17:13 +0000 (0:00:00.141) 0:26:11.430 ******* 2026-02-08 06:17:14.085187 | orchestrator | skipping: [testbed-node-3] 2026-02-08 06:17:14.085197 | orchestrator | 2026-02-08 06:17:14.085206 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-02-08 06:17:14.085216 | orchestrator | Sunday 08 February 2026 06:17:13 +0000 (0:00:00.143) 0:26:11.573 ******* 2026-02-08 06:17:14.085225 | orchestrator | ok: [testbed-node-3] 2026-02-08 06:17:14.085235 | orchestrator | 2026-02-08 06:17:14.085244 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-02-08 06:17:14.085261 | orchestrator | Sunday 08 February 2026 06:17:13 +0000 (0:00:00.172) 0:26:11.746 ******* 2026-02-08 06:17:14.085270 | orchestrator | ok: [testbed-node-3] 2026-02-08 06:17:14.085280 | orchestrator | 2026-02-08 06:17:14.085289 
| orchestrator | TASK [ceph-common : Include configure_repository.yml] ************************** 2026-02-08 06:17:14.085299 | orchestrator | Sunday 08 February 2026 06:17:13 +0000 (0:00:00.233) 0:26:11.979 ******* 2026-02-08 06:17:14.085308 | orchestrator | skipping: [testbed-node-3] 2026-02-08 06:17:14.085318 | orchestrator | 2026-02-08 06:17:14.085334 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] ************** 2026-02-08 06:17:25.986593 | orchestrator | Sunday 08 February 2026 06:17:14 +0000 (0:00:00.140) 0:26:12.119 ******* 2026-02-08 06:17:25.986735 | orchestrator | skipping: [testbed-node-3] 2026-02-08 06:17:25.986755 | orchestrator | 2026-02-08 06:17:25.986768 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] **************** 2026-02-08 06:17:25.986780 | orchestrator | Sunday 08 February 2026 06:17:14 +0000 (0:00:00.124) 0:26:12.244 ******* 2026-02-08 06:17:25.986792 | orchestrator | skipping: [testbed-node-3] 2026-02-08 06:17:25.986803 | orchestrator | 2026-02-08 06:17:25.986814 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ******************** 2026-02-08 06:17:25.986825 | orchestrator | Sunday 08 February 2026 06:17:14 +0000 (0:00:00.132) 0:26:12.377 ******* 2026-02-08 06:17:25.986836 | orchestrator | skipping: [testbed-node-3] 2026-02-08 06:17:25.986847 | orchestrator | 2026-02-08 06:17:25.986858 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] *************** 2026-02-08 06:17:25.986869 | orchestrator | Sunday 08 February 2026 06:17:14 +0000 (0:00:00.130) 0:26:12.507 ******* 2026-02-08 06:17:25.986880 | orchestrator | skipping: [testbed-node-3] 2026-02-08 06:17:25.986891 | orchestrator | 2026-02-08 06:17:25.986902 | orchestrator | TASK [ceph-common : Get ceph version] ****************************************** 2026-02-08 06:17:25.986913 | orchestrator | Sunday 08 February 2026 06:17:14 +0000 (0:00:00.140) 0:26:12.648 ******* 
2026-02-08 06:17:25.986924 | orchestrator | skipping: [testbed-node-3] 2026-02-08 06:17:25.986935 | orchestrator | 2026-02-08 06:17:25.986946 | orchestrator | TASK [ceph-common : Set_fact ceph_version] ************************************* 2026-02-08 06:17:25.987017 | orchestrator | Sunday 08 February 2026 06:17:14 +0000 (0:00:00.133) 0:26:12.781 ******* 2026-02-08 06:17:25.987030 | orchestrator | skipping: [testbed-node-3] 2026-02-08 06:17:25.987041 | orchestrator | 2026-02-08 06:17:25.987052 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] *** 2026-02-08 06:17:25.987064 | orchestrator | Sunday 08 February 2026 06:17:14 +0000 (0:00:00.131) 0:26:12.913 ******* 2026-02-08 06:17:25.987075 | orchestrator | skipping: [testbed-node-3] 2026-02-08 06:17:25.987086 | orchestrator | 2026-02-08 06:17:25.987097 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] ************************* 2026-02-08 06:17:25.987111 | orchestrator | Sunday 08 February 2026 06:17:15 +0000 (0:00:00.510) 0:26:13.423 ******* 2026-02-08 06:17:25.987125 | orchestrator | skipping: [testbed-node-3] 2026-02-08 06:17:25.987137 | orchestrator | 2026-02-08 06:17:25.987151 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************ 2026-02-08 06:17:25.987183 | orchestrator | Sunday 08 February 2026 06:17:15 +0000 (0:00:00.144) 0:26:13.568 ******* 2026-02-08 06:17:25.987194 | orchestrator | skipping: [testbed-node-3] 2026-02-08 06:17:25.987206 | orchestrator | 2026-02-08 06:17:25.987217 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ******************** 2026-02-08 06:17:25.987227 | orchestrator | Sunday 08 February 2026 06:17:15 +0000 (0:00:00.134) 0:26:13.703 ******* 2026-02-08 06:17:25.987238 | orchestrator | skipping: [testbed-node-3] 2026-02-08 06:17:25.987249 | orchestrator | 2026-02-08 06:17:25.987260 | orchestrator | TASK [ceph-common : Include selinux.yml] 
*************************************** 2026-02-08 06:17:25.987271 | orchestrator | Sunday 08 February 2026 06:17:15 +0000 (0:00:00.147) 0:26:13.851 ******* 2026-02-08 06:17:25.987282 | orchestrator | skipping: [testbed-node-3] 2026-02-08 06:17:25.987318 | orchestrator | 2026-02-08 06:17:25.987330 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] *************** 2026-02-08 06:17:25.987341 | orchestrator | Sunday 08 February 2026 06:17:16 +0000 (0:00:00.233) 0:26:14.084 ******* 2026-02-08 06:17:25.987352 | orchestrator | ok: [testbed-node-3] 2026-02-08 06:17:25.987364 | orchestrator | 2026-02-08 06:17:25.987375 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ****************************** 2026-02-08 06:17:25.987386 | orchestrator | Sunday 08 February 2026 06:17:16 +0000 (0:00:00.882) 0:26:14.966 ******* 2026-02-08 06:17:25.987397 | orchestrator | ok: [testbed-node-3] 2026-02-08 06:17:25.987408 | orchestrator | 2026-02-08 06:17:25.987419 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] *********************** 2026-02-08 06:17:25.987430 | orchestrator | Sunday 08 February 2026 06:17:18 +0000 (0:00:01.241) 0:26:16.208 ******* 2026-02-08 06:17:25.987441 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-3 2026-02-08 06:17:25.987454 | orchestrator | 2026-02-08 06:17:25.987464 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************ 2026-02-08 06:17:25.987475 | orchestrator | Sunday 08 February 2026 06:17:18 +0000 (0:00:00.237) 0:26:16.445 ******* 2026-02-08 06:17:25.987486 | orchestrator | skipping: [testbed-node-3] 2026-02-08 06:17:25.987497 | orchestrator | 2026-02-08 06:17:25.987508 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] **************** 2026-02-08 06:17:25.987519 | orchestrator | Sunday 08 February 2026 06:17:18 +0000 (0:00:00.183) 0:26:16.629 ******* 
2026-02-08 06:17:25.987529 | orchestrator | skipping: [testbed-node-3] 2026-02-08 06:17:25.987540 | orchestrator | 2026-02-08 06:17:25.987552 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] ************************** 2026-02-08 06:17:25.987563 | orchestrator | Sunday 08 February 2026 06:17:18 +0000 (0:00:00.140) 0:26:16.770 ******* 2026-02-08 06:17:25.987574 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-02-08 06:17:25.987585 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-02-08 06:17:25.987596 | orchestrator | 2026-02-08 06:17:25.987607 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ******************** 2026-02-08 06:17:25.987618 | orchestrator | Sunday 08 February 2026 06:17:19 +0000 (0:00:00.818) 0:26:17.589 ******* 2026-02-08 06:17:25.987629 | orchestrator | ok: [testbed-node-3] 2026-02-08 06:17:25.987639 | orchestrator | 2026-02-08 06:17:25.987650 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************ 2026-02-08 06:17:25.987661 | orchestrator | Sunday 08 February 2026 06:17:20 +0000 (0:00:00.793) 0:26:18.382 ******* 2026-02-08 06:17:25.987672 | orchestrator | skipping: [testbed-node-3] 2026-02-08 06:17:25.987683 | orchestrator | 2026-02-08 06:17:25.987694 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ******************** 2026-02-08 06:17:25.987725 | orchestrator | Sunday 08 February 2026 06:17:20 +0000 (0:00:00.162) 0:26:18.544 ******* 2026-02-08 06:17:25.987737 | orchestrator | skipping: [testbed-node-3] 2026-02-08 06:17:25.987749 | orchestrator | 2026-02-08 06:17:25.987759 | orchestrator | TASK [ceph-container-common : Include registry.yml] **************************** 2026-02-08 06:17:25.987770 | orchestrator | Sunday 08 February 2026 06:17:20 +0000 (0:00:00.172) 0:26:18.716 ******* 2026-02-08 06:17:25.987781 | orchestrator | 
skipping: [testbed-node-3] 2026-02-08 06:17:25.987792 | orchestrator | 2026-02-08 06:17:25.987803 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] ************************* 2026-02-08 06:17:25.987814 | orchestrator | Sunday 08 February 2026 06:17:20 +0000 (0:00:00.142) 0:26:18.859 ******* 2026-02-08 06:17:25.987825 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-3 2026-02-08 06:17:25.987835 | orchestrator | 2026-02-08 06:17:25.987846 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ******************** 2026-02-08 06:17:25.987857 | orchestrator | Sunday 08 February 2026 06:17:21 +0000 (0:00:00.258) 0:26:19.117 ******* 2026-02-08 06:17:25.987868 | orchestrator | ok: [testbed-node-3] 2026-02-08 06:17:25.987878 | orchestrator | 2026-02-08 06:17:25.987908 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] *** 2026-02-08 06:17:25.987919 | orchestrator | Sunday 08 February 2026 06:17:21 +0000 (0:00:00.727) 0:26:19.844 ******* 2026-02-08 06:17:25.987930 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-02-08 06:17:25.987941 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/prometheus:v2.7.2)  2026-02-08 06:17:25.987979 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/grafana/grafana:6.7.4)  2026-02-08 06:17:25.987992 | orchestrator | skipping: [testbed-node-3] 2026-02-08 06:17:25.988003 | orchestrator | 2026-02-08 06:17:25.988014 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] *********** 2026-02-08 06:17:25.988025 | orchestrator | Sunday 08 February 2026 06:17:21 +0000 (0:00:00.160) 0:26:20.005 ******* 2026-02-08 06:17:25.988036 | orchestrator | skipping: [testbed-node-3] 2026-02-08 06:17:25.988047 | orchestrator | 2026-02-08 06:17:25.988058 | orchestrator | TASK [ceph-container-common : Export local 
ceph dev image] ********************* 2026-02-08 06:17:25.988068 | orchestrator | Sunday 08 February 2026 06:17:22 +0000 (0:00:00.177) 0:26:20.183 ******* 2026-02-08 06:17:25.988079 | orchestrator | skipping: [testbed-node-3] 2026-02-08 06:17:25.988090 | orchestrator | 2026-02-08 06:17:25.988101 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************ 2026-02-08 06:17:25.988119 | orchestrator | Sunday 08 February 2026 06:17:22 +0000 (0:00:00.197) 0:26:20.380 ******* 2026-02-08 06:17:25.988130 | orchestrator | skipping: [testbed-node-3] 2026-02-08 06:17:25.988141 | orchestrator | 2026-02-08 06:17:25.988152 | orchestrator | TASK [ceph-container-common : Load ceph dev image] ***************************** 2026-02-08 06:17:25.988194 | orchestrator | Sunday 08 February 2026 06:17:22 +0000 (0:00:00.143) 0:26:20.524 ******* 2026-02-08 06:17:25.988206 | orchestrator | skipping: [testbed-node-3] 2026-02-08 06:17:25.988217 | orchestrator | 2026-02-08 06:17:25.988228 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ****************** 2026-02-08 06:17:25.988239 | orchestrator | Sunday 08 February 2026 06:17:22 +0000 (0:00:00.172) 0:26:20.696 ******* 2026-02-08 06:17:25.988250 | orchestrator | skipping: [testbed-node-3] 2026-02-08 06:17:25.988261 | orchestrator | 2026-02-08 06:17:25.988272 | orchestrator | TASK [ceph-container-common : Get ceph version] ******************************** 2026-02-08 06:17:25.988282 | orchestrator | Sunday 08 February 2026 06:17:22 +0000 (0:00:00.174) 0:26:20.871 ******* 2026-02-08 06:17:25.988293 | orchestrator | ok: [testbed-node-3] 2026-02-08 06:17:25.988304 | orchestrator | 2026-02-08 06:17:25.988315 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] *** 2026-02-08 06:17:25.988325 | orchestrator | Sunday 08 February 2026 06:17:24 +0000 (0:00:01.795) 0:26:22.666 ******* 2026-02-08 06:17:25.988336 | orchestrator | ok: 
[testbed-node-3] 2026-02-08 06:17:25.988347 | orchestrator | 2026-02-08 06:17:25.988358 | orchestrator | TASK [ceph-container-common : Include release.yml] ***************************** 2026-02-08 06:17:25.988380 | orchestrator | Sunday 08 February 2026 06:17:24 +0000 (0:00:00.137) 0:26:22.804 ******* 2026-02-08 06:17:25.988392 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-3 2026-02-08 06:17:25.988403 | orchestrator | 2026-02-08 06:17:25.988414 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] ********************* 2026-02-08 06:17:25.988424 | orchestrator | Sunday 08 February 2026 06:17:24 +0000 (0:00:00.228) 0:26:23.033 ******* 2026-02-08 06:17:25.988435 | orchestrator | skipping: [testbed-node-3] 2026-02-08 06:17:25.988446 | orchestrator | 2026-02-08 06:17:25.988457 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ******************** 2026-02-08 06:17:25.988468 | orchestrator | Sunday 08 February 2026 06:17:25 +0000 (0:00:00.160) 0:26:23.193 ******* 2026-02-08 06:17:25.988479 | orchestrator | skipping: [testbed-node-3] 2026-02-08 06:17:25.988490 | orchestrator | 2026-02-08 06:17:25.988501 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ****************** 2026-02-08 06:17:25.988511 | orchestrator | Sunday 08 February 2026 06:17:25 +0000 (0:00:00.152) 0:26:23.346 ******* 2026-02-08 06:17:25.988530 | orchestrator | skipping: [testbed-node-3] 2026-02-08 06:17:25.988541 | orchestrator | 2026-02-08 06:17:25.988552 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] ********************* 2026-02-08 06:17:25.988563 | orchestrator | Sunday 08 February 2026 06:17:25 +0000 (0:00:00.155) 0:26:23.501 ******* 2026-02-08 06:17:25.988573 | orchestrator | skipping: [testbed-node-3] 2026-02-08 06:17:25.988584 | orchestrator | 2026-02-08 06:17:25.988595 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release 
nautilus] ****************** 2026-02-08 06:17:25.988606 | orchestrator | Sunday 08 February 2026 06:17:25 +0000 (0:00:00.186) 0:26:23.687 ******* 2026-02-08 06:17:25.988617 | orchestrator | skipping: [testbed-node-3] 2026-02-08 06:17:25.988628 | orchestrator | 2026-02-08 06:17:25.988639 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] ******************* 2026-02-08 06:17:25.988650 | orchestrator | Sunday 08 February 2026 06:17:25 +0000 (0:00:00.172) 0:26:23.860 ******* 2026-02-08 06:17:25.988661 | orchestrator | skipping: [testbed-node-3] 2026-02-08 06:17:25.988672 | orchestrator | 2026-02-08 06:17:25.988689 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] ******************* 2026-02-08 06:17:45.195257 | orchestrator | Sunday 08 February 2026 06:17:25 +0000 (0:00:00.162) 0:26:24.023 ******* 2026-02-08 06:17:45.195367 | orchestrator | skipping: [testbed-node-3] 2026-02-08 06:17:45.195384 | orchestrator | 2026-02-08 06:17:45.195397 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ******************** 2026-02-08 06:17:45.195409 | orchestrator | Sunday 08 February 2026 06:17:26 +0000 (0:00:00.146) 0:26:24.169 ******* 2026-02-08 06:17:45.195420 | orchestrator | skipping: [testbed-node-3] 2026-02-08 06:17:45.195431 | orchestrator | 2026-02-08 06:17:45.195442 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] ********************** 2026-02-08 06:17:45.195453 | orchestrator | Sunday 08 February 2026 06:17:26 +0000 (0:00:00.161) 0:26:24.330 ******* 2026-02-08 06:17:45.195464 | orchestrator | ok: [testbed-node-3] 2026-02-08 06:17:45.195477 | orchestrator | 2026-02-08 06:17:45.195488 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] ********************** 2026-02-08 06:17:45.195499 | orchestrator | Sunday 08 February 2026 06:17:26 +0000 (0:00:00.568) 0:26:24.899 ******* 2026-02-08 06:17:45.195510 | orchestrator | included: 
/ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-3 2026-02-08 06:17:45.195522 | orchestrator | 2026-02-08 06:17:45.195532 | orchestrator | TASK [ceph-config : Create ceph initial directories] *************************** 2026-02-08 06:17:45.195543 | orchestrator | Sunday 08 February 2026 06:17:27 +0000 (0:00:00.218) 0:26:25.117 ******* 2026-02-08 06:17:45.195555 | orchestrator | ok: [testbed-node-3] => (item=/etc/ceph) 2026-02-08 06:17:45.195566 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/) 2026-02-08 06:17:45.195577 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/mon) 2026-02-08 06:17:45.195588 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/osd) 2026-02-08 06:17:45.195599 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/mds) 2026-02-08 06:17:45.195610 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/tmp) 2026-02-08 06:17:45.195621 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/crash) 2026-02-08 06:17:45.195632 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/radosgw) 2026-02-08 06:17:45.195643 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rgw) 2026-02-08 06:17:45.195654 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mgr) 2026-02-08 06:17:45.195665 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds) 2026-02-08 06:17:45.195693 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd) 2026-02-08 06:17:45.195704 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd) 2026-02-08 06:17:45.195715 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-02-08 06:17:45.195726 | orchestrator | ok: [testbed-node-3] => (item=/var/run/ceph) 2026-02-08 06:17:45.195737 | orchestrator | ok: [testbed-node-3] => (item=/var/log/ceph) 2026-02-08 06:17:45.195748 | orchestrator | 2026-02-08 06:17:45.195781 | orchestrator | 
TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************ 2026-02-08 06:17:45.195793 | orchestrator | Sunday 08 February 2026 06:17:32 +0000 (0:00:05.380) 0:26:30.498 ******* 2026-02-08 06:17:45.195806 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-3 2026-02-08 06:17:45.195819 | orchestrator | 2026-02-08 06:17:45.195833 | orchestrator | TASK [ceph-config : Create rados gateway instance directories] ***************** 2026-02-08 06:17:45.195846 | orchestrator | Sunday 08 February 2026 06:17:32 +0000 (0:00:00.208) 0:26:30.707 ******* 2026-02-08 06:17:45.195858 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-02-08 06:17:45.195872 | orchestrator | 2026-02-08 06:17:45.195885 | orchestrator | TASK [ceph-config : Generate environment file] ********************************* 2026-02-08 06:17:45.195898 | orchestrator | Sunday 08 February 2026 06:17:33 +0000 (0:00:00.494) 0:26:31.202 ******* 2026-02-08 06:17:45.195911 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-02-08 06:17:45.195925 | orchestrator | 2026-02-08 06:17:45.195937 | orchestrator | TASK [ceph-config : Reset num_osds] ******************************************** 2026-02-08 06:17:45.195950 | orchestrator | Sunday 08 February 2026 06:17:34 +0000 (0:00:00.964) 0:26:32.166 ******* 2026-02-08 06:17:45.195986 | orchestrator | skipping: [testbed-node-3] 2026-02-08 06:17:45.195998 | orchestrator | 2026-02-08 06:17:45.196008 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] ********************* 2026-02-08 06:17:45.196019 | orchestrator | Sunday 08 February 2026 06:17:34 +0000 (0:00:00.146) 0:26:32.313 ******* 2026-02-08 06:17:45.196030 | orchestrator | skipping: [testbed-node-3] 2026-02-08 06:17:45.196040 | 
orchestrator | 2026-02-08 06:17:45.196051 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ****************** 2026-02-08 06:17:45.196062 | orchestrator | Sunday 08 February 2026 06:17:34 +0000 (0:00:00.140) 0:26:32.453 ******* 2026-02-08 06:17:45.196073 | orchestrator | skipping: [testbed-node-3] 2026-02-08 06:17:45.196083 | orchestrator | 2026-02-08 06:17:45.196094 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] ********************************* 2026-02-08 06:17:45.196105 | orchestrator | Sunday 08 February 2026 06:17:34 +0000 (0:00:00.152) 0:26:32.606 ******* 2026-02-08 06:17:45.196116 | orchestrator | skipping: [testbed-node-3] 2026-02-08 06:17:45.196127 | orchestrator | 2026-02-08 06:17:45.196137 | orchestrator | TASK [ceph-config : Set_fact _devices] ***************************************** 2026-02-08 06:17:45.196148 | orchestrator | Sunday 08 February 2026 06:17:34 +0000 (0:00:00.135) 0:26:32.742 ******* 2026-02-08 06:17:45.196159 | orchestrator | skipping: [testbed-node-3] 2026-02-08 06:17:45.196170 | orchestrator | 2026-02-08 06:17:45.196181 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2026-02-08 06:17:45.196192 | orchestrator | Sunday 08 February 2026 06:17:34 +0000 (0:00:00.150) 0:26:32.892 ******* 2026-02-08 06:17:45.196203 | orchestrator | skipping: [testbed-node-3] 2026-02-08 06:17:45.196214 | orchestrator | 2026-02-08 06:17:45.196243 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2026-02-08 06:17:45.196254 | orchestrator | Sunday 08 February 2026 06:17:35 +0000 (0:00:00.471) 0:26:33.364 ******* 2026-02-08 06:17:45.196265 | orchestrator | skipping: [testbed-node-3] 2026-02-08 06:17:45.196276 | orchestrator | 2026-02-08 06:17:45.196288 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] 
*** 2026-02-08 06:17:45.196299 | orchestrator | Sunday 08 February 2026 06:17:35 +0000 (0:00:00.141) 0:26:33.505 ******* 2026-02-08 06:17:45.196310 | orchestrator | skipping: [testbed-node-3] 2026-02-08 06:17:45.196321 | orchestrator | 2026-02-08 06:17:45.196332 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] *** 2026-02-08 06:17:45.196343 | orchestrator | Sunday 08 February 2026 06:17:35 +0000 (0:00:00.191) 0:26:33.696 ******* 2026-02-08 06:17:45.196354 | orchestrator | skipping: [testbed-node-3] 2026-02-08 06:17:45.196373 | orchestrator | 2026-02-08 06:17:45.196384 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] ********************* 2026-02-08 06:17:45.196395 | orchestrator | Sunday 08 February 2026 06:17:35 +0000 (0:00:00.159) 0:26:33.856 ******* 2026-02-08 06:17:45.196406 | orchestrator | skipping: [testbed-node-3] 2026-02-08 06:17:45.196417 | orchestrator | 2026-02-08 06:17:45.196428 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] ******************************* 2026-02-08 06:17:45.196439 | orchestrator | Sunday 08 February 2026 06:17:35 +0000 (0:00:00.136) 0:26:33.992 ******* 2026-02-08 06:17:45.196450 | orchestrator | skipping: [testbed-node-3] 2026-02-08 06:17:45.196460 | orchestrator | 2026-02-08 06:17:45.196471 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] ************** 2026-02-08 06:17:45.196482 | orchestrator | Sunday 08 February 2026 06:17:36 +0000 (0:00:00.165) 0:26:34.157 ******* 2026-02-08 06:17:45.196493 | orchestrator | changed: [testbed-node-3 -> testbed-node-2(192.168.16.12)] 2026-02-08 06:17:45.196504 | orchestrator | 2026-02-08 06:17:45.196515 | orchestrator | TASK [ceph-config : Render rgw configs] **************************************** 2026-02-08 06:17:45.196525 | orchestrator | Sunday 08 February 2026 06:17:39 +0000 (0:00:03.351) 0:26:37.509 ******* 2026-02-08 06:17:45.196536 | orchestrator | 
ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-02-08 06:17:45.196547 | orchestrator | 2026-02-08 06:17:45.196558 | orchestrator | TASK [ceph-config : Set config to cluster] ************************************* 2026-02-08 06:17:45.196575 | orchestrator | Sunday 08 February 2026 06:17:39 +0000 (0:00:00.186) 0:26:37.695 ******* 2026-02-08 06:17:45.196588 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log'}]) 2026-02-08 06:17:45.196603 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.13:8081'}]) 2026-02-08 06:17:45.196615 | orchestrator | 2026-02-08 06:17:45.196627 | orchestrator | TASK [ceph-config : Set rgw configs to file] *********************************** 2026-02-08 06:17:45.196638 | orchestrator | Sunday 08 February 2026 06:17:43 +0000 (0:00:03.679) 0:26:41.375 ******* 2026-02-08 06:17:45.196649 | orchestrator | skipping: [testbed-node-3] 2026-02-08 06:17:45.196660 | orchestrator | 2026-02-08 06:17:45.196670 | orchestrator | TASK [ceph-config : Create ceph conf directory] ******************************** 2026-02-08 06:17:45.196681 | orchestrator | Sunday 08 February 2026 06:17:43 +0000 (0:00:00.151) 0:26:41.526 ******* 2026-02-08 06:17:45.196692 | orchestrator | skipping: [testbed-node-3] 2026-02-08 06:17:45.196703 | orchestrator | 2026-02-08 06:17:45.196714 | orchestrator | TASK [ceph-facts : Set current 
radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-02-08 06:17:45.196725 | orchestrator | Sunday 08 February 2026 06:17:43 +0000 (0:00:00.146) 0:26:41.673 ******* 2026-02-08 06:17:45.196736 | orchestrator | skipping: [testbed-node-3] 2026-02-08 06:17:45.196746 | orchestrator | 2026-02-08 06:17:45.196757 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-02-08 06:17:45.196768 | orchestrator | Sunday 08 February 2026 06:17:43 +0000 (0:00:00.143) 0:26:41.817 ******* 2026-02-08 06:17:45.196779 | orchestrator | skipping: [testbed-node-3] 2026-02-08 06:17:45.196790 | orchestrator | 2026-02-08 06:17:45.196801 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-02-08 06:17:45.196811 | orchestrator | Sunday 08 February 2026 06:17:43 +0000 (0:00:00.174) 0:26:41.991 ******* 2026-02-08 06:17:45.196822 | orchestrator | skipping: [testbed-node-3] 2026-02-08 06:17:45.196833 | orchestrator | 2026-02-08 06:17:45.196850 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-02-08 06:17:45.196861 | orchestrator | Sunday 08 February 2026 06:17:44 +0000 (0:00:00.180) 0:26:42.172 ******* 2026-02-08 06:17:45.196872 | orchestrator | ok: [testbed-node-3] 2026-02-08 06:17:45.196883 | orchestrator | 2026-02-08 06:17:45.196894 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-02-08 06:17:45.196905 | orchestrator | Sunday 08 February 2026 06:17:44 +0000 (0:00:00.613) 0:26:42.786 ******* 2026-02-08 06:17:45.196916 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-02-08 06:17:45.196927 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-02-08 06:17:45.196939 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-02-08 06:17:45.196950 | orchestrator | skipping: 
[testbed-node-3] 2026-02-08 06:17:45.197008 | orchestrator | 2026-02-08 06:17:45.197026 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-02-08 06:18:37.467021 | orchestrator | Sunday 08 February 2026 06:17:45 +0000 (0:00:00.442) 0:26:43.228 ******* 2026-02-08 06:18:37.467142 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-02-08 06:18:37.467160 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-02-08 06:18:37.467172 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-02-08 06:18:37.467184 | orchestrator | skipping: [testbed-node-3] 2026-02-08 06:18:37.467196 | orchestrator | 2026-02-08 06:18:37.467208 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-02-08 06:18:37.467220 | orchestrator | Sunday 08 February 2026 06:17:45 +0000 (0:00:00.451) 0:26:43.679 ******* 2026-02-08 06:18:37.467235 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-02-08 06:18:37.467254 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-02-08 06:18:37.467273 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-02-08 06:18:37.467293 | orchestrator | skipping: [testbed-node-3] 2026-02-08 06:18:37.467312 | orchestrator | 2026-02-08 06:18:37.467330 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-02-08 06:18:37.467342 | orchestrator | Sunday 08 February 2026 06:17:46 +0000 (0:00:00.443) 0:26:44.122 ******* 2026-02-08 06:18:37.467353 | orchestrator | ok: [testbed-node-3] 2026-02-08 06:18:37.467365 | orchestrator | 2026-02-08 06:18:37.467376 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-02-08 06:18:37.467387 | orchestrator | Sunday 08 February 2026 06:17:46 +0000 (0:00:00.190) 0:26:44.312 ******* 2026-02-08 06:18:37.467399 | orchestrator | ok: 
[testbed-node-3] => (item=0) 2026-02-08 06:18:37.467410 | orchestrator | 2026-02-08 06:18:37.467420 | orchestrator | TASK [ceph-config : Generate Ceph file] **************************************** 2026-02-08 06:18:37.467431 | orchestrator | Sunday 08 February 2026 06:17:46 +0000 (0:00:00.416) 0:26:44.729 ******* 2026-02-08 06:18:37.467450 | orchestrator | ok: [testbed-node-3] 2026-02-08 06:18:37.467466 | orchestrator | 2026-02-08 06:18:37.467495 | orchestrator | TASK [ceph-rgw : Include common.yml] ******************************************* 2026-02-08 06:18:37.467514 | orchestrator | Sunday 08 February 2026 06:17:47 +0000 (0:00:00.830) 0:26:45.560 ******* 2026-02-08 06:18:37.467532 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/common.yml for testbed-node-3 2026-02-08 06:18:37.467550 | orchestrator | 2026-02-08 06:18:37.467569 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2026-02-08 06:18:37.467609 | orchestrator | Sunday 08 February 2026 06:17:48 +0000 (0:00:00.562) 0:26:46.123 ******* 2026-02-08 06:18:37.467631 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-08 06:18:37.467649 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-02-08 06:18:37.467667 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2026-02-08 06:18:37.467684 | orchestrator | 2026-02-08 06:18:37.467702 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2026-02-08 06:18:37.467721 | orchestrator | Sunday 08 February 2026 06:17:50 +0000 (0:00:02.333) 0:26:48.457 ******* 2026-02-08 06:18:37.467770 | orchestrator | ok: [testbed-node-3] => (item=None) 2026-02-08 06:18:37.467792 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-02-08 06:18:37.467809 | orchestrator | ok: [testbed-node-3] 2026-02-08 06:18:37.467826 | orchestrator | 2026-02-08 06:18:37.467845 | orchestrator | TASK [ceph-rgw : Copy 
SSL certificate & key data to certificate path] ********** 2026-02-08 06:18:37.467864 | orchestrator | Sunday 08 February 2026 06:17:51 +0000 (0:00:00.984) 0:26:49.441 ******* 2026-02-08 06:18:37.467884 | orchestrator | skipping: [testbed-node-3] 2026-02-08 06:18:37.467904 | orchestrator | 2026-02-08 06:18:37.467921 | orchestrator | TASK [ceph-rgw : Include_tasks pre_requisite.yml] ****************************** 2026-02-08 06:18:37.467941 | orchestrator | Sunday 08 February 2026 06:17:51 +0000 (0:00:00.469) 0:26:49.911 ******* 2026-02-08 06:18:37.467959 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/pre_requisite.yml for testbed-node-3 2026-02-08 06:18:37.468032 | orchestrator | 2026-02-08 06:18:37.468050 | orchestrator | TASK [ceph-rgw : Create rados gateway directories] ***************************** 2026-02-08 06:18:37.468067 | orchestrator | Sunday 08 February 2026 06:17:52 +0000 (0:00:00.589) 0:26:50.500 ******* 2026-02-08 06:18:37.468080 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-02-08 06:18:37.468093 | orchestrator | 2026-02-08 06:18:37.468104 | orchestrator | TASK [ceph-rgw : Create rgw keyrings] ****************************************** 2026-02-08 06:18:37.468115 | orchestrator | Sunday 08 February 2026 06:17:53 +0000 (0:00:00.639) 0:26:51.140 ******* 2026-02-08 06:18:37.468125 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-08 06:18:37.468138 | orchestrator | changed: [testbed-node-3 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2026-02-08 06:18:37.468149 | orchestrator | 2026-02-08 06:18:37.468160 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2026-02-08 06:18:37.468171 | orchestrator | Sunday 08 February 2026 06:17:57 +0000 (0:00:04.163) 0:26:55.303 ******* 
2026-02-08 06:18:37.468181 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-08 06:18:37.468192 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2026-02-08 06:18:37.468203 | orchestrator | 2026-02-08 06:18:37.468214 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2026-02-08 06:18:37.468225 | orchestrator | Sunday 08 February 2026 06:17:59 +0000 (0:00:02.205) 0:26:57.509 ******* 2026-02-08 06:18:37.468235 | orchestrator | ok: [testbed-node-3] => (item=None) 2026-02-08 06:18:37.468246 | orchestrator | ok: [testbed-node-3] 2026-02-08 06:18:37.468257 | orchestrator | 2026-02-08 06:18:37.468268 | orchestrator | TASK [ceph-rgw : Rgw pool creation tasks] ************************************** 2026-02-08 06:18:37.468279 | orchestrator | Sunday 08 February 2026 06:18:00 +0000 (0:00:00.968) 0:26:58.478 ******* 2026-02-08 06:18:37.468312 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/rgw_create_pools.yml for testbed-node-3 2026-02-08 06:18:37.468324 | orchestrator | 2026-02-08 06:18:37.468335 | orchestrator | TASK [ceph-rgw : Create ec profile] ******************************************** 2026-02-08 06:18:37.468346 | orchestrator | Sunday 08 February 2026 06:18:01 +0000 (0:00:00.630) 0:26:59.108 ******* 2026-02-08 06:18:37.468356 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-08 06:18:37.468368 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-08 06:18:37.468379 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-08 06:18:37.468390 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 
'size': 3, 'type': 'replicated'}})  2026-02-08 06:18:37.468414 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-08 06:18:37.468425 | orchestrator | skipping: [testbed-node-3] 2026-02-08 06:18:37.468448 | orchestrator | 2026-02-08 06:18:37.468459 | orchestrator | TASK [ceph-rgw : Set crush rule] *********************************************** 2026-02-08 06:18:37.468470 | orchestrator | Sunday 08 February 2026 06:18:02 +0000 (0:00:00.956) 0:27:00.065 ******* 2026-02-08 06:18:37.468481 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-08 06:18:37.468492 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-08 06:18:37.468503 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-08 06:18:37.468522 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-08 06:18:37.468533 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-08 06:18:37.468544 | orchestrator | skipping: [testbed-node-3] 2026-02-08 06:18:37.468555 | orchestrator | 2026-02-08 06:18:37.468566 | orchestrator | TASK [ceph-rgw : Create rgw pools] ********************************************* 2026-02-08 06:18:37.468577 | orchestrator | Sunday 08 February 2026 06:18:02 +0000 (0:00:00.968) 0:27:01.034 ******* 2026-02-08 06:18:37.468587 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-02-08 06:18:37.468599 
| orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-02-08 06:18:37.468615 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-02-08 06:18:37.468633 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-02-08 06:18:37.468652 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-02-08 06:18:37.468670 | orchestrator | 2026-02-08 06:18:37.468690 | orchestrator | TASK [ceph-rgw : Include_tasks openstack-keystone.yml] ************************* 2026-02-08 06:18:37.468710 | orchestrator | Sunday 08 February 2026 06:18:34 +0000 (0:00:31.638) 0:27:32.672 ******* 2026-02-08 06:18:37.468728 | orchestrator | skipping: [testbed-node-3] 2026-02-08 06:18:37.468739 | orchestrator | 2026-02-08 06:18:37.468750 | orchestrator | TASK [ceph-rgw : Include_tasks start_radosgw.yml] ****************************** 2026-02-08 06:18:37.468761 | orchestrator | Sunday 08 February 2026 06:18:34 +0000 (0:00:00.150) 0:27:32.822 ******* 2026-02-08 06:18:37.468772 | orchestrator | skipping: [testbed-node-3] 2026-02-08 06:18:37.468783 | orchestrator | 2026-02-08 06:18:37.468793 | orchestrator | TASK [ceph-rgw : Include start_docker_rgw.yml] ********************************* 2026-02-08 06:18:37.468804 | orchestrator | Sunday 08 February 2026 06:18:35 +0000 (0:00:00.438) 0:27:33.261 ******* 2026-02-08 06:18:37.468815 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/start_docker_rgw.yml for testbed-node-3 2026-02-08 06:18:37.468826 | orchestrator | 2026-02-08 06:18:37.468837 | orchestrator | TASK [ceph-rgw : Include_task 
systemd.yml] ************************************* 2026-02-08 06:18:37.468848 | orchestrator | Sunday 08 February 2026 06:18:35 +0000 (0:00:00.588) 0:27:33.849 ******* 2026-02-08 06:18:37.468859 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/systemd.yml for testbed-node-3 2026-02-08 06:18:37.468869 | orchestrator | 2026-02-08 06:18:37.468880 | orchestrator | TASK [ceph-rgw : Generate systemd unit file] *********************************** 2026-02-08 06:18:37.468899 | orchestrator | Sunday 08 February 2026 06:18:36 +0000 (0:00:00.582) 0:27:34.432 ******* 2026-02-08 06:18:37.468910 | orchestrator | ok: [testbed-node-3] 2026-02-08 06:18:37.468921 | orchestrator | 2026-02-08 06:18:37.468932 | orchestrator | TASK [ceph-rgw : Generate systemd ceph-radosgw target file] ******************** 2026-02-08 06:18:37.468952 | orchestrator | Sunday 08 February 2026 06:18:37 +0000 (0:00:01.068) 0:27:35.500 ******* 2026-02-08 06:18:50.728773 | orchestrator | ok: [testbed-node-3] 2026-02-08 06:18:50.728889 | orchestrator | 2026-02-08 06:18:50.728907 | orchestrator | TASK [ceph-rgw : Enable ceph-radosgw.target] *********************************** 2026-02-08 06:18:50.728920 | orchestrator | Sunday 08 February 2026 06:18:38 +0000 (0:00:00.926) 0:27:36.426 ******* 2026-02-08 06:18:50.728930 | orchestrator | ok: [testbed-node-3] 2026-02-08 06:18:50.728940 | orchestrator | 2026-02-08 06:18:50.728951 | orchestrator | TASK [ceph-rgw : Systemd start rgw container] ********************************** 2026-02-08 06:18:50.728960 | orchestrator | Sunday 08 February 2026 06:18:39 +0000 (0:00:01.230) 0:27:37.657 ******* 2026-02-08 06:18:50.729032 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-02-08 06:18:50.729046 | orchestrator | 2026-02-08 06:18:50.729056 | orchestrator | PLAY [Upgrade ceph rgws cluster] *********************************************** 2026-02-08 06:18:50.729066 | 
orchestrator | 2026-02-08 06:18:50.729076 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-02-08 06:18:50.729085 | orchestrator | Sunday 08 February 2026 06:18:42 +0000 (0:00:02.476) 0:27:40.133 ******* 2026-02-08 06:18:50.729095 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-4 2026-02-08 06:18:50.729105 | orchestrator | 2026-02-08 06:18:50.729116 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-02-08 06:18:50.729126 | orchestrator | Sunday 08 February 2026 06:18:42 +0000 (0:00:00.260) 0:27:40.394 ******* 2026-02-08 06:18:50.729137 | orchestrator | ok: [testbed-node-4] 2026-02-08 06:18:50.729148 | orchestrator | 2026-02-08 06:18:50.729160 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-02-08 06:18:50.729169 | orchestrator | Sunday 08 February 2026 06:18:43 +0000 (0:00:00.755) 0:27:41.150 ******* 2026-02-08 06:18:50.729175 | orchestrator | ok: [testbed-node-4] 2026-02-08 06:18:50.729181 | orchestrator | 2026-02-08 06:18:50.729188 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-02-08 06:18:50.729195 | orchestrator | Sunday 08 February 2026 06:18:43 +0000 (0:00:00.151) 0:27:41.301 ******* 2026-02-08 06:18:50.729201 | orchestrator | ok: [testbed-node-4] 2026-02-08 06:18:50.729207 | orchestrator | 2026-02-08 06:18:50.729213 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-02-08 06:18:50.729220 | orchestrator | Sunday 08 February 2026 06:18:43 +0000 (0:00:00.470) 0:27:41.771 ******* 2026-02-08 06:18:50.729241 | orchestrator | ok: [testbed-node-4] 2026-02-08 06:18:50.729247 | orchestrator | 2026-02-08 06:18:50.729254 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-02-08 06:18:50.729260 | orchestrator | Sunday 08 
February 2026 06:18:43 +0000 (0:00:00.154) 0:27:41.926 ******* 2026-02-08 06:18:50.729266 | orchestrator | ok: [testbed-node-4] 2026-02-08 06:18:50.729272 | orchestrator | 2026-02-08 06:18:50.729278 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-02-08 06:18:50.729284 | orchestrator | Sunday 08 February 2026 06:18:44 +0000 (0:00:00.144) 0:27:42.071 ******* 2026-02-08 06:18:50.729291 | orchestrator | ok: [testbed-node-4] 2026-02-08 06:18:50.729297 | orchestrator | 2026-02-08 06:18:50.729303 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-02-08 06:18:50.729310 | orchestrator | Sunday 08 February 2026 06:18:44 +0000 (0:00:00.159) 0:27:42.230 ******* 2026-02-08 06:18:50.729316 | orchestrator | skipping: [testbed-node-4] 2026-02-08 06:18:50.729323 | orchestrator | 2026-02-08 06:18:50.729331 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2026-02-08 06:18:50.729338 | orchestrator | Sunday 08 February 2026 06:18:44 +0000 (0:00:00.174) 0:27:42.405 ******* 2026-02-08 06:18:50.729362 | orchestrator | ok: [testbed-node-4] 2026-02-08 06:18:50.729369 | orchestrator | 2026-02-08 06:18:50.729377 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2026-02-08 06:18:50.729384 | orchestrator | Sunday 08 February 2026 06:18:44 +0000 (0:00:00.145) 0:27:42.550 ******* 2026-02-08 06:18:50.729392 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-08 06:18:50.729399 | orchestrator | ok: [testbed-node-4 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-08 06:18:50.729406 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-08 06:18:50.729414 | orchestrator | 2026-02-08 06:18:50.729421 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] 
******************************** 2026-02-08 06:18:50.729428 | orchestrator | Sunday 08 February 2026 06:18:45 +0000 (0:00:01.038) 0:27:43.589 ******* 2026-02-08 06:18:50.729435 | orchestrator | ok: [testbed-node-4] 2026-02-08 06:18:50.729442 | orchestrator | 2026-02-08 06:18:50.729450 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-02-08 06:18:50.729457 | orchestrator | Sunday 08 February 2026 06:18:45 +0000 (0:00:00.260) 0:27:43.850 ******* 2026-02-08 06:18:50.729464 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-08 06:18:50.729472 | orchestrator | ok: [testbed-node-4 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-08 06:18:50.729479 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-08 06:18:50.729486 | orchestrator | 2026-02-08 06:18:50.729493 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-02-08 06:18:50.729500 | orchestrator | Sunday 08 February 2026 06:18:48 +0000 (0:00:02.259) 0:27:46.109 ******* 2026-02-08 06:18:50.729507 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2026-02-08 06:18:50.729515 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2026-02-08 06:18:50.729523 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2026-02-08 06:18:50.729529 | orchestrator | skipping: [testbed-node-4] 2026-02-08 06:18:50.729537 | orchestrator | 2026-02-08 06:18:50.729544 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-02-08 06:18:50.729552 | orchestrator | Sunday 08 February 2026 06:18:48 +0000 (0:00:00.860) 0:27:46.970 ******* 2026-02-08 06:18:50.729576 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not 
containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-02-08 06:18:50.729587 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-02-08 06:18:50.729595 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-02-08 06:18:50.729603 | orchestrator | skipping: [testbed-node-4] 2026-02-08 06:18:50.729613 | orchestrator | 2026-02-08 06:18:50.729624 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-02-08 06:18:50.729635 | orchestrator | Sunday 08 February 2026 06:18:49 +0000 (0:00:00.984) 0:27:47.955 ******* 2026-02-08 06:18:50.729647 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-08 06:18:50.729662 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-08 06:18:50.729676 | orchestrator | skipping: [testbed-node-4] => 
(item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-08 06:18:50.729684 | orchestrator | skipping: [testbed-node-4] 2026-02-08 06:18:50.729691 | orchestrator | 2026-02-08 06:18:50.729701 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-02-08 06:18:50.729710 | orchestrator | Sunday 08 February 2026 06:18:50 +0000 (0:00:00.579) 0:27:48.534 ******* 2026-02-08 06:18:50.729719 | orchestrator | ok: [testbed-node-4] => (item={'changed': False, 'stdout': 'd0204e5b0336', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-02-08 06:18:46.333522', 'end': '2026-02-08 06:18:46.386898', 'delta': '0:00:00.053376', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['d0204e5b0336'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2026-02-08 06:18:50.729728 | orchestrator | ok: [testbed-node-4] => (item={'changed': False, 'stdout': 'c9ff2bec9773', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-02-08 06:18:47.261924', 'end': '2026-02-08 06:18:47.322730', 'delta': '0:00:00.060806', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 
'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['c9ff2bec9773'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2026-02-08 06:18:50.729742 | orchestrator | ok: [testbed-node-4] => (item={'changed': False, 'stdout': '26deff989c40', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-02-08 06:18:47.870903', 'end': '2026-02-08 06:18:47.925485', 'delta': '0:00:00.054582', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['26deff989c40'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2026-02-08 06:18:54.607558 | orchestrator | 2026-02-08 06:18:54.607684 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-02-08 06:18:54.607699 | orchestrator | Sunday 08 February 2026 06:18:50 +0000 (0:00:00.230) 0:27:48.764 ******* 2026-02-08 06:18:54.607708 | orchestrator | ok: [testbed-node-4] 2026-02-08 06:18:54.607718 | orchestrator | 2026-02-08 06:18:54.607727 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-02-08 06:18:54.607735 | orchestrator | Sunday 08 February 2026 06:18:51 +0000 (0:00:00.304) 0:27:49.069 ******* 2026-02-08 06:18:54.607776 | orchestrator | skipping: [testbed-node-4] 2026-02-08 06:18:54.607785 | orchestrator | 2026-02-08 06:18:54.607793 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] 
********************************* 2026-02-08 06:18:54.607800 | orchestrator | Sunday 08 February 2026 06:18:51 +0000 (0:00:00.252) 0:27:49.321 ******* 2026-02-08 06:18:54.607807 | orchestrator | ok: [testbed-node-4] 2026-02-08 06:18:54.607815 | orchestrator | 2026-02-08 06:18:54.607822 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2026-02-08 06:18:54.607829 | orchestrator | Sunday 08 February 2026 06:18:51 +0000 (0:00:00.146) 0:27:49.468 ******* 2026-02-08 06:18:54.607837 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] 2026-02-08 06:18:54.607844 | orchestrator | 2026-02-08 06:18:54.607851 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-02-08 06:18:54.607858 | orchestrator | Sunday 08 February 2026 06:18:52 +0000 (0:00:00.962) 0:27:50.431 ******* 2026-02-08 06:18:54.607865 | orchestrator | ok: [testbed-node-4] 2026-02-08 06:18:54.607872 | orchestrator | 2026-02-08 06:18:54.607879 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-02-08 06:18:54.607902 | orchestrator | Sunday 08 February 2026 06:18:52 +0000 (0:00:00.153) 0:27:50.585 ******* 2026-02-08 06:18:54.607909 | orchestrator | skipping: [testbed-node-4] 2026-02-08 06:18:54.607917 | orchestrator | 2026-02-08 06:18:54.607923 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-02-08 06:18:54.607930 | orchestrator | Sunday 08 February 2026 06:18:52 +0000 (0:00:00.126) 0:27:50.711 ******* 2026-02-08 06:18:54.607937 | orchestrator | skipping: [testbed-node-4] 2026-02-08 06:18:54.607944 | orchestrator | 2026-02-08 06:18:54.607950 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-02-08 06:18:54.607957 | orchestrator | Sunday 08 February 2026 06:18:52 +0000 (0:00:00.232) 0:27:50.944 ******* 2026-02-08 06:18:54.607964 | orchestrator | 
skipping: [testbed-node-4] 2026-02-08 06:18:54.607993 | orchestrator | 2026-02-08 06:18:54.608001 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-02-08 06:18:54.608008 | orchestrator | Sunday 08 February 2026 06:18:53 +0000 (0:00:00.148) 0:27:51.093 ******* 2026-02-08 06:18:54.608015 | orchestrator | skipping: [testbed-node-4] 2026-02-08 06:18:54.608022 | orchestrator | 2026-02-08 06:18:54.608028 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-02-08 06:18:54.608035 | orchestrator | Sunday 08 February 2026 06:18:53 +0000 (0:00:00.138) 0:27:51.231 ******* 2026-02-08 06:18:54.608042 | orchestrator | ok: [testbed-node-4] 2026-02-08 06:18:54.608049 | orchestrator | 2026-02-08 06:18:54.608055 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-02-08 06:18:54.608062 | orchestrator | Sunday 08 February 2026 06:18:53 +0000 (0:00:00.175) 0:27:51.406 ******* 2026-02-08 06:18:54.608070 | orchestrator | skipping: [testbed-node-4] 2026-02-08 06:18:54.608078 | orchestrator | 2026-02-08 06:18:54.608085 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-02-08 06:18:54.608092 | orchestrator | Sunday 08 February 2026 06:18:53 +0000 (0:00:00.127) 0:27:51.534 ******* 2026-02-08 06:18:54.608100 | orchestrator | ok: [testbed-node-4] 2026-02-08 06:18:54.608107 | orchestrator | 2026-02-08 06:18:54.608114 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-02-08 06:18:54.608121 | orchestrator | Sunday 08 February 2026 06:18:54 +0000 (0:00:00.527) 0:27:52.061 ******* 2026-02-08 06:18:54.608129 | orchestrator | skipping: [testbed-node-4] 2026-02-08 06:18:54.608136 | orchestrator | 2026-02-08 06:18:54.608143 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-02-08 06:18:54.608152 
| orchestrator | Sunday 08 February 2026 06:18:54 +0000 (0:00:00.139) 0:27:52.200 ******* 2026-02-08 06:18:54.608159 | orchestrator | ok: [testbed-node-4] 2026-02-08 06:18:54.608167 | orchestrator | 2026-02-08 06:18:54.608174 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-02-08 06:18:54.608181 | orchestrator | Sunday 08 February 2026 06:18:54 +0000 (0:00:00.170) 0:27:52.371 ******* 2026-02-08 06:18:54.608200 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-08 06:18:54.608233 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--98a4cb59--dd7a--5ec9--b94d--174a40339046-osd--block--98a4cb59--dd7a--5ec9--b94d--174a40339046', 'dm-uuid-LVM-yHxwYWjXM5rCMy03I7W1d35MN3jq2cawRH7KXMBBm3D5PVB6eMaUeZw4tW6dlYUm'], 'uuids': ['4e6e3c31-d8d4-42a4-8244-dd9f1ae0f37a'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'e630d271', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['RH7KXM-BBm3-D5PV-B6eM-aUeZ-w4tW-6dlYUm']}})  2026-02-08 06:18:54.608245 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_33bf36ec-77e2-4563-8915-2d028f665133', 'scsi-SQEMU_QEMU_HARDDISK_33bf36ec-77e2-4563-8915-2d028f665133'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU 
HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '33bf36ec', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-02-08 06:18:54.608259 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-98ZUfd-U8I1-l7ve-H02s-xtrR-VP1N-Q4DCna', 'scsi-0QEMU_QEMU_HARDDISK_2c937877-c8d8-449b-a5f6-0239aca924e2', 'scsi-SQEMU_QEMU_HARDDISK_2c937877-c8d8-449b-a5f6-0239aca924e2'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '2c937877', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--1f36c880--548c--5a66--856f--2c4e799d94fc-osd--block--1f36c880--548c--5a66--856f--2c4e799d94fc']}})  2026-02-08 06:18:54.608268 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-08 06:18:54.608276 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-08 06:18:54.608285 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-08-02-32-46-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-02-08 06:18:54.608306 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 
'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-08 06:18:54.608314 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-4yMbT2-42rf-LMHw-z72r-UJ4I-0Vm8-FBkphH', 'dm-uuid-CRYPT-LUKS2-d4dc753534094cc38b724c68e8894d37-4yMbT2-42rf-LMHw-z72r-UJ4I-0Vm8-FBkphH'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-02-08 06:18:54.608330 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-08 06:18:54.989871 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--1f36c880--548c--5a66--856f--2c4e799d94fc-osd--block--1f36c880--548c--5a66--856f--2c4e799d94fc', 'dm-uuid-LVM-EUBg1b33eC5p7VqB18wXpct56RBS9v7i4yMbT242rfLMHwz72rUJ4I0Vm8FBkphH'], 'uuids': ['d4dc7535-3409-4cc3-8b72-4c68e8894d37'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '2c937877', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['4yMbT2-42rf-LMHw-z72r-UJ4I-0Vm8-FBkphH']}})  2026-02-08 06:18:54.990008 | orchestrator | skipping: 
[testbed-node-4] => (item={'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-HUiqZb-DVo1-dI2F-bvgQ-kR6m-4ift-nZXZfU', 'scsi-0QEMU_QEMU_HARDDISK_e630d271-3aac-4ce5-a41f-fdcd87f60fea', 'scsi-SQEMU_QEMU_HARDDISK_e630d271-3aac-4ce5-a41f-fdcd87f60fea'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'e630d271', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--98a4cb59--dd7a--5ec9--b94d--174a40339046-osd--block--98a4cb59--dd7a--5ec9--b94d--174a40339046']}})  2026-02-08 06:18:54.990060 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-08 06:18:54.990074 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6152c601-f22c-4ab1-825c-0b7a8c2f9bf8', 'scsi-SQEMU_QEMU_HARDDISK_6152c601-f22c-4ab1-825c-0b7a8c2f9bf8'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '6152c601', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6152c601-f22c-4ab1-825c-0b7a8c2f9bf8-part16', 'scsi-SQEMU_QEMU_HARDDISK_6152c601-f22c-4ab1-825c-0b7a8c2f9bf8-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': 
'227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6152c601-f22c-4ab1-825c-0b7a8c2f9bf8-part14', 'scsi-SQEMU_QEMU_HARDDISK_6152c601-f22c-4ab1-825c-0b7a8c2f9bf8-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6152c601-f22c-4ab1-825c-0b7a8c2f9bf8-part15', 'scsi-SQEMU_QEMU_HARDDISK_6152c601-f22c-4ab1-825c-0b7a8c2f9bf8-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6152c601-f22c-4ab1-825c-0b7a8c2f9bf8-part1', 'scsi-SQEMU_QEMU_HARDDISK_6152c601-f22c-4ab1-825c-0b7a8c2f9bf8-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}})  2026-02-08 06:18:54.990114 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-08 06:18:54.990122 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-08 06:18:54.990134 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-RH7KXM-BBm3-D5PV-B6eM-aUeZ-w4tW-6dlYUm', 'dm-uuid-CRYPT-LUKS2-4e6e3c31d8d442a48244dd9f1ae0f37a-RH7KXM-BBm3-D5PV-B6eM-aUeZ-w4tW-6dlYUm'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-02-08 06:18:54.990142 | orchestrator | skipping: [testbed-node-4] 2026-02-08 06:18:54.990150 | orchestrator | 2026-02-08 06:18:54.990158 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-02-08 06:18:54.990166 | orchestrator | Sunday 08 February 2026 06:18:54 +0000 (0:00:00.417) 0:27:52.788 ******* 2026-02-08 06:18:54.990173 | orchestrator | skipping: [testbed-node-4] => 
(item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-08 06:18:54.990185 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--98a4cb59--dd7a--5ec9--b94d--174a40339046-osd--block--98a4cb59--dd7a--5ec9--b94d--174a40339046', 'dm-uuid-LVM-yHxwYWjXM5rCMy03I7W1d35MN3jq2cawRH7KXMBBm3D5PVB6eMaUeZw4tW6dlYUm'], 'uuids': ['4e6e3c31-d8d4-42a4-8244-dd9f1ae0f37a'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'e630d271', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['RH7KXM-BBm3-D5PV-B6eM-aUeZ-w4tW-6dlYUm']}}, 'ansible_loop_var': 'item'})  2026-02-08 06:18:54.990193 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_33bf36ec-77e2-4563-8915-2d028f665133', 'scsi-SQEMU_QEMU_HARDDISK_33bf36ec-77e2-4563-8915-2d028f665133'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 
'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '33bf36ec', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-08 06:18:54.990206 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-98ZUfd-U8I1-l7ve-H02s-xtrR-VP1N-Q4DCna', 'scsi-0QEMU_QEMU_HARDDISK_2c937877-c8d8-449b-a5f6-0239aca924e2', 'scsi-SQEMU_QEMU_HARDDISK_2c937877-c8d8-449b-a5f6-0239aca924e2'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '2c937877', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--1f36c880--548c--5a66--856f--2c4e799d94fc-osd--block--1f36c880--548c--5a66--856f--2c4e799d94fc']}}, 'ansible_loop_var': 'item'})  2026-02-08 06:18:55.234457 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-08 06:18:55.234531 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-08 06:18:55.234556 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-08-02-32-46-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 
'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-08 06:18:55.234562 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-08 06:18:55.234568 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-4yMbT2-42rf-LMHw-z72r-UJ4I-0Vm8-FBkphH', 'dm-uuid-CRYPT-LUKS2-d4dc753534094cc38b724c68e8894d37-4yMbT2-42rf-LMHw-z72r-UJ4I-0Vm8-FBkphH'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-08 06:18:55.234573 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 
'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-08 06:18:55.234596 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--1f36c880--548c--5a66--856f--2c4e799d94fc-osd--block--1f36c880--548c--5a66--856f--2c4e799d94fc', 'dm-uuid-LVM-EUBg1b33eC5p7VqB18wXpct56RBS9v7i4yMbT242rfLMHwz72rUJ4I0Vm8FBkphH'], 'uuids': ['d4dc7535-3409-4cc3-8b72-4c68e8894d37'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '2c937877', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['4yMbT2-42rf-LMHw-z72r-UJ4I-0Vm8-FBkphH']}}, 'ansible_loop_var': 'item'})  2026-02-08 06:18:55.234603 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-HUiqZb-DVo1-dI2F-bvgQ-kR6m-4ift-nZXZfU', 'scsi-0QEMU_QEMU_HARDDISK_e630d271-3aac-4ce5-a41f-fdcd87f60fea', 'scsi-SQEMU_QEMU_HARDDISK_e630d271-3aac-4ce5-a41f-fdcd87f60fea'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'e630d271', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 
'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--98a4cb59--dd7a--5ec9--b94d--174a40339046-osd--block--98a4cb59--dd7a--5ec9--b94d--174a40339046']}}, 'ansible_loop_var': 'item'})  2026-02-08 06:18:55.234615 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-08 06:18:55.234629 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6152c601-f22c-4ab1-825c-0b7a8c2f9bf8', 'scsi-SQEMU_QEMU_HARDDISK_6152c601-f22c-4ab1-825c-0b7a8c2f9bf8'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '6152c601', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6152c601-f22c-4ab1-825c-0b7a8c2f9bf8-part16', 'scsi-SQEMU_QEMU_HARDDISK_6152c601-f22c-4ab1-825c-0b7a8c2f9bf8-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_6152c601-f22c-4ab1-825c-0b7a8c2f9bf8-part14', 'scsi-SQEMU_QEMU_HARDDISK_6152c601-f22c-4ab1-825c-0b7a8c2f9bf8-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6152c601-f22c-4ab1-825c-0b7a8c2f9bf8-part15', 'scsi-SQEMU_QEMU_HARDDISK_6152c601-f22c-4ab1-825c-0b7a8c2f9bf8-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6152c601-f22c-4ab1-825c-0b7a8c2f9bf8-part1', 'scsi-SQEMU_QEMU_HARDDISK_6152c601-f22c-4ab1-825c-0b7a8c2f9bf8-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-08 06:19:04.417180 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-08 06:19:04.417324 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-08 06:19:04.417342 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-RH7KXM-BBm3-D5PV-B6eM-aUeZ-w4tW-6dlYUm', 'dm-uuid-CRYPT-LUKS2-4e6e3c31d8d442a48244dd9f1ae0f37a-RH7KXM-BBm3-D5PV-B6eM-aUeZ-w4tW-6dlYUm'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 
'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-08 06:19:04.417356 | orchestrator | skipping: [testbed-node-4]
2026-02-08 06:19:04.417369 | orchestrator |
2026-02-08 06:19:04.417383 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ******************************
2026-02-08 06:19:04.417395 | orchestrator | Sunday 08 February 2026 06:18:55 +0000 (0:00:00.472) 0:27:53.273 *******
2026-02-08 06:19:04.417406 | orchestrator | ok: [testbed-node-4]
2026-02-08 06:19:04.417418 | orchestrator |
2026-02-08 06:19:04.417430 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] ***************
2026-02-08 06:19:04.417441 | orchestrator | Sunday 08 February 2026 06:18:55 +0000 (0:00:00.146) 0:27:53.745 *******
2026-02-08 06:19:04.417452 | orchestrator | ok: [testbed-node-4]
2026-02-08 06:19:04.417463 | orchestrator |
2026-02-08 06:19:04.417474 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-02-08 06:19:04.417485 | orchestrator | Sunday 08 February 2026 06:18:55 +0000 (0:00:00.513) 0:27:53.892 *******
2026-02-08 06:19:04.417496 | orchestrator | ok: [testbed-node-4]
2026-02-08 06:19:04.417507 | orchestrator |
2026-02-08 06:19:04.417518 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-02-08 06:19:04.417529 | orchestrator | Sunday 08 February 2026 06:18:56 +0000 (0:00:00.137) 0:27:54.405 *******
2026-02-08 06:19:04.417540 | orchestrator | skipping: [testbed-node-4]
2026-02-08 06:19:04.417551 | orchestrator |
2026-02-08 06:19:04.417562 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-02-08 06:19:04.417574 | orchestrator | Sunday 08 February 2026 06:18:56 +0000 (0:00:00.244) 0:27:54.542 *******
2026-02-08 06:19:04.417585 | orchestrator | skipping: [testbed-node-4]
2026-02-08 06:19:04.417596 | orchestrator |
2026-02-08 06:19:04.417607 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-02-08 06:19:04.417618 | orchestrator | Sunday 08 February 2026 06:18:56 +0000 (0:00:00.150) 0:27:54.787 *******
2026-02-08 06:19:04.417631 | orchestrator | skipping: [testbed-node-4]
2026-02-08 06:19:04.417644 | orchestrator |
2026-02-08 06:19:04.417657 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] *************************
2026-02-08 06:19:04.417671 | orchestrator | Sunday 08 February 2026 06:18:56 +0000 (0:00:00.150) 0:27:54.937 *******
2026-02-08 06:19:04.417684 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0)
2026-02-08 06:19:04.417705 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1)
2026-02-08 06:19:04.417720 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2)
2026-02-08 06:19:04.417732 | orchestrator |
2026-02-08 06:19:04.417761 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] *************************
2026-02-08 06:19:04.417775 | orchestrator | Sunday 08 February 2026 06:18:57 +0000 (0:00:01.044) 0:27:55.981 *******
2026-02-08 06:19:04.417788 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2026-02-08 06:19:04.417801 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2026-02-08 06:19:04.417814 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2026-02-08 06:19:04.417827 | orchestrator | skipping: [testbed-node-4]
2026-02-08 06:19:04.417840 | orchestrator |
2026-02-08 06:19:04.417854 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] ***********************
2026-02-08 06:19:04.417867 | orchestrator | Sunday 08 February 2026 06:18:58 +0000 (0:00:00.174) 0:27:56.155 *******
2026-02-08 06:19:04.417897 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-4
2026-02-08 06:19:04.417911 | orchestrator |
2026-02-08 06:19:04.417925 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-02-08 06:19:04.417939 | orchestrator | Sunday 08 February 2026 06:18:58 +0000 (0:00:00.566) 0:27:56.722 *******
2026-02-08 06:19:04.417952 | orchestrator | skipping: [testbed-node-4]
2026-02-08 06:19:04.417965 | orchestrator |
2026-02-08 06:19:04.418139 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-02-08 06:19:04.418164 | orchestrator | Sunday 08 February 2026 06:18:58 +0000 (0:00:00.151) 0:27:56.874 *******
2026-02-08 06:19:04.418182 | orchestrator | skipping: [testbed-node-4]
2026-02-08 06:19:04.418200 | orchestrator |
2026-02-08 06:19:04.418219 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-02-08 06:19:04.418238 | orchestrator | Sunday 08 February 2026 06:18:58 +0000 (0:00:00.156) 0:27:57.030 *******
2026-02-08 06:19:04.418257 | orchestrator | skipping: [testbed-node-4]
2026-02-08 06:19:04.418277 | orchestrator |
2026-02-08 06:19:04.418296 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-02-08 06:19:04.418314 | orchestrator | Sunday 08 February 2026 06:18:59 +0000 (0:00:00.190) 0:27:57.220 *******
2026-02-08 06:19:04.418333 | orchestrator | ok: [testbed-node-4]
2026-02-08 06:19:04.418345 | orchestrator |
2026-02-08 06:19:04.418362 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-02-08 06:19:04.418380 | orchestrator | Sunday 08 February 2026 06:18:59 +0000 (0:00:00.257) 0:27:57.478 *******
2026-02-08 06:19:04.418399 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)
2026-02-08 06:19:04.418417 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)
2026-02-08 06:19:04.418435 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)
2026-02-08 06:19:04.418454 | orchestrator | skipping: [testbed-node-4]
2026-02-08 06:19:04.418473 | orchestrator |
2026-02-08 06:19:04.418493 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-02-08 06:19:04.418511 | orchestrator | Sunday 08 February 2026 06:18:59 +0000 (0:00:00.445) 0:27:57.924 *******
2026-02-08 06:19:04.418530 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)
2026-02-08 06:19:04.418542 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)
2026-02-08 06:19:04.418553 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)
2026-02-08 06:19:04.418564 | orchestrator | skipping: [testbed-node-4]
2026-02-08 06:19:04.418575 | orchestrator |
2026-02-08 06:19:04.418586 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-02-08 06:19:04.418597 | orchestrator | Sunday 08 February 2026 06:19:00 +0000 (0:00:00.432) 0:27:58.357 *******
2026-02-08 06:19:04.418607 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)
2026-02-08 06:19:04.418618 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)
2026-02-08 06:19:04.418656 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)
2026-02-08 06:19:04.418679 | orchestrator | skipping: [testbed-node-4]
2026-02-08 06:19:04.418690 | orchestrator |
2026-02-08 06:19:04.418701 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-02-08 06:19:04.418712 | orchestrator | Sunday 08 February 2026 06:19:00 +0000 (0:00:00.437) 0:27:58.794 *******
2026-02-08 06:19:04.418723 | orchestrator | ok: [testbed-node-4]
2026-02-08 06:19:04.418733 | orchestrator |
2026-02-08 06:19:04.418744 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-02-08 06:19:04.418755 | orchestrator | Sunday 08 February 2026 06:19:00 +0000 (0:00:00.196) 0:27:58.991 *******
2026-02-08 06:19:04.418766 | orchestrator | ok: [testbed-node-4] => (item=0)
2026-02-08 06:19:04.418777 | orchestrator |
2026-02-08 06:19:04.418788 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] **************************************
2026-02-08 06:19:04.418799 | orchestrator | Sunday 08 February 2026 06:19:01 +0000 (0:00:00.357) 0:27:59.348 *******
2026-02-08 06:19:04.418810 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-02-08 06:19:04.418821 | orchestrator | ok: [testbed-node-4 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-08 06:19:04.418832 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-08 06:19:04.418843 | orchestrator | ok: [testbed-node-4 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
2026-02-08 06:19:04.418854 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-4)
2026-02-08 06:19:04.418865 | orchestrator | ok: [testbed-node-4 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-02-08 06:19:04.418876 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-02-08 06:19:04.418887 | orchestrator |
2026-02-08 06:19:04.418898 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ********************************
2026-02-08 06:19:04.418908 | orchestrator | Sunday 08 February 2026 06:19:02 +0000 (0:00:01.119) 0:28:00.468 *******
2026-02-08 06:19:04.418927 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-02-08 06:19:04.418938 | orchestrator | ok: [testbed-node-4 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-08 06:19:04.418949 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-08 06:19:04.418960 | orchestrator | ok: [testbed-node-4 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
2026-02-08 06:19:04.418971 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-4)
2026-02-08 06:19:04.419004 | orchestrator | ok: [testbed-node-4 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-02-08 06:19:04.419015 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-02-08 06:19:04.419026 | orchestrator |
2026-02-08 06:19:04.419049 | orchestrator | TASK [Stop ceph rgw when upgrading from stable-3.2] ****************************
2026-02-08 06:19:19.542204 | orchestrator | Sunday 08 February 2026 06:19:04 +0000 (0:00:01.979) 0:28:02.447 *******
2026-02-08 06:19:19.542319 | orchestrator | changed: [testbed-node-4]
2026-02-08 06:19:19.542336 | orchestrator |
2026-02-08 06:19:19.542349 | orchestrator | TASK [Stop ceph rgw (pt. 1)] ***************************************************
2026-02-08 06:19:19.542359 | orchestrator | Sunday 08 February 2026 06:19:05 +0000 (0:00:01.256) 0:28:03.704 *******
2026-02-08 06:19:19.542371 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2026-02-08 06:19:19.542382 | orchestrator |
2026-02-08 06:19:19.542393 | orchestrator | TASK [Stop ceph rgw (pt. 2)] ***************************************************
2026-02-08 06:19:19.542404 | orchestrator | Sunday 08 February 2026 06:19:07 +0000 (0:00:01.899) 0:28:05.604 *******
2026-02-08 06:19:19.542414 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2026-02-08 06:19:19.542480 | orchestrator |
2026-02-08 06:19:19.542517 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-02-08 06:19:19.542528 | orchestrator | Sunday 08 February 2026 06:19:08 +0000 (0:00:01.289) 0:28:06.894 *******
2026-02-08 06:19:19.542538 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-4
2026-02-08 06:19:19.542548 | orchestrator |
2026-02-08 06:19:19.542558 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-02-08 06:19:19.542567 | orchestrator | Sunday 08 February 2026 06:19:09 +0000 (0:00:00.225) 0:28:07.119 *******
2026-02-08 06:19:19.542577 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-4
2026-02-08 06:19:19.542587 | orchestrator |
2026-02-08 06:19:19.542597 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-02-08 06:19:19.542606 | orchestrator | Sunday 08 February 2026 06:19:09 +0000 (0:00:00.220) 0:28:07.340 *******
2026-02-08 06:19:19.542616 | orchestrator | skipping: [testbed-node-4]
2026-02-08 06:19:19.542630 | orchestrator |
2026-02-08 06:19:19.542647 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-02-08 06:19:19.542658 | orchestrator | Sunday 08 February 2026 06:19:09 +0000 (0:00:00.143) 0:28:07.484 *******
2026-02-08 06:19:19.542668 | orchestrator | ok: [testbed-node-4]
2026-02-08 06:19:19.542679 | orchestrator |
2026-02-08 06:19:19.542689 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-02-08 06:19:19.542699 | orchestrator | Sunday 08 February 2026 06:19:10 +0000 (0:00:00.583) 0:28:08.067 *******
2026-02-08 06:19:19.542709 | orchestrator | ok: [testbed-node-4]
2026-02-08 06:19:19.542719 | orchestrator |
2026-02-08 06:19:19.542729 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-02-08 06:19:19.542738 | orchestrator | Sunday 08 February 2026 06:19:10 +0000 (0:00:00.511) 0:28:08.579 *******
2026-02-08 06:19:19.542747 | orchestrator | ok: [testbed-node-4]
2026-02-08 06:19:19.542759 | orchestrator |
2026-02-08 06:19:19.542770 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-02-08 06:19:19.542781 | orchestrator | Sunday 08 February 2026 06:19:11 +0000 (0:00:00.556) 0:28:09.136 *******
2026-02-08 06:19:19.542793 | orchestrator | skipping: [testbed-node-4]
2026-02-08 06:19:19.542804 | orchestrator |
2026-02-08 06:19:19.542815 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-02-08 06:19:19.542827 | orchestrator | Sunday 08 February 2026 06:19:11 +0000 (0:00:00.124) 0:28:09.261 *******
2026-02-08 06:19:19.542839 | orchestrator | skipping: [testbed-node-4]
2026-02-08 06:19:19.542850 | orchestrator |
2026-02-08 06:19:19.542860 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-02-08 06:19:19.542870 | orchestrator | Sunday 08 February 2026 06:19:11 +0000 (0:00:00.444) 0:28:09.705 *******
2026-02-08 06:19:19.542879 | orchestrator | skipping: [testbed-node-4]
2026-02-08 06:19:19.542889 | orchestrator |
2026-02-08 06:19:19.542898 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-02-08 06:19:19.542908 | orchestrator | Sunday 08 February 2026 06:19:11 +0000 (0:00:00.150) 0:28:09.855 *******
2026-02-08 06:19:19.542917 |
orchestrator | ok: [testbed-node-4] 2026-02-08 06:19:19.542927 | orchestrator | 2026-02-08 06:19:19.542937 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-02-08 06:19:19.542952 | orchestrator | Sunday 08 February 2026 06:19:12 +0000 (0:00:00.552) 0:28:10.408 ******* 2026-02-08 06:19:19.542968 | orchestrator | ok: [testbed-node-4] 2026-02-08 06:19:19.543008 | orchestrator | 2026-02-08 06:19:19.543024 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-02-08 06:19:19.543040 | orchestrator | Sunday 08 February 2026 06:19:12 +0000 (0:00:00.543) 0:28:10.951 ******* 2026-02-08 06:19:19.543055 | orchestrator | skipping: [testbed-node-4] 2026-02-08 06:19:19.543070 | orchestrator | 2026-02-08 06:19:19.543084 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-02-08 06:19:19.543099 | orchestrator | Sunday 08 February 2026 06:19:13 +0000 (0:00:00.149) 0:28:11.101 ******* 2026-02-08 06:19:19.543124 | orchestrator | skipping: [testbed-node-4] 2026-02-08 06:19:19.543140 | orchestrator | 2026-02-08 06:19:19.543170 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-02-08 06:19:19.543187 | orchestrator | Sunday 08 February 2026 06:19:13 +0000 (0:00:00.126) 0:28:11.228 ******* 2026-02-08 06:19:19.543203 | orchestrator | ok: [testbed-node-4] 2026-02-08 06:19:19.543219 | orchestrator | 2026-02-08 06:19:19.543234 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-02-08 06:19:19.543251 | orchestrator | Sunday 08 February 2026 06:19:13 +0000 (0:00:00.170) 0:28:11.398 ******* 2026-02-08 06:19:19.543268 | orchestrator | ok: [testbed-node-4] 2026-02-08 06:19:19.543285 | orchestrator | 2026-02-08 06:19:19.543298 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-02-08 06:19:19.543308 
| orchestrator | Sunday 08 February 2026 06:19:13 +0000 (0:00:00.157) 0:28:11.556 ******* 2026-02-08 06:19:19.543317 | orchestrator | ok: [testbed-node-4] 2026-02-08 06:19:19.543327 | orchestrator | 2026-02-08 06:19:19.543356 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-02-08 06:19:19.543366 | orchestrator | Sunday 08 February 2026 06:19:13 +0000 (0:00:00.156) 0:28:11.712 ******* 2026-02-08 06:19:19.543376 | orchestrator | skipping: [testbed-node-4] 2026-02-08 06:19:19.543386 | orchestrator | 2026-02-08 06:19:19.543396 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-02-08 06:19:19.543406 | orchestrator | Sunday 08 February 2026 06:19:13 +0000 (0:00:00.140) 0:28:11.853 ******* 2026-02-08 06:19:19.543415 | orchestrator | skipping: [testbed-node-4] 2026-02-08 06:19:19.543425 | orchestrator | 2026-02-08 06:19:19.543435 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-02-08 06:19:19.543444 | orchestrator | Sunday 08 February 2026 06:19:13 +0000 (0:00:00.136) 0:28:11.989 ******* 2026-02-08 06:19:19.543454 | orchestrator | skipping: [testbed-node-4] 2026-02-08 06:19:19.543464 | orchestrator | 2026-02-08 06:19:19.543473 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-02-08 06:19:19.543483 | orchestrator | Sunday 08 February 2026 06:19:14 +0000 (0:00:00.139) 0:28:12.129 ******* 2026-02-08 06:19:19.543492 | orchestrator | ok: [testbed-node-4] 2026-02-08 06:19:19.543502 | orchestrator | 2026-02-08 06:19:19.543512 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-02-08 06:19:19.543521 | orchestrator | Sunday 08 February 2026 06:19:14 +0000 (0:00:00.148) 0:28:12.278 ******* 2026-02-08 06:19:19.543531 | orchestrator | ok: [testbed-node-4] 2026-02-08 06:19:19.543540 | orchestrator | 2026-02-08 06:19:19.543550 
| orchestrator | TASK [ceph-common : Include configure_repository.yml] ************************** 2026-02-08 06:19:19.543559 | orchestrator | Sunday 08 February 2026 06:19:14 +0000 (0:00:00.627) 0:28:12.905 ******* 2026-02-08 06:19:19.543569 | orchestrator | skipping: [testbed-node-4] 2026-02-08 06:19:19.543578 | orchestrator | 2026-02-08 06:19:19.543588 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] ************** 2026-02-08 06:19:19.543597 | orchestrator | Sunday 08 February 2026 06:19:14 +0000 (0:00:00.135) 0:28:13.040 ******* 2026-02-08 06:19:19.543607 | orchestrator | skipping: [testbed-node-4] 2026-02-08 06:19:19.543617 | orchestrator | 2026-02-08 06:19:19.543626 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] **************** 2026-02-08 06:19:19.543636 | orchestrator | Sunday 08 February 2026 06:19:15 +0000 (0:00:00.207) 0:28:13.248 ******* 2026-02-08 06:19:19.543645 | orchestrator | skipping: [testbed-node-4] 2026-02-08 06:19:19.543655 | orchestrator | 2026-02-08 06:19:19.543665 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ******************** 2026-02-08 06:19:19.543674 | orchestrator | Sunday 08 February 2026 06:19:15 +0000 (0:00:00.168) 0:28:13.417 ******* 2026-02-08 06:19:19.543683 | orchestrator | skipping: [testbed-node-4] 2026-02-08 06:19:19.543693 | orchestrator | 2026-02-08 06:19:19.543703 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] *************** 2026-02-08 06:19:19.543713 | orchestrator | Sunday 08 February 2026 06:19:15 +0000 (0:00:00.140) 0:28:13.558 ******* 2026-02-08 06:19:19.543731 | orchestrator | skipping: [testbed-node-4] 2026-02-08 06:19:19.543741 | orchestrator | 2026-02-08 06:19:19.543751 | orchestrator | TASK [ceph-common : Get ceph version] ****************************************** 2026-02-08 06:19:19.543760 | orchestrator | Sunday 08 February 2026 06:19:15 +0000 (0:00:00.155) 0:28:13.713 ******* 
2026-02-08 06:19:19.543770 | orchestrator | skipping: [testbed-node-4] 2026-02-08 06:19:19.543779 | orchestrator | 2026-02-08 06:19:19.543789 | orchestrator | TASK [ceph-common : Set_fact ceph_version] ************************************* 2026-02-08 06:19:19.543798 | orchestrator | Sunday 08 February 2026 06:19:15 +0000 (0:00:00.153) 0:28:13.867 ******* 2026-02-08 06:19:19.543808 | orchestrator | skipping: [testbed-node-4] 2026-02-08 06:19:19.543818 | orchestrator | 2026-02-08 06:19:19.543827 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] *** 2026-02-08 06:19:19.543838 | orchestrator | Sunday 08 February 2026 06:19:15 +0000 (0:00:00.149) 0:28:14.016 ******* 2026-02-08 06:19:19.543848 | orchestrator | skipping: [testbed-node-4] 2026-02-08 06:19:19.543857 | orchestrator | 2026-02-08 06:19:19.543867 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] ************************* 2026-02-08 06:19:19.543877 | orchestrator | Sunday 08 February 2026 06:19:16 +0000 (0:00:00.161) 0:28:14.178 ******* 2026-02-08 06:19:19.543886 | orchestrator | skipping: [testbed-node-4] 2026-02-08 06:19:19.543896 | orchestrator | 2026-02-08 06:19:19.543906 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************ 2026-02-08 06:19:19.543915 | orchestrator | Sunday 08 February 2026 06:19:16 +0000 (0:00:00.151) 0:28:14.329 ******* 2026-02-08 06:19:19.543925 | orchestrator | skipping: [testbed-node-4] 2026-02-08 06:19:19.543935 | orchestrator | 2026-02-08 06:19:19.543944 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ******************** 2026-02-08 06:19:19.543954 | orchestrator | Sunday 08 February 2026 06:19:16 +0000 (0:00:00.138) 0:28:14.468 ******* 2026-02-08 06:19:19.543963 | orchestrator | skipping: [testbed-node-4] 2026-02-08 06:19:19.543973 | orchestrator | 2026-02-08 06:19:19.544011 | orchestrator | TASK [ceph-common : Include selinux.yml] 
*************************************** 2026-02-08 06:19:19.544029 | orchestrator | Sunday 08 February 2026 06:19:16 +0000 (0:00:00.163) 0:28:14.631 ******* 2026-02-08 06:19:19.544045 | orchestrator | skipping: [testbed-node-4] 2026-02-08 06:19:19.544061 | orchestrator | 2026-02-08 06:19:19.544072 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] *************** 2026-02-08 06:19:19.544088 | orchestrator | Sunday 08 February 2026 06:19:17 +0000 (0:00:00.554) 0:28:15.185 ******* 2026-02-08 06:19:19.544098 | orchestrator | ok: [testbed-node-4] 2026-02-08 06:19:19.544108 | orchestrator | 2026-02-08 06:19:19.544117 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ****************************** 2026-02-08 06:19:19.544127 | orchestrator | Sunday 08 February 2026 06:19:18 +0000 (0:00:00.927) 0:28:16.113 ******* 2026-02-08 06:19:19.544137 | orchestrator | ok: [testbed-node-4] 2026-02-08 06:19:19.544146 | orchestrator | 2026-02-08 06:19:19.544156 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] *********************** 2026-02-08 06:19:19.544165 | orchestrator | Sunday 08 February 2026 06:19:19 +0000 (0:00:01.239) 0:28:17.352 ******* 2026-02-08 06:19:19.544175 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-4 2026-02-08 06:19:19.544185 | orchestrator | 2026-02-08 06:19:19.544195 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************ 2026-02-08 06:19:19.544212 | orchestrator | Sunday 08 February 2026 06:19:19 +0000 (0:00:00.226) 0:28:17.578 ******* 2026-02-08 06:19:35.878464 | orchestrator | skipping: [testbed-node-4] 2026-02-08 06:19:35.878545 | orchestrator | 2026-02-08 06:19:35.878555 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] **************** 2026-02-08 06:19:35.878562 | orchestrator | Sunday 08 February 2026 06:19:19 +0000 (0:00:00.148) 0:28:17.727 ******* 
2026-02-08 06:19:35.878568 | orchestrator | skipping: [testbed-node-4] 2026-02-08 06:19:35.878574 | orchestrator | 2026-02-08 06:19:35.878580 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] ************************** 2026-02-08 06:19:35.878601 | orchestrator | Sunday 08 February 2026 06:19:19 +0000 (0:00:00.150) 0:28:17.877 ******* 2026-02-08 06:19:35.878607 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-02-08 06:19:35.878613 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-02-08 06:19:35.878620 | orchestrator | 2026-02-08 06:19:35.878625 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ******************** 2026-02-08 06:19:35.878631 | orchestrator | Sunday 08 February 2026 06:19:20 +0000 (0:00:00.790) 0:28:18.668 ******* 2026-02-08 06:19:35.878636 | orchestrator | ok: [testbed-node-4] 2026-02-08 06:19:35.878643 | orchestrator | 2026-02-08 06:19:35.878648 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************ 2026-02-08 06:19:35.878654 | orchestrator | Sunday 08 February 2026 06:19:21 +0000 (0:00:00.484) 0:28:19.152 ******* 2026-02-08 06:19:35.878659 | orchestrator | skipping: [testbed-node-4] 2026-02-08 06:19:35.878665 | orchestrator | 2026-02-08 06:19:35.878670 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ******************** 2026-02-08 06:19:35.878676 | orchestrator | Sunday 08 February 2026 06:19:21 +0000 (0:00:00.150) 0:28:19.303 ******* 2026-02-08 06:19:35.878681 | orchestrator | skipping: [testbed-node-4] 2026-02-08 06:19:35.878686 | orchestrator | 2026-02-08 06:19:35.878692 | orchestrator | TASK [ceph-container-common : Include registry.yml] **************************** 2026-02-08 06:19:35.878697 | orchestrator | Sunday 08 February 2026 06:19:21 +0000 (0:00:00.176) 0:28:19.479 ******* 2026-02-08 06:19:35.878703 | orchestrator | 
skipping: [testbed-node-4] 2026-02-08 06:19:35.878708 | orchestrator | 2026-02-08 06:19:35.878713 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] ************************* 2026-02-08 06:19:35.878719 | orchestrator | Sunday 08 February 2026 06:19:21 +0000 (0:00:00.146) 0:28:19.626 ******* 2026-02-08 06:19:35.878724 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-4 2026-02-08 06:19:35.878730 | orchestrator | 2026-02-08 06:19:35.878735 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ******************** 2026-02-08 06:19:35.878740 | orchestrator | Sunday 08 February 2026 06:19:22 +0000 (0:00:00.769) 0:28:20.396 ******* 2026-02-08 06:19:35.878746 | orchestrator | ok: [testbed-node-4] 2026-02-08 06:19:35.878751 | orchestrator | 2026-02-08 06:19:35.878757 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] *** 2026-02-08 06:19:35.878762 | orchestrator | Sunday 08 February 2026 06:19:23 +0000 (0:00:00.709) 0:28:21.105 ******* 2026-02-08 06:19:35.878768 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-02-08 06:19:35.878773 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/prometheus:v2.7.2)  2026-02-08 06:19:35.878779 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/grafana/grafana:6.7.4)  2026-02-08 06:19:35.878784 | orchestrator | skipping: [testbed-node-4] 2026-02-08 06:19:35.878790 | orchestrator | 2026-02-08 06:19:35.878795 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] *********** 2026-02-08 06:19:35.878801 | orchestrator | Sunday 08 February 2026 06:19:23 +0000 (0:00:00.153) 0:28:21.258 ******* 2026-02-08 06:19:35.878806 | orchestrator | skipping: [testbed-node-4] 2026-02-08 06:19:35.878811 | orchestrator | 2026-02-08 06:19:35.878817 | orchestrator | TASK [ceph-container-common : Export local 
ceph dev image] ********************* 2026-02-08 06:19:35.878822 | orchestrator | Sunday 08 February 2026 06:19:23 +0000 (0:00:00.148) 0:28:21.407 ******* 2026-02-08 06:19:35.878827 | orchestrator | skipping: [testbed-node-4] 2026-02-08 06:19:35.878833 | orchestrator | 2026-02-08 06:19:35.878838 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************ 2026-02-08 06:19:35.878843 | orchestrator | Sunday 08 February 2026 06:19:23 +0000 (0:00:00.175) 0:28:21.582 ******* 2026-02-08 06:19:35.878849 | orchestrator | skipping: [testbed-node-4] 2026-02-08 06:19:35.878854 | orchestrator | 2026-02-08 06:19:35.878859 | orchestrator | TASK [ceph-container-common : Load ceph dev image] ***************************** 2026-02-08 06:19:35.878865 | orchestrator | Sunday 08 February 2026 06:19:23 +0000 (0:00:00.158) 0:28:21.741 ******* 2026-02-08 06:19:35.878874 | orchestrator | skipping: [testbed-node-4] 2026-02-08 06:19:35.878880 | orchestrator | 2026-02-08 06:19:35.878885 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ****************** 2026-02-08 06:19:35.878891 | orchestrator | Sunday 08 February 2026 06:19:23 +0000 (0:00:00.152) 0:28:21.893 ******* 2026-02-08 06:19:35.878896 | orchestrator | skipping: [testbed-node-4] 2026-02-08 06:19:35.878901 | orchestrator | 2026-02-08 06:19:35.878907 | orchestrator | TASK [ceph-container-common : Get ceph version] ******************************** 2026-02-08 06:19:35.878922 | orchestrator | Sunday 08 February 2026 06:19:24 +0000 (0:00:00.188) 0:28:22.081 ******* 2026-02-08 06:19:35.878928 | orchestrator | ok: [testbed-node-4] 2026-02-08 06:19:35.878933 | orchestrator | 2026-02-08 06:19:35.878939 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] *** 2026-02-08 06:19:35.878944 | orchestrator | Sunday 08 February 2026 06:19:25 +0000 (0:00:01.517) 0:28:23.599 ******* 2026-02-08 06:19:35.878949 | orchestrator | ok: 
[testbed-node-4] 2026-02-08 06:19:35.878955 | orchestrator | 2026-02-08 06:19:35.878960 | orchestrator | TASK [ceph-container-common : Include release.yml] ***************************** 2026-02-08 06:19:35.878966 | orchestrator | Sunday 08 February 2026 06:19:25 +0000 (0:00:00.166) 0:28:23.766 ******* 2026-02-08 06:19:35.878971 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-4 2026-02-08 06:19:35.878976 | orchestrator | 2026-02-08 06:19:35.879029 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] ********************* 2026-02-08 06:19:35.879047 | orchestrator | Sunday 08 February 2026 06:19:25 +0000 (0:00:00.267) 0:28:24.034 ******* 2026-02-08 06:19:35.879054 | orchestrator | skipping: [testbed-node-4] 2026-02-08 06:19:35.879060 | orchestrator | 2026-02-08 06:19:35.879067 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ******************** 2026-02-08 06:19:35.879073 | orchestrator | Sunday 08 February 2026 06:19:26 +0000 (0:00:00.177) 0:28:24.211 ******* 2026-02-08 06:19:35.879080 | orchestrator | skipping: [testbed-node-4] 2026-02-08 06:19:35.879086 | orchestrator | 2026-02-08 06:19:35.879093 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ****************** 2026-02-08 06:19:35.879099 | orchestrator | Sunday 08 February 2026 06:19:26 +0000 (0:00:00.469) 0:28:24.681 ******* 2026-02-08 06:19:35.879105 | orchestrator | skipping: [testbed-node-4] 2026-02-08 06:19:35.879112 | orchestrator | 2026-02-08 06:19:35.879118 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] ********************* 2026-02-08 06:19:35.879125 | orchestrator | Sunday 08 February 2026 06:19:26 +0000 (0:00:00.170) 0:28:24.851 ******* 2026-02-08 06:19:35.879131 | orchestrator | skipping: [testbed-node-4] 2026-02-08 06:19:35.879138 | orchestrator | 2026-02-08 06:19:35.879144 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release 
nautilus] ****************** 2026-02-08 06:19:35.879150 | orchestrator | Sunday 08 February 2026 06:19:26 +0000 (0:00:00.179) 0:28:25.031 ******* 2026-02-08 06:19:35.879156 | orchestrator | skipping: [testbed-node-4] 2026-02-08 06:19:35.879163 | orchestrator | 2026-02-08 06:19:35.879169 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] ******************* 2026-02-08 06:19:35.879175 | orchestrator | Sunday 08 February 2026 06:19:27 +0000 (0:00:00.164) 0:28:25.195 ******* 2026-02-08 06:19:35.879181 | orchestrator | skipping: [testbed-node-4] 2026-02-08 06:19:35.879188 | orchestrator | 2026-02-08 06:19:35.879194 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] ******************* 2026-02-08 06:19:35.879201 | orchestrator | Sunday 08 February 2026 06:19:27 +0000 (0:00:00.171) 0:28:25.367 ******* 2026-02-08 06:19:35.879207 | orchestrator | skipping: [testbed-node-4] 2026-02-08 06:19:35.879213 | orchestrator | 2026-02-08 06:19:35.879219 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ******************** 2026-02-08 06:19:35.879226 | orchestrator | Sunday 08 February 2026 06:19:27 +0000 (0:00:00.171) 0:28:25.539 ******* 2026-02-08 06:19:35.879233 | orchestrator | skipping: [testbed-node-4] 2026-02-08 06:19:35.879239 | orchestrator | 2026-02-08 06:19:35.879245 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] ********************** 2026-02-08 06:19:35.879256 | orchestrator | Sunday 08 February 2026 06:19:27 +0000 (0:00:00.207) 0:28:25.746 ******* 2026-02-08 06:19:35.879262 | orchestrator | ok: [testbed-node-4] 2026-02-08 06:19:35.879269 | orchestrator | 2026-02-08 06:19:35.879275 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] ********************** 2026-02-08 06:19:35.879282 | orchestrator | Sunday 08 February 2026 06:19:27 +0000 (0:00:00.287) 0:28:26.034 ******* 2026-02-08 06:19:35.879288 | orchestrator | included: 
/ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-4 2026-02-08 06:19:35.879295 | orchestrator | 2026-02-08 06:19:35.879301 | orchestrator | TASK [ceph-config : Create ceph initial directories] *************************** 2026-02-08 06:19:35.879308 | orchestrator | Sunday 08 February 2026 06:19:28 +0000 (0:00:00.259) 0:28:26.293 ******* 2026-02-08 06:19:35.879314 | orchestrator | ok: [testbed-node-4] => (item=/etc/ceph) 2026-02-08 06:19:35.879320 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/) 2026-02-08 06:19:35.879325 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/mon) 2026-02-08 06:19:35.879331 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/osd) 2026-02-08 06:19:35.879336 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/mds) 2026-02-08 06:19:35.879342 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/tmp) 2026-02-08 06:19:35.879347 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/crash) 2026-02-08 06:19:35.879353 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/radosgw) 2026-02-08 06:19:35.879359 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rgw) 2026-02-08 06:19:35.879364 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mgr) 2026-02-08 06:19:35.879369 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds) 2026-02-08 06:19:35.879375 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd) 2026-02-08 06:19:35.879380 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd) 2026-02-08 06:19:35.879386 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-02-08 06:19:35.879391 | orchestrator | ok: [testbed-node-4] => (item=/var/run/ceph) 2026-02-08 06:19:35.879397 | orchestrator | ok: [testbed-node-4] => (item=/var/log/ceph) 2026-02-08 06:19:35.879402 | orchestrator | 2026-02-08 06:19:35.879407 | orchestrator | 
TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************ 2026-02-08 06:19:35.879413 | orchestrator | Sunday 08 February 2026 06:19:33 +0000 (0:00:05.544) 0:28:31.837 ******* 2026-02-08 06:19:35.879418 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-4 2026-02-08 06:19:35.879424 | orchestrator | 2026-02-08 06:19:35.879433 | orchestrator | TASK [ceph-config : Create rados gateway instance directories] ***************** 2026-02-08 06:19:35.879439 | orchestrator | Sunday 08 February 2026 06:19:34 +0000 (0:00:00.225) 0:28:32.063 ******* 2026-02-08 06:19:35.879445 | orchestrator | ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-02-08 06:19:35.879451 | orchestrator | 2026-02-08 06:19:35.879457 | orchestrator | TASK [ceph-config : Generate environment file] ********************************* 2026-02-08 06:19:35.879462 | orchestrator | Sunday 08 February 2026 06:19:34 +0000 (0:00:00.876) 0:28:32.940 ******* 2026-02-08 06:19:35.879467 | orchestrator | ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-02-08 06:19:35.879473 | orchestrator | 2026-02-08 06:19:35.879479 | orchestrator | TASK [ceph-config : Reset num_osds] ******************************************** 2026-02-08 06:19:35.879488 | orchestrator | Sunday 08 February 2026 06:19:35 +0000 (0:00:00.968) 0:28:33.909 ******* 2026-02-08 06:19:55.631405 | orchestrator | skipping: [testbed-node-4] 2026-02-08 06:19:55.631515 | orchestrator | 2026-02-08 06:19:55.631532 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] ********************* 2026-02-08 06:19:55.631545 | orchestrator | Sunday 08 February 2026 06:19:36 +0000 (0:00:00.164) 0:28:34.074 ******* 2026-02-08 06:19:55.631556 | orchestrator | skipping: [testbed-node-4] 2026-02-08 06:19:55.631590 | 
orchestrator | 2026-02-08 06:19:55.631602 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ****************** 2026-02-08 06:19:55.631614 | orchestrator | Sunday 08 February 2026 06:19:36 +0000 (0:00:00.145) 0:28:34.219 ******* 2026-02-08 06:19:55.631624 | orchestrator | skipping: [testbed-node-4] 2026-02-08 06:19:55.631635 | orchestrator | 2026-02-08 06:19:55.631646 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] ********************************* 2026-02-08 06:19:55.631657 | orchestrator | Sunday 08 February 2026 06:19:36 +0000 (0:00:00.167) 0:28:34.387 ******* 2026-02-08 06:19:55.631667 | orchestrator | skipping: [testbed-node-4] 2026-02-08 06:19:55.631678 | orchestrator | 2026-02-08 06:19:55.631689 | orchestrator | TASK [ceph-config : Set_fact _devices] ***************************************** 2026-02-08 06:19:55.631700 | orchestrator | Sunday 08 February 2026 06:19:36 +0000 (0:00:00.133) 0:28:34.521 ******* 2026-02-08 06:19:55.631710 | orchestrator | skipping: [testbed-node-4] 2026-02-08 06:19:55.631721 | orchestrator | 2026-02-08 06:19:55.631732 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2026-02-08 06:19:55.631744 | orchestrator | Sunday 08 February 2026 06:19:36 +0000 (0:00:00.149) 0:28:34.671 ******* 2026-02-08 06:19:55.631754 | orchestrator | skipping: [testbed-node-4] 2026-02-08 06:19:55.631765 | orchestrator | 2026-02-08 06:19:55.631776 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2026-02-08 06:19:55.631787 | orchestrator | Sunday 08 February 2026 06:19:36 +0000 (0:00:00.133) 0:28:34.804 ******* 2026-02-08 06:19:55.631798 | orchestrator | skipping: [testbed-node-4] 2026-02-08 06:19:55.631808 | orchestrator | 2026-02-08 06:19:55.631819 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] 
*** 2026-02-08 06:19:55.631830 | orchestrator | Sunday 08 February 2026 06:19:36 +0000 (0:00:00.158) 0:28:34.962 ******* 2026-02-08 06:19:55.631842 | orchestrator | skipping: [testbed-node-4] 2026-02-08 06:19:55.631853 | orchestrator | 2026-02-08 06:19:55.631864 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] *** 2026-02-08 06:19:55.631875 | orchestrator | Sunday 08 February 2026 06:19:37 +0000 (0:00:00.160) 0:28:35.123 ******* 2026-02-08 06:19:55.631885 | orchestrator | skipping: [testbed-node-4] 2026-02-08 06:19:55.631896 | orchestrator | 2026-02-08 06:19:55.631907 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] ********************* 2026-02-08 06:19:55.631918 | orchestrator | Sunday 08 February 2026 06:19:37 +0000 (0:00:00.139) 0:28:35.262 ******* 2026-02-08 06:19:55.631928 | orchestrator | skipping: [testbed-node-4] 2026-02-08 06:19:55.631939 | orchestrator | 2026-02-08 06:19:55.631950 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] ******************************* 2026-02-08 06:19:55.631961 | orchestrator | Sunday 08 February 2026 06:19:37 +0000 (0:00:00.146) 0:28:35.408 ******* 2026-02-08 06:19:55.631971 | orchestrator | skipping: [testbed-node-4] 2026-02-08 06:19:55.632008 | orchestrator | 2026-02-08 06:19:55.632022 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] ************** 2026-02-08 06:19:55.632032 | orchestrator | Sunday 08 February 2026 06:19:37 +0000 (0:00:00.156) 0:28:35.565 ******* 2026-02-08 06:19:55.632043 | orchestrator | changed: [testbed-node-4 -> testbed-node-2(192.168.16.12)] 2026-02-08 06:19:55.632054 | orchestrator | 2026-02-08 06:19:55.632065 | orchestrator | TASK [ceph-config : Render rgw configs] **************************************** 2026-02-08 06:19:55.632075 | orchestrator | Sunday 08 February 2026 06:19:41 +0000 (0:00:04.112) 0:28:39.678 ******* 2026-02-08 06:19:55.632087 | orchestrator | 
ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-02-08 06:19:55.632098 | orchestrator | 2026-02-08 06:19:55.632109 | orchestrator | TASK [ceph-config : Set config to cluster] ************************************* 2026-02-08 06:19:55.632120 | orchestrator | Sunday 08 February 2026 06:19:41 +0000 (0:00:00.205) 0:28:39.884 ******* 2026-02-08 06:19:55.632133 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log'}]) 2026-02-08 06:19:55.632157 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.14:8081'}]) 2026-02-08 06:19:55.632170 | orchestrator | 2026-02-08 06:19:55.632181 | orchestrator | TASK [ceph-config : Set rgw configs to file] *********************************** 2026-02-08 06:19:55.632192 | orchestrator | Sunday 08 February 2026 06:19:45 +0000 (0:00:03.766) 0:28:43.650 ******* 2026-02-08 06:19:55.632203 | orchestrator | skipping: [testbed-node-4] 2026-02-08 06:19:55.632213 | orchestrator | 2026-02-08 06:19:55.632225 | orchestrator | TASK [ceph-config : Create ceph conf directory] ******************************** 2026-02-08 06:19:55.632236 | orchestrator | Sunday 08 February 2026 06:19:45 +0000 (0:00:00.138) 0:28:43.789 ******* 2026-02-08 06:19:55.632247 | orchestrator | skipping: [testbed-node-4] 2026-02-08 06:19:55.632258 | orchestrator | 2026-02-08 06:19:55.632268 | orchestrator | TASK [ceph-facts : Set current 
radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-02-08 06:19:55.632297 | orchestrator | Sunday 08 February 2026 06:19:45 +0000 (0:00:00.130) 0:28:43.919 ******* 2026-02-08 06:19:55.632309 | orchestrator | skipping: [testbed-node-4] 2026-02-08 06:19:55.632319 | orchestrator | 2026-02-08 06:19:55.632330 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-02-08 06:19:55.632341 | orchestrator | Sunday 08 February 2026 06:19:46 +0000 (0:00:00.168) 0:28:44.088 ******* 2026-02-08 06:19:55.632352 | orchestrator | skipping: [testbed-node-4] 2026-02-08 06:19:55.632370 | orchestrator | 2026-02-08 06:19:55.632388 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-02-08 06:19:55.632415 | orchestrator | Sunday 08 February 2026 06:19:46 +0000 (0:00:00.159) 0:28:44.248 ******* 2026-02-08 06:19:55.632435 | orchestrator | skipping: [testbed-node-4] 2026-02-08 06:19:55.632452 | orchestrator | 2026-02-08 06:19:55.632469 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-02-08 06:19:55.632485 | orchestrator | Sunday 08 February 2026 06:19:46 +0000 (0:00:00.159) 0:28:44.408 ******* 2026-02-08 06:19:55.632502 | orchestrator | ok: [testbed-node-4] 2026-02-08 06:19:55.632520 | orchestrator | 2026-02-08 06:19:55.632537 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-02-08 06:19:55.632554 | orchestrator | Sunday 08 February 2026 06:19:46 +0000 (0:00:00.284) 0:28:44.692 ******* 2026-02-08 06:19:55.632571 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2026-02-08 06:19:55.632589 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2026-02-08 06:19:55.632607 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2026-02-08 06:19:55.632626 | orchestrator | skipping: 
[testbed-node-4] 2026-02-08 06:19:55.632644 | orchestrator | 2026-02-08 06:19:55.632663 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-02-08 06:19:55.632676 | orchestrator | Sunday 08 February 2026 06:19:47 +0000 (0:00:00.459) 0:28:45.151 ******* 2026-02-08 06:19:55.632687 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2026-02-08 06:19:55.632748 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2026-02-08 06:19:55.632760 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2026-02-08 06:19:55.632771 | orchestrator | skipping: [testbed-node-4] 2026-02-08 06:19:55.632782 | orchestrator | 2026-02-08 06:19:55.632792 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-02-08 06:19:55.632803 | orchestrator | Sunday 08 February 2026 06:19:47 +0000 (0:00:00.460) 0:28:45.612 ******* 2026-02-08 06:19:55.632814 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2026-02-08 06:19:55.632836 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2026-02-08 06:19:55.632846 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2026-02-08 06:19:55.632857 | orchestrator | skipping: [testbed-node-4] 2026-02-08 06:19:55.632868 | orchestrator | 2026-02-08 06:19:55.632879 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-02-08 06:19:55.632889 | orchestrator | Sunday 08 February 2026 06:19:48 +0000 (0:00:01.064) 0:28:46.676 ******* 2026-02-08 06:19:55.632900 | orchestrator | ok: [testbed-node-4] 2026-02-08 06:19:55.632911 | orchestrator | 2026-02-08 06:19:55.632921 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-02-08 06:19:55.632932 | orchestrator | Sunday 08 February 2026 06:19:48 +0000 (0:00:00.210) 0:28:46.887 ******* 2026-02-08 06:19:55.632943 | orchestrator | ok: 
[testbed-node-4] => (item=0) 2026-02-08 06:19:55.632953 | orchestrator | 2026-02-08 06:19:55.632964 | orchestrator | TASK [ceph-config : Generate Ceph file] **************************************** 2026-02-08 06:19:55.632975 | orchestrator | Sunday 08 February 2026 06:19:50 +0000 (0:00:01.406) 0:28:48.293 ******* 2026-02-08 06:19:55.633025 | orchestrator | ok: [testbed-node-4] 2026-02-08 06:19:55.633039 | orchestrator | 2026-02-08 06:19:55.633050 | orchestrator | TASK [ceph-rgw : Include common.yml] ******************************************* 2026-02-08 06:19:55.633061 | orchestrator | Sunday 08 February 2026 06:19:51 +0000 (0:00:00.833) 0:28:49.127 ******* 2026-02-08 06:19:55.633072 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/common.yml for testbed-node-4 2026-02-08 06:19:55.633083 | orchestrator | 2026-02-08 06:19:55.633094 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2026-02-08 06:19:55.633105 | orchestrator | Sunday 08 February 2026 06:19:51 +0000 (0:00:00.233) 0:28:49.360 ******* 2026-02-08 06:19:55.633116 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-08 06:19:55.633126 | orchestrator | skipping: [testbed-node-4] => (item=None)  2026-02-08 06:19:55.633137 | orchestrator | ok: [testbed-node-4 -> {{ groups.get(mon_group_name)[0] }}] 2026-02-08 06:19:55.633148 | orchestrator | 2026-02-08 06:19:55.633159 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2026-02-08 06:19:55.633170 | orchestrator | Sunday 08 February 2026 06:19:53 +0000 (0:00:02.327) 0:28:51.687 ******* 2026-02-08 06:19:55.633181 | orchestrator | ok: [testbed-node-4] => (item=None) 2026-02-08 06:19:55.633191 | orchestrator | skipping: [testbed-node-4] => (item=None)  2026-02-08 06:19:55.633202 | orchestrator | ok: [testbed-node-4] 2026-02-08 06:19:55.633213 | orchestrator | 2026-02-08 06:19:55.633230 | orchestrator | TASK [ceph-rgw : Copy 
SSL certificate & key data to certificate path] ********** 2026-02-08 06:19:55.633241 | orchestrator | Sunday 08 February 2026 06:19:54 +0000 (0:00:00.982) 0:28:52.670 ******* 2026-02-08 06:19:55.633252 | orchestrator | skipping: [testbed-node-4] 2026-02-08 06:19:55.633263 | orchestrator | 2026-02-08 06:19:55.633274 | orchestrator | TASK [ceph-rgw : Include_tasks pre_requisite.yml] ****************************** 2026-02-08 06:19:55.633284 | orchestrator | Sunday 08 February 2026 06:19:54 +0000 (0:00:00.123) 0:28:52.794 ******* 2026-02-08 06:19:55.633295 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/pre_requisite.yml for testbed-node-4 2026-02-08 06:19:55.633307 | orchestrator | 2026-02-08 06:19:55.633317 | orchestrator | TASK [ceph-rgw : Create rados gateway directories] ***************************** 2026-02-08 06:19:55.633328 | orchestrator | Sunday 08 February 2026 06:19:54 +0000 (0:00:00.215) 0:28:53.010 ******* 2026-02-08 06:19:55.633351 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-02-08 06:20:46.389427 | orchestrator | 2026-02-08 06:20:46.389544 | orchestrator | TASK [ceph-rgw : Create rgw keyrings] ****************************************** 2026-02-08 06:20:46.389563 | orchestrator | Sunday 08 February 2026 06:19:55 +0000 (0:00:00.660) 0:28:53.670 ******* 2026-02-08 06:20:46.389575 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-08 06:20:46.389610 | orchestrator | changed: [testbed-node-4 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2026-02-08 06:20:46.389629 | orchestrator | 2026-02-08 06:20:46.389649 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2026-02-08 06:20:46.389667 | orchestrator | Sunday 08 February 2026 06:19:59 +0000 (0:00:04.308) 0:28:57.979 ******* 
2026-02-08 06:20:46.389686 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-08 06:20:46.389705 | orchestrator | ok: [testbed-node-4 -> {{ groups.get(mon_group_name)[0] }}] 2026-02-08 06:20:46.389725 | orchestrator | 2026-02-08 06:20:46.389738 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2026-02-08 06:20:46.389749 | orchestrator | Sunday 08 February 2026 06:20:02 +0000 (0:00:02.877) 0:29:00.856 ******* 2026-02-08 06:20:46.389760 | orchestrator | ok: [testbed-node-4] => (item=None) 2026-02-08 06:20:46.389772 | orchestrator | ok: [testbed-node-4] 2026-02-08 06:20:46.389783 | orchestrator | 2026-02-08 06:20:46.389794 | orchestrator | TASK [ceph-rgw : Rgw pool creation tasks] ************************************** 2026-02-08 06:20:46.389805 | orchestrator | Sunday 08 February 2026 06:20:03 +0000 (0:00:00.982) 0:29:01.839 ******* 2026-02-08 06:20:46.389816 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/rgw_create_pools.yml for testbed-node-4 2026-02-08 06:20:46.389827 | orchestrator | 2026-02-08 06:20:46.389838 | orchestrator | TASK [ceph-rgw : Create ec profile] ******************************************** 2026-02-08 06:20:46.389849 | orchestrator | Sunday 08 February 2026 06:20:04 +0000 (0:00:00.222) 0:29:02.061 ******* 2026-02-08 06:20:46.389860 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-08 06:20:46.389871 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-08 06:20:46.389883 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-08 06:20:46.389894 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 
'size': 3, 'type': 'replicated'}})  2026-02-08 06:20:46.389904 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-08 06:20:46.389915 | orchestrator | skipping: [testbed-node-4] 2026-02-08 06:20:46.389927 | orchestrator | 2026-02-08 06:20:46.389937 | orchestrator | TASK [ceph-rgw : Set crush rule] *********************************************** 2026-02-08 06:20:46.389948 | orchestrator | Sunday 08 February 2026 06:20:04 +0000 (0:00:00.616) 0:29:02.677 ******* 2026-02-08 06:20:46.389959 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-08 06:20:46.389970 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-08 06:20:46.389981 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-08 06:20:46.390084 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-08 06:20:46.390103 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-08 06:20:46.390114 | orchestrator | skipping: [testbed-node-4] 2026-02-08 06:20:46.390125 | orchestrator | 2026-02-08 06:20:46.390136 | orchestrator | TASK [ceph-rgw : Create rgw pools] ********************************************* 2026-02-08 06:20:46.390147 | orchestrator | Sunday 08 February 2026 06:20:05 +0000 (0:00:00.662) 0:29:03.340 ******* 2026-02-08 06:20:46.390158 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-02-08 06:20:46.390180 
| orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-02-08 06:20:46.390191 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-02-08 06:20:46.390212 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-02-08 06:20:46.390224 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-02-08 06:20:46.390235 | orchestrator | 2026-02-08 06:20:46.390246 | orchestrator | TASK [ceph-rgw : Include_tasks openstack-keystone.yml] ************************* 2026-02-08 06:20:46.390277 | orchestrator | Sunday 08 February 2026 06:20:36 +0000 (0:00:31.147) 0:29:34.487 ******* 2026-02-08 06:20:46.390289 | orchestrator | skipping: [testbed-node-4] 2026-02-08 06:20:46.390299 | orchestrator | 2026-02-08 06:20:46.390311 | orchestrator | TASK [ceph-rgw : Include_tasks start_radosgw.yml] ****************************** 2026-02-08 06:20:46.390322 | orchestrator | Sunday 08 February 2026 06:20:36 +0000 (0:00:00.146) 0:29:34.634 ******* 2026-02-08 06:20:46.390332 | orchestrator | skipping: [testbed-node-4] 2026-02-08 06:20:46.390343 | orchestrator | 2026-02-08 06:20:46.390354 | orchestrator | TASK [ceph-rgw : Include start_docker_rgw.yml] ********************************* 2026-02-08 06:20:46.390365 | orchestrator | Sunday 08 February 2026 06:20:36 +0000 (0:00:00.155) 0:29:34.790 ******* 2026-02-08 06:20:46.390377 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/start_docker_rgw.yml for testbed-node-4 2026-02-08 06:20:46.390388 | orchestrator | 2026-02-08 06:20:46.390399 | orchestrator | TASK [ceph-rgw : Include_task 
systemd.yml] ************************************* 2026-02-08 06:20:46.390410 | orchestrator | Sunday 08 February 2026 06:20:36 +0000 (0:00:00.246) 0:29:35.036 ******* 2026-02-08 06:20:46.390421 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/systemd.yml for testbed-node-4 2026-02-08 06:20:46.390432 | orchestrator | 2026-02-08 06:20:46.390443 | orchestrator | TASK [ceph-rgw : Generate systemd unit file] *********************************** 2026-02-08 06:20:46.390454 | orchestrator | Sunday 08 February 2026 06:20:37 +0000 (0:00:00.208) 0:29:35.245 ******* 2026-02-08 06:20:46.390465 | orchestrator | ok: [testbed-node-4] 2026-02-08 06:20:46.390476 | orchestrator | 2026-02-08 06:20:46.390487 | orchestrator | TASK [ceph-rgw : Generate systemd ceph-radosgw target file] ******************** 2026-02-08 06:20:46.390498 | orchestrator | Sunday 08 February 2026 06:20:38 +0000 (0:00:01.065) 0:29:36.311 ******* 2026-02-08 06:20:46.390509 | orchestrator | ok: [testbed-node-4] 2026-02-08 06:20:46.390520 | orchestrator | 2026-02-08 06:20:46.390531 | orchestrator | TASK [ceph-rgw : Enable ceph-radosgw.target] *********************************** 2026-02-08 06:20:46.390542 | orchestrator | Sunday 08 February 2026 06:20:39 +0000 (0:00:01.211) 0:29:37.523 ******* 2026-02-08 06:20:46.390553 | orchestrator | ok: [testbed-node-4] 2026-02-08 06:20:46.390564 | orchestrator | 2026-02-08 06:20:46.390575 | orchestrator | TASK [ceph-rgw : Systemd start rgw container] ********************************** 2026-02-08 06:20:46.390586 | orchestrator | Sunday 08 February 2026 06:20:40 +0000 (0:00:01.238) 0:29:38.762 ******* 2026-02-08 06:20:46.390597 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-02-08 06:20:46.390608 | orchestrator | 2026-02-08 06:20:46.390619 | orchestrator | PLAY [Upgrade ceph rgws cluster] *********************************************** 2026-02-08 06:20:46.390630 | 
orchestrator | 2026-02-08 06:20:46.390641 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-02-08 06:20:46.390652 | orchestrator | Sunday 08 February 2026 06:20:43 +0000 (0:00:02.410) 0:29:41.173 ******* 2026-02-08 06:20:46.390663 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-5 2026-02-08 06:20:46.390674 | orchestrator | 2026-02-08 06:20:46.390684 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-02-08 06:20:46.390705 | orchestrator | Sunday 08 February 2026 06:20:43 +0000 (0:00:00.270) 0:29:41.444 ******* 2026-02-08 06:20:46.390716 | orchestrator | ok: [testbed-node-5] 2026-02-08 06:20:46.390727 | orchestrator | 2026-02-08 06:20:46.390738 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-02-08 06:20:46.390749 | orchestrator | Sunday 08 February 2026 06:20:43 +0000 (0:00:00.516) 0:29:41.960 ******* 2026-02-08 06:20:46.390760 | orchestrator | ok: [testbed-node-5] 2026-02-08 06:20:46.390771 | orchestrator | 2026-02-08 06:20:46.390782 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-02-08 06:20:46.390793 | orchestrator | Sunday 08 February 2026 06:20:44 +0000 (0:00:00.165) 0:29:42.126 ******* 2026-02-08 06:20:46.390804 | orchestrator | ok: [testbed-node-5] 2026-02-08 06:20:46.390815 | orchestrator | 2026-02-08 06:20:46.390825 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-02-08 06:20:46.390837 | orchestrator | Sunday 08 February 2026 06:20:44 +0000 (0:00:00.496) 0:29:42.622 ******* 2026-02-08 06:20:46.390848 | orchestrator | ok: [testbed-node-5] 2026-02-08 06:20:46.390858 | orchestrator | 2026-02-08 06:20:46.390869 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-02-08 06:20:46.390880 | orchestrator | Sunday 08 
February 2026 06:20:44 +0000 (0:00:00.153) 0:29:42.776 ******* 2026-02-08 06:20:46.390906 | orchestrator | ok: [testbed-node-5] 2026-02-08 06:20:46.390927 | orchestrator | 2026-02-08 06:20:46.390938 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-02-08 06:20:46.390949 | orchestrator | Sunday 08 February 2026 06:20:44 +0000 (0:00:00.150) 0:29:42.926 ******* 2026-02-08 06:20:46.390960 | orchestrator | ok: [testbed-node-5] 2026-02-08 06:20:46.390970 | orchestrator | 2026-02-08 06:20:46.390981 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-02-08 06:20:46.391032 | orchestrator | Sunday 08 February 2026 06:20:45 +0000 (0:00:00.475) 0:29:43.402 ******* 2026-02-08 06:20:46.391045 | orchestrator | skipping: [testbed-node-5] 2026-02-08 06:20:46.391056 | orchestrator | 2026-02-08 06:20:46.391067 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2026-02-08 06:20:46.391078 | orchestrator | Sunday 08 February 2026 06:20:45 +0000 (0:00:00.135) 0:29:43.537 ******* 2026-02-08 06:20:46.391089 | orchestrator | ok: [testbed-node-5] 2026-02-08 06:20:46.391100 | orchestrator | 2026-02-08 06:20:46.391112 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2026-02-08 06:20:46.391123 | orchestrator | Sunday 08 February 2026 06:20:45 +0000 (0:00:00.141) 0:29:43.679 ******* 2026-02-08 06:20:46.391134 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-08 06:20:46.391145 | orchestrator | ok: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-08 06:20:46.391156 | orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-08 06:20:46.391167 | orchestrator | 2026-02-08 06:20:46.391178 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] 
******************************** 2026-02-08 06:20:46.391196 | orchestrator | Sunday 08 February 2026 06:20:46 +0000 (0:00:00.744) 0:29:44.424 ******* 2026-02-08 06:20:53.671793 | orchestrator | ok: [testbed-node-5] 2026-02-08 06:20:53.671929 | orchestrator | 2026-02-08 06:20:53.671960 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-02-08 06:20:53.672236 | orchestrator | Sunday 08 February 2026 06:20:46 +0000 (0:00:00.263) 0:29:44.687 ******* 2026-02-08 06:20:53.672259 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-08 06:20:53.672280 | orchestrator | ok: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-08 06:20:53.672294 | orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-08 06:20:53.672307 | orchestrator | 2026-02-08 06:20:53.672321 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-02-08 06:20:53.672363 | orchestrator | Sunday 08 February 2026 06:20:48 +0000 (0:00:01.898) 0:29:46.586 ******* 2026-02-08 06:20:53.672379 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2026-02-08 06:20:53.672393 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2026-02-08 06:20:53.672406 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2026-02-08 06:20:53.672416 | orchestrator | skipping: [testbed-node-5] 2026-02-08 06:20:53.672427 | orchestrator | 2026-02-08 06:20:53.672438 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-02-08 06:20:53.672449 | orchestrator | Sunday 08 February 2026 06:20:48 +0000 (0:00:00.421) 0:29:47.008 ******* 2026-02-08 06:20:53.672463 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not 
containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-02-08 06:20:53.672478 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-02-08 06:20:53.672490 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-02-08 06:20:53.672501 | orchestrator | skipping: [testbed-node-5] 2026-02-08 06:20:53.672512 | orchestrator | 2026-02-08 06:20:53.672523 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-02-08 06:20:53.672533 | orchestrator | Sunday 08 February 2026 06:20:49 +0000 (0:00:00.633) 0:29:47.641 ******* 2026-02-08 06:20:53.672546 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-08 06:20:53.672560 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-08 06:20:53.672572 | orchestrator | skipping: [testbed-node-5] => 
(item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-08 06:20:53.672583 | orchestrator | skipping: [testbed-node-5] 2026-02-08 06:20:53.672594 | orchestrator | 2026-02-08 06:20:53.672604 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-02-08 06:20:53.672615 | orchestrator | Sunday 08 February 2026 06:20:49 +0000 (0:00:00.195) 0:29:47.837 ******* 2026-02-08 06:20:53.672651 | orchestrator | ok: [testbed-node-5] => (item={'changed': False, 'stdout': 'd0204e5b0336', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-02-08 06:20:47.182282', 'end': '2026-02-08 06:20:47.235084', 'delta': '0:00:00.052802', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['d0204e5b0336'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2026-02-08 06:20:53.672675 | orchestrator | ok: [testbed-node-5] => (item={'changed': False, 'stdout': 'c9ff2bec9773', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-02-08 06:20:47.754440', 'end': '2026-02-08 06:20:47.803420', 'delta': '0:00:00.048980', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 
'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['c9ff2bec9773'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2026-02-08 06:20:53.672687 | orchestrator | ok: [testbed-node-5] => (item={'changed': False, 'stdout': '26deff989c40', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-02-08 06:20:48.324098', 'end': '2026-02-08 06:20:48.378761', 'delta': '0:00:00.054663', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['26deff989c40'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2026-02-08 06:20:53.672699 | orchestrator | 2026-02-08 06:20:53.672710 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-02-08 06:20:53.672721 | orchestrator | Sunday 08 February 2026 06:20:49 +0000 (0:00:00.202) 0:29:48.039 ******* 2026-02-08 06:20:53.672732 | orchestrator | ok: [testbed-node-5] 2026-02-08 06:20:53.672743 | orchestrator | 2026-02-08 06:20:53.672754 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-02-08 06:20:53.672765 | orchestrator | Sunday 08 February 2026 06:20:50 +0000 (0:00:00.278) 0:29:48.318 ******* 2026-02-08 06:20:53.672776 | orchestrator | skipping: [testbed-node-5] 2026-02-08 06:20:53.672786 | orchestrator | 2026-02-08 06:20:53.672797 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] 
********************************* 2026-02-08 06:20:53.672808 | orchestrator | Sunday 08 February 2026 06:20:50 +0000 (0:00:00.260) 0:29:48.578 ******* 2026-02-08 06:20:53.672819 | orchestrator | ok: [testbed-node-5] 2026-02-08 06:20:53.672829 | orchestrator | 2026-02-08 06:20:53.672840 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2026-02-08 06:20:53.672851 | orchestrator | Sunday 08 February 2026 06:20:50 +0000 (0:00:00.148) 0:29:48.727 ******* 2026-02-08 06:20:53.672862 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2026-02-08 06:20:53.672873 | orchestrator | 2026-02-08 06:20:53.672884 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-02-08 06:20:53.672895 | orchestrator | Sunday 08 February 2026 06:20:52 +0000 (0:00:01.426) 0:29:50.154 ******* 2026-02-08 06:20:53.672906 | orchestrator | ok: [testbed-node-5] 2026-02-08 06:20:53.672916 | orchestrator | 2026-02-08 06:20:53.672927 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-02-08 06:20:53.672938 | orchestrator | Sunday 08 February 2026 06:20:52 +0000 (0:00:00.468) 0:29:50.622 ******* 2026-02-08 06:20:53.672948 | orchestrator | skipping: [testbed-node-5] 2026-02-08 06:20:53.672959 | orchestrator | 2026-02-08 06:20:53.672970 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-02-08 06:20:53.672981 | orchestrator | Sunday 08 February 2026 06:20:52 +0000 (0:00:00.122) 0:29:50.745 ******* 2026-02-08 06:20:53.672991 | orchestrator | skipping: [testbed-node-5] 2026-02-08 06:20:53.673038 | orchestrator | 2026-02-08 06:20:53.673058 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-02-08 06:20:53.673087 | orchestrator | Sunday 08 February 2026 06:20:52 +0000 (0:00:00.229) 0:29:50.975 ******* 2026-02-08 06:20:53.673103 | orchestrator | 
skipping: [testbed-node-5] 2026-02-08 06:20:53.673114 | orchestrator | 2026-02-08 06:20:53.673125 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-02-08 06:20:53.673135 | orchestrator | Sunday 08 February 2026 06:20:53 +0000 (0:00:00.131) 0:29:51.106 ******* 2026-02-08 06:20:53.673146 | orchestrator | skipping: [testbed-node-5] 2026-02-08 06:20:53.673157 | orchestrator | 2026-02-08 06:20:53.673168 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-02-08 06:20:53.673178 | orchestrator | Sunday 08 February 2026 06:20:53 +0000 (0:00:00.122) 0:29:51.229 ******* 2026-02-08 06:20:53.673189 | orchestrator | ok: [testbed-node-5] 2026-02-08 06:20:53.673200 | orchestrator | 2026-02-08 06:20:53.673211 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-02-08 06:20:53.673222 | orchestrator | Sunday 08 February 2026 06:20:53 +0000 (0:00:00.186) 0:29:51.415 ******* 2026-02-08 06:20:53.673232 | orchestrator | skipping: [testbed-node-5] 2026-02-08 06:20:53.673243 | orchestrator | 2026-02-08 06:20:53.673254 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-02-08 06:20:53.673265 | orchestrator | Sunday 08 February 2026 06:20:53 +0000 (0:00:00.123) 0:29:51.539 ******* 2026-02-08 06:20:53.673275 | orchestrator | ok: [testbed-node-5] 2026-02-08 06:20:53.673286 | orchestrator | 2026-02-08 06:20:53.673297 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-02-08 06:20:53.673317 | orchestrator | Sunday 08 February 2026 06:20:53 +0000 (0:00:00.174) 0:29:51.714 ******* 2026-02-08 06:20:54.244041 | orchestrator | skipping: [testbed-node-5] 2026-02-08 06:20:54.244146 | orchestrator | 2026-02-08 06:20:54.244164 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-02-08 06:20:54.244178 
| orchestrator | Sunday 08 February 2026 06:20:53 +0000 (0:00:00.161) 0:29:51.876 ******* 2026-02-08 06:20:54.244190 | orchestrator | ok: [testbed-node-5] 2026-02-08 06:20:54.244202 | orchestrator | 2026-02-08 06:20:54.244213 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-02-08 06:20:54.244224 | orchestrator | Sunday 08 February 2026 06:20:53 +0000 (0:00:00.171) 0:29:52.047 ******* 2026-02-08 06:20:54.244238 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-08 06:20:54.244256 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--b3e05e81--e469--5668--9a53--5e8f92025307-osd--block--b3e05e81--e469--5668--9a53--5e8f92025307', 'dm-uuid-LVM-aSc4wxc22lxsBl3bsZYV01tD6GZC6hnhTrIfEw7ihzB97cb6a1fBLoGV7FMIqFCx'], 'uuids': ['1cd382bb-e631-457f-8e23-7bd2a48df188'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '88e353e1', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['TrIfEw-7ihz-B97c-b6a1-fBLo-GV7F-MIqFCx']}})  2026-02-08 06:20:54.244271 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_380fccde-fc16-4afd-8581-e221e230c62f', 'scsi-SQEMU_QEMU_HARDDISK_380fccde-fc16-4afd-8581-e221e230c62f'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU 
HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '380fccde', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-02-08 06:20:54.244310 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-rTZdUO-b7Sr-jZr2-VH77-tbu4-uLLV-wifIFG', 'scsi-0QEMU_QEMU_HARDDISK_fd096023-3e18-4205-a743-fc49c7d9ed02', 'scsi-SQEMU_QEMU_HARDDISK_fd096023-3e18-4205-a743-fc49c7d9ed02'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'fd096023', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--7ad89cb8--326d--5a7d--8045--6e04c12be05a-osd--block--7ad89cb8--326d--5a7d--8045--6e04c12be05a']}})  2026-02-08 06:20:54.244324 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-08 06:20:54.244337 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-08 06:20:54.244370 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-08-02-32-42-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-02-08 06:20:54.244383 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 
'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-08 06:20:54.244396 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-UV2PuY-Xg8S-GZaH-izGQ-3g7w-DUil-CwJsOM', 'dm-uuid-CRYPT-LUKS2-acd8aeb3a91948899ba0cb5b1d4bdfb1-UV2PuY-Xg8S-GZaH-izGQ-3g7w-DUil-CwJsOM'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-02-08 06:20:54.244408 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-08 06:20:54.244420 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--7ad89cb8--326d--5a7d--8045--6e04c12be05a-osd--block--7ad89cb8--326d--5a7d--8045--6e04c12be05a', 'dm-uuid-LVM-rcT9E5PUfbc9LNMW28fCAUBBadlLkYxWUV2PuYXg8SGZaHizGQ3g7wDUilCwJsOM'], 'uuids': ['acd8aeb3-a919-4889-9ba0-cb5b1d4bdfb1'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'fd096023', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['UV2PuY-Xg8S-GZaH-izGQ-3g7w-DUil-CwJsOM']}})  2026-02-08 06:20:54.244443 | orchestrator | skipping: 
[testbed-node-5] => (item={'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-0tVmcc-zy2m-2uFM-aZrn-CnbJ-B058-J5TU15', 'scsi-0QEMU_QEMU_HARDDISK_88e353e1-d5f5-455b-9174-972f0fde258a', 'scsi-SQEMU_QEMU_HARDDISK_88e353e1-d5f5-455b-9174-972f0fde258a'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '88e353e1', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--b3e05e81--e469--5668--9a53--5e8f92025307-osd--block--b3e05e81--e469--5668--9a53--5e8f92025307']}})  2026-02-08 06:20:54.244455 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-08 06:20:54.244480 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0b6d2541-fe07-44e8-aadf-a529695f9c1d', 'scsi-SQEMU_QEMU_HARDDISK_0b6d2541-fe07-44e8-aadf-a529695f9c1d'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '0b6d2541', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0b6d2541-fe07-44e8-aadf-a529695f9c1d-part16', 'scsi-SQEMU_QEMU_HARDDISK_0b6d2541-fe07-44e8-aadf-a529695f9c1d-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': 
'227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0b6d2541-fe07-44e8-aadf-a529695f9c1d-part14', 'scsi-SQEMU_QEMU_HARDDISK_0b6d2541-fe07-44e8-aadf-a529695f9c1d-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0b6d2541-fe07-44e8-aadf-a529695f9c1d-part15', 'scsi-SQEMU_QEMU_HARDDISK_0b6d2541-fe07-44e8-aadf-a529695f9c1d-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0b6d2541-fe07-44e8-aadf-a529695f9c1d-part1', 'scsi-SQEMU_QEMU_HARDDISK_0b6d2541-fe07-44e8-aadf-a529695f9c1d-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}})  2026-02-08 06:20:54.594191 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-08 06:20:54.594295 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-08 06:20:54.594312 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-TrIfEw-7ihz-B97c-b6a1-fBLo-GV7F-MIqFCx', 'dm-uuid-CRYPT-LUKS2-1cd382bbe631457f8e237bd2a48df188-TrIfEw-7ihz-B97c-b6a1-fBLo-GV7F-MIqFCx'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-02-08 06:20:54.594328 | orchestrator | skipping: [testbed-node-5] 2026-02-08 06:20:54.594341 | orchestrator | 2026-02-08 06:20:54.594354 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-02-08 06:20:54.594365 | orchestrator | Sunday 08 February 2026 06:20:54 +0000 (0:00:00.371) 0:29:52.419 ******* 2026-02-08 06:20:54.594378 | orchestrator | skipping: [testbed-node-5] => 
(item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-08 06:20:54.594391 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--b3e05e81--e469--5668--9a53--5e8f92025307-osd--block--b3e05e81--e469--5668--9a53--5e8f92025307', 'dm-uuid-LVM-aSc4wxc22lxsBl3bsZYV01tD6GZC6hnhTrIfEw7ihzB97cb6a1fBLoGV7FMIqFCx'], 'uuids': ['1cd382bb-e631-457f-8e23-7bd2a48df188'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '88e353e1', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['TrIfEw-7ihz-B97c-b6a1-fBLo-GV7F-MIqFCx']}}, 'ansible_loop_var': 'item'})  2026-02-08 06:20:54.594404 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_380fccde-fc16-4afd-8581-e221e230c62f', 'scsi-SQEMU_QEMU_HARDDISK_380fccde-fc16-4afd-8581-e221e230c62f'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 
'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '380fccde', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-08 06:20:54.594459 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-rTZdUO-b7Sr-jZr2-VH77-tbu4-uLLV-wifIFG', 'scsi-0QEMU_QEMU_HARDDISK_fd096023-3e18-4205-a743-fc49c7d9ed02', 'scsi-SQEMU_QEMU_HARDDISK_fd096023-3e18-4205-a743-fc49c7d9ed02'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'fd096023', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--7ad89cb8--326d--5a7d--8045--6e04c12be05a-osd--block--7ad89cb8--326d--5a7d--8045--6e04c12be05a']}}, 'ansible_loop_var': 'item'})  2026-02-08 06:20:54.594476 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-08 06:20:54.594488 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-08 06:20:54.594501 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-08-02-32-42-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 
'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-08 06:20:54.594513 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-08 06:20:54.594551 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-UV2PuY-Xg8S-GZaH-izGQ-3g7w-DUil-CwJsOM', 'dm-uuid-CRYPT-LUKS2-acd8aeb3a91948899ba0cb5b1d4bdfb1-UV2PuY-Xg8S-GZaH-izGQ-3g7w-DUil-CwJsOM'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-08 06:20:56.284501 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 
'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-08 06:20:56.284591 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--7ad89cb8--326d--5a7d--8045--6e04c12be05a-osd--block--7ad89cb8--326d--5a7d--8045--6e04c12be05a', 'dm-uuid-LVM-rcT9E5PUfbc9LNMW28fCAUBBadlLkYxWUV2PuYXg8SGZaHizGQ3g7wDUilCwJsOM'], 'uuids': ['acd8aeb3-a919-4889-9ba0-cb5b1d4bdfb1'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'fd096023', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['UV2PuY-Xg8S-GZaH-izGQ-3g7w-DUil-CwJsOM']}}, 'ansible_loop_var': 'item'})  2026-02-08 06:20:56.284603 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-0tVmcc-zy2m-2uFM-aZrn-CnbJ-B058-J5TU15', 'scsi-0QEMU_QEMU_HARDDISK_88e353e1-d5f5-455b-9174-972f0fde258a', 'scsi-SQEMU_QEMU_HARDDISK_88e353e1-d5f5-455b-9174-972f0fde258a'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '88e353e1', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 
'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--b3e05e81--e469--5668--9a53--5e8f92025307-osd--block--b3e05e81--e469--5668--9a53--5e8f92025307']}}, 'ansible_loop_var': 'item'})  2026-02-08 06:20:56.284615 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-08 06:20:56.284641 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0b6d2541-fe07-44e8-aadf-a529695f9c1d', 'scsi-SQEMU_QEMU_HARDDISK_0b6d2541-fe07-44e8-aadf-a529695f9c1d'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '0b6d2541', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0b6d2541-fe07-44e8-aadf-a529695f9c1d-part16', 'scsi-SQEMU_QEMU_HARDDISK_0b6d2541-fe07-44e8-aadf-a529695f9c1d-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_0b6d2541-fe07-44e8-aadf-a529695f9c1d-part14', 'scsi-SQEMU_QEMU_HARDDISK_0b6d2541-fe07-44e8-aadf-a529695f9c1d-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0b6d2541-fe07-44e8-aadf-a529695f9c1d-part15', 'scsi-SQEMU_QEMU_HARDDISK_0b6d2541-fe07-44e8-aadf-a529695f9c1d-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0b6d2541-fe07-44e8-aadf-a529695f9c1d-part1', 'scsi-SQEMU_QEMU_HARDDISK_0b6d2541-fe07-44e8-aadf-a529695f9c1d-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-08 06:20:56.284671 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-08 06:20:56.284681 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-08 06:20:56.284690 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-TrIfEw-7ihz-B97c-b6a1-fBLo-GV7F-MIqFCx', 'dm-uuid-CRYPT-LUKS2-1cd382bbe631457f8e237bd2a48df188-TrIfEw-7ihz-B97c-b6a1-fBLo-GV7F-MIqFCx'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 
'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-08 06:20:56.284705 | orchestrator | skipping: [testbed-node-5] 2026-02-08 06:20:56.284715 | orchestrator | 2026-02-08 06:20:56.284726 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-02-08 06:20:56.284735 | orchestrator | Sunday 08 February 2026 06:20:54 +0000 (0:00:00.435) 0:29:52.854 ******* 2026-02-08 06:20:56.284743 | orchestrator | ok: [testbed-node-5] 2026-02-08 06:20:56.284752 | orchestrator | 2026-02-08 06:20:56.284760 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2026-02-08 06:20:56.284768 | orchestrator | Sunday 08 February 2026 06:20:55 +0000 (0:00:00.483) 0:29:53.337 ******* 2026-02-08 06:20:56.284776 | orchestrator | ok: [testbed-node-5] 2026-02-08 06:20:56.284784 | orchestrator | 2026-02-08 06:20:56.284792 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-02-08 06:20:56.284800 | orchestrator | Sunday 08 February 2026 06:20:55 +0000 (0:00:00.477) 0:29:53.815 ******* 2026-02-08 06:20:56.284808 | orchestrator | ok: [testbed-node-5] 2026-02-08 06:20:56.284816 | orchestrator | 2026-02-08 06:20:56.284824 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-02-08 06:20:56.284837 | orchestrator | Sunday 08 February 2026 06:20:56 +0000 (0:00:00.510) 0:29:54.325 ******* 2026-02-08 06:21:11.258407 | orchestrator | skipping: [testbed-node-5] 2026-02-08 06:21:11.258519 | orchestrator | 2026-02-08 06:21:11.258537 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-02-08 06:21:11.258550 | orchestrator | Sunday 08 February 2026 06:20:56 +0000 (0:00:00.148) 0:29:54.474 ******* 2026-02-08 06:21:11.258562 | orchestrator | skipping: [testbed-node-5] 2026-02-08 
06:21:11.258573 | orchestrator | 2026-02-08 06:21:11.258585 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-02-08 06:21:11.258596 | orchestrator | Sunday 08 February 2026 06:20:56 +0000 (0:00:00.248) 0:29:54.722 ******* 2026-02-08 06:21:11.258607 | orchestrator | skipping: [testbed-node-5] 2026-02-08 06:21:11.258618 | orchestrator | 2026-02-08 06:21:11.258629 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-02-08 06:21:11.258640 | orchestrator | Sunday 08 February 2026 06:20:56 +0000 (0:00:00.135) 0:29:54.858 ******* 2026-02-08 06:21:11.258652 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0) 2026-02-08 06:21:11.258663 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1) 2026-02-08 06:21:11.258674 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2) 2026-02-08 06:21:11.258685 | orchestrator | 2026-02-08 06:21:11.258697 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-02-08 06:21:11.258708 | orchestrator | Sunday 08 February 2026 06:20:57 +0000 (0:00:00.689) 0:29:55.547 ******* 2026-02-08 06:21:11.258719 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2026-02-08 06:21:11.258730 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2026-02-08 06:21:11.258741 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2026-02-08 06:21:11.258752 | orchestrator | skipping: [testbed-node-5] 2026-02-08 06:21:11.258763 | orchestrator | 2026-02-08 06:21:11.258774 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-02-08 06:21:11.258785 | orchestrator | Sunday 08 February 2026 06:20:57 +0000 (0:00:00.169) 0:29:55.716 ******* 2026-02-08 06:21:11.258796 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-5 2026-02-08 06:21:11.258808 | 
2026-02-08 06:21:11.258820 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-02-08 06:21:11.258833 | orchestrator | Sunday 08 February 2026 06:20:57 +0000 (0:00:00.250) 0:29:55.967 *******
2026-02-08 06:21:11.258844 | orchestrator | skipping: [testbed-node-5]
2026-02-08 06:21:11.258855 | orchestrator |
2026-02-08 06:21:11.258866 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-02-08 06:21:11.258877 | orchestrator | Sunday 08 February 2026 06:20:58 +0000 (0:00:00.166) 0:29:56.133 *******
2026-02-08 06:21:11.258913 | orchestrator | skipping: [testbed-node-5]
2026-02-08 06:21:11.258925 | orchestrator |
2026-02-08 06:21:11.258936 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-02-08 06:21:11.258949 | orchestrator | Sunday 08 February 2026 06:20:58 +0000 (0:00:00.153) 0:29:56.287 *******
2026-02-08 06:21:11.258967 | orchestrator | skipping: [testbed-node-5]
2026-02-08 06:21:11.258980 | orchestrator |
2026-02-08 06:21:11.258992 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-02-08 06:21:11.259028 | orchestrator | Sunday 08 February 2026 06:20:58 +0000 (0:00:00.153) 0:29:56.441 *******
2026-02-08 06:21:11.259042 | orchestrator | ok: [testbed-node-5]
2026-02-08 06:21:11.259055 | orchestrator |
2026-02-08 06:21:11.259067 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-02-08 06:21:11.259080 | orchestrator | Sunday 08 February 2026 06:20:58 +0000 (0:00:00.242) 0:29:56.683 *******
2026-02-08 06:21:11.259093 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)
2026-02-08 06:21:11.259106 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)
2026-02-08 06:21:11.259118 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)
2026-02-08 06:21:11.259131 | orchestrator | skipping: [testbed-node-5]
2026-02-08 06:21:11.259144 | orchestrator |
2026-02-08 06:21:11.259157 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-02-08 06:21:11.259169 | orchestrator | Sunday 08 February 2026 06:20:59 +0000 (0:00:01.104) 0:29:57.788 *******
2026-02-08 06:21:11.259182 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)
2026-02-08 06:21:11.259196 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)
2026-02-08 06:21:11.259208 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)
2026-02-08 06:21:11.259220 | orchestrator | skipping: [testbed-node-5]
2026-02-08 06:21:11.259232 | orchestrator |
2026-02-08 06:21:11.259245 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-02-08 06:21:11.259258 | orchestrator | Sunday 08 February 2026 06:21:00 +0000 (0:00:00.428) 0:29:58.216 *******
2026-02-08 06:21:11.259272 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)
2026-02-08 06:21:11.259284 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)
2026-02-08 06:21:11.259297 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)
2026-02-08 06:21:11.259310 | orchestrator | skipping: [testbed-node-5]
2026-02-08 06:21:11.259321 | orchestrator |
2026-02-08 06:21:11.259332 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-02-08 06:21:11.259343 | orchestrator | Sunday 08 February 2026 06:21:00 +0000 (0:00:00.460) 0:29:58.676 *******
2026-02-08 06:21:11.259353 | orchestrator | ok: [testbed-node-5]
2026-02-08 06:21:11.259364 | orchestrator |
2026-02-08 06:21:11.259375 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-02-08 06:21:11.259386 | orchestrator | Sunday 08 February 2026 06:21:00 +0000 (0:00:00.167) 0:29:58.844 *******
2026-02-08 06:21:11.259398 | orchestrator | ok: [testbed-node-5] => (item=0)
2026-02-08 06:21:11.259409 | orchestrator |
2026-02-08 06:21:11.259420 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] **************************************
2026-02-08 06:21:11.259431 | orchestrator | Sunday 08 February 2026 06:21:01 +0000 (0:00:00.382) 0:29:59.226 *******
2026-02-08 06:21:11.259459 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-02-08 06:21:11.259471 | orchestrator | ok: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-08 06:21:11.259482 | orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-08 06:21:11.259493 | orchestrator | ok: [testbed-node-5 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
2026-02-08 06:21:11.259504 | orchestrator | ok: [testbed-node-5 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-02-08 06:21:11.259514 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-5)
2026-02-08 06:21:11.259525 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-02-08 06:21:11.259544 | orchestrator |
2026-02-08 06:21:11.259554 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ********************************
2026-02-08 06:21:11.259565 | orchestrator | Sunday 08 February 2026 06:21:02 +0000 (0:00:00.879) 0:30:00.106 *******
2026-02-08 06:21:11.259576 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-02-08 06:21:11.259587 | orchestrator | ok: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-08 06:21:11.259598 | orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-08 06:21:11.259609 | orchestrator | ok: [testbed-node-5 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
2026-02-08 06:21:11.259619 | orchestrator | ok: [testbed-node-5 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-02-08 06:21:11.259630 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-5)
2026-02-08 06:21:11.259641 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-02-08 06:21:11.259652 | orchestrator |
2026-02-08 06:21:11.259663 | orchestrator | TASK [Stop ceph rgw when upgrading from stable-3.2] ****************************
2026-02-08 06:21:11.259674 | orchestrator | Sunday 08 February 2026 06:21:03 +0000 (0:00:01.644) 0:30:01.750 *******
2026-02-08 06:21:11.259685 | orchestrator | changed: [testbed-node-5]
2026-02-08 06:21:11.259695 | orchestrator |
2026-02-08 06:21:11.259706 | orchestrator | TASK [Stop ceph rgw (pt. 1)] ***************************************************
2026-02-08 06:21:11.259717 | orchestrator | Sunday 08 February 2026 06:21:05 +0000 (0:00:01.301) 0:30:03.052 *******
2026-02-08 06:21:11.259728 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-02-08 06:21:11.259739 | orchestrator |
2026-02-08 06:21:11.259749 | orchestrator | TASK [Stop ceph rgw (pt. 2)] ***************************************************
2026-02-08 06:21:11.259760 | orchestrator | Sunday 08 February 2026 06:21:06 +0000 (0:00:01.983) 0:30:05.036 *******
2026-02-08 06:21:11.259771 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-02-08 06:21:11.259782 | orchestrator |
2026-02-08 06:21:11.259793 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-02-08 06:21:11.259803 | orchestrator | Sunday 08 February 2026 06:21:08 +0000 (0:00:00.208) 0:30:06.337 *******
2026-02-08 06:21:11.259814 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-5
2026-02-08 06:21:11.259825 | orchestrator |
2026-02-08 06:21:11.259836 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-02-08 06:21:11.259847 | orchestrator | Sunday 08 February 2026 06:21:08 +0000 (0:00:00.547) 0:30:06.546 *******
2026-02-08 06:21:11.259857 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-5
2026-02-08 06:21:11.259868 | orchestrator |
2026-02-08 06:21:11.259879 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-02-08 06:21:11.259890 | orchestrator | Sunday 08 February 2026 06:21:09 +0000 (0:00:00.134) 0:30:07.093 *******
2026-02-08 06:21:11.259901 | orchestrator | skipping: [testbed-node-5]
2026-02-08 06:21:11.259912 | orchestrator |
2026-02-08 06:21:11.259923 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-02-08 06:21:11.259934 | orchestrator | Sunday 08 February 2026 06:21:09 +0000 (0:00:00.134) 0:30:07.228 *******
2026-02-08 06:21:11.259945 | orchestrator | ok: [testbed-node-5]
2026-02-08 06:21:11.259955 | orchestrator |
2026-02-08 06:21:11.259966 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-02-08 06:21:11.259977 | orchestrator | Sunday 08 February 2026 06:21:09 +0000 (0:00:00.526) 0:30:07.754 *******
2026-02-08 06:21:11.259988 | orchestrator | ok: [testbed-node-5]
2026-02-08 06:21:11.260017 | orchestrator |
2026-02-08 06:21:11.260028 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-02-08 06:21:11.260046 | orchestrator | Sunday 08 February 2026 06:21:10 +0000 (0:00:00.562) 0:30:08.317 *******
2026-02-08 06:21:11.260057 | orchestrator | ok: [testbed-node-5]
2026-02-08 06:21:11.260068 | orchestrator |
2026-02-08 06:21:11.260079 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-02-08 06:21:11.260090 | orchestrator | Sunday 08 February 2026 06:21:10 +0000 (0:00:00.571) 0:30:08.889 *******
2026-02-08 06:21:11.260100 | orchestrator | skipping: [testbed-node-5]
2026-02-08 06:21:11.260111 | orchestrator |
2026-02-08 06:21:11.260122 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-02-08 06:21:11.260132 | orchestrator | Sunday 08 February 2026 06:21:10 +0000 (0:00:00.140) 0:30:09.029 *******
2026-02-08 06:21:11.260143 | orchestrator | skipping: [testbed-node-5]
2026-02-08 06:21:11.260154 | orchestrator |
2026-02-08 06:21:11.260164 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-02-08 06:21:11.260175 | orchestrator | Sunday 08 February 2026 06:21:11 +0000 (0:00:00.124) 0:30:09.154 *******
2026-02-08 06:21:11.260186 | orchestrator | skipping: [testbed-node-5]
2026-02-08 06:21:11.260197 | orchestrator |
2026-02-08 06:21:11.260208 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-02-08 06:21:11.260225 | orchestrator | Sunday 08 February 2026 06:21:11 +0000 (0:00:00.142) 0:30:09.297 *******
2026-02-08 06:21:22.803276 | orchestrator | ok: [testbed-node-5]
2026-02-08 06:21:22.803386 | orchestrator |
2026-02-08 06:21:22.803401 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-02-08 06:21:22.803413 | orchestrator | Sunday 08 February 2026 06:21:11 +0000 (0:00:00.554) 0:30:09.851 *******
2026-02-08 06:21:22.803423 | orchestrator | ok: [testbed-node-5]
2026-02-08 06:21:22.803433 | orchestrator |
2026-02-08 06:21:22.803444 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-02-08 06:21:22.803453 | orchestrator | Sunday 08 February 2026 06:21:12 +0000 (0:00:00.564) 0:30:10.415 *******
2026-02-08 06:21:22.803463 | orchestrator | skipping: [testbed-node-5]
2026-02-08 06:21:22.803474 | orchestrator |
2026-02-08 06:21:22.803484 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-02-08 06:21:22.803494 | orchestrator | Sunday 08 February 2026 06:21:12 +0000 (0:00:00.152) 0:30:10.568 *******
2026-02-08 06:21:22.803504 | orchestrator | skipping: [testbed-node-5]
2026-02-08 06:21:22.803514 | orchestrator |
2026-02-08 06:21:22.803525 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-02-08 06:21:22.803535 | orchestrator | Sunday 08 February 2026 06:21:12 +0000 (0:00:00.138) 0:30:10.707 *******
2026-02-08 06:21:22.803545 | orchestrator | ok: [testbed-node-5]
2026-02-08 06:21:22.803554 | orchestrator |
2026-02-08 06:21:22.803564 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-02-08 06:21:22.803574 | orchestrator | Sunday 08 February 2026 06:21:13 +0000 (0:00:00.470) 0:30:11.177 *******
2026-02-08 06:21:22.803584 | orchestrator | ok: [testbed-node-5]
2026-02-08 06:21:22.803594 | orchestrator |
2026-02-08 06:21:22.803603 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-02-08 06:21:22.803613 | orchestrator | Sunday 08 February 2026 06:21:13 +0000 (0:00:00.154) 0:30:11.331 *******
2026-02-08 06:21:22.803623 | orchestrator | ok: [testbed-node-5]
2026-02-08 06:21:22.803632 | orchestrator |
2026-02-08 06:21:22.803642 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-02-08 06:21:22.803652 | orchestrator | Sunday 08 February 2026 06:21:13 +0000 (0:00:00.154) 0:30:11.486 *******
2026-02-08 06:21:22.803662 | orchestrator | skipping: [testbed-node-5]
2026-02-08 06:21:22.803672 | orchestrator |
2026-02-08 06:21:22.803682 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-02-08 06:21:22.803691 | orchestrator | Sunday 08 February 2026 06:21:13 +0000 (0:00:00.161) 0:30:11.647 *******
2026-02-08 06:21:22.803701 | orchestrator | skipping: [testbed-node-5]
2026-02-08 06:21:22.803711 | orchestrator |
2026-02-08 06:21:22.803720 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-02-08 06:21:22.803751 | orchestrator | Sunday 08 February 2026 06:21:13 +0000 (0:00:00.129) 0:30:11.777 *******
2026-02-08 06:21:22.803761 | orchestrator | skipping: [testbed-node-5]
2026-02-08 06:21:22.803771 | orchestrator |
2026-02-08 06:21:22.803780 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-02-08 06:21:22.803790 | orchestrator | Sunday 08 February 2026 06:21:13 +0000 (0:00:00.148) 0:30:11.925 *******
2026-02-08 06:21:22.803801 | orchestrator | ok: [testbed-node-5]
2026-02-08 06:21:22.803813 | orchestrator |
2026-02-08 06:21:22.803824 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-02-08 06:21:22.803916 | orchestrator | Sunday 08 February 2026 06:21:14 +0000 (0:00:00.167) 0:30:12.092 *******
2026-02-08 06:21:22.803927 | orchestrator | ok: [testbed-node-5]
2026-02-08 06:21:22.803938 | orchestrator |
2026-02-08 06:21:22.803950 | orchestrator | TASK [ceph-common : Include configure_repository.yml] **************************
2026-02-08 06:21:22.803961 | orchestrator | Sunday 08 February 2026 06:21:14 +0000 (0:00:00.243) 0:30:12.336 *******
2026-02-08 06:21:22.803972 | orchestrator | skipping: [testbed-node-5]
2026-02-08 06:21:22.803983 | orchestrator |
2026-02-08 06:21:22.803995 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] **************
2026-02-08 06:21:22.804032 | orchestrator | Sunday 08 February 2026 06:21:14 +0000 (0:00:00.192) 0:30:12.529 *******
2026-02-08 06:21:22.804044 | orchestrator | skipping: [testbed-node-5]
2026-02-08 06:21:22.804055 | orchestrator |
2026-02-08 06:21:22.804066 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] ****************
2026-02-08 06:21:22.804077 | orchestrator | Sunday 08 February 2026 06:21:14 +0000 (0:00:00.133) 0:30:12.662 *******
2026-02-08 06:21:22.804088 | orchestrator | skipping: [testbed-node-5]
2026-02-08 06:21:22.804100 | orchestrator |
2026-02-08 06:21:22.804111 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ********************
2026-02-08 06:21:22.804122 | orchestrator | Sunday 08 February 2026 06:21:14 +0000 (0:00:00.136) 0:30:12.798 *******
2026-02-08 06:21:22.804133 | orchestrator | skipping: [testbed-node-5]
2026-02-08 06:21:22.804144 | orchestrator |
2026-02-08 06:21:22.804155 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] ***************
2026-02-08 06:21:22.804165 | orchestrator | Sunday 08 February 2026 06:21:14 +0000 (0:00:00.137) 0:30:12.935 *******
2026-02-08 06:21:22.804174 | orchestrator | skipping: [testbed-node-5]
2026-02-08 06:21:22.804184 | orchestrator |
2026-02-08 06:21:22.804194 | orchestrator | TASK [ceph-common : Get ceph version] ******************************************
2026-02-08 06:21:22.804210 | orchestrator | Sunday 08 February 2026 06:21:15 +0000 (0:00:00.137) 0:30:13.072 *******
2026-02-08 06:21:22.804227 | orchestrator | skipping: [testbed-node-5]
2026-02-08 06:21:22.804243 | orchestrator |
2026-02-08 06:21:22.804259 | orchestrator | TASK [ceph-common : Set_fact ceph_version] *************************************
2026-02-08 06:21:22.804276 | orchestrator | Sunday 08 February 2026 06:21:15 +0000 (0:00:00.494) 0:30:13.567 *******
2026-02-08 06:21:22.804292 | orchestrator | skipping: [testbed-node-5]
2026-02-08 06:21:22.804303 | orchestrator |
2026-02-08 06:21:22.804313 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] ***
2026-02-08 06:21:22.804323 | orchestrator | Sunday 08 February 2026 06:21:15 +0000 (0:00:00.134) 0:30:13.701 *******
2026-02-08 06:21:22.804333 | orchestrator | skipping: [testbed-node-5]
2026-02-08 06:21:22.804342 | orchestrator |
2026-02-08 06:21:22.804352 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] *************************
2026-02-08 06:21:22.804361 | orchestrator | Sunday 08 February 2026 06:21:15 +0000 (0:00:00.152) 0:30:13.854 *******
2026-02-08 06:21:22.804371 | orchestrator | skipping: [testbed-node-5]
2026-02-08 06:21:22.804380 | orchestrator |
2026-02-08 06:21:22.804408 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************
2026-02-08 06:21:22.804418 | orchestrator | Sunday 08 February 2026 06:21:15 +0000 (0:00:00.128) 0:30:13.983 *******
2026-02-08 06:21:22.804428 | orchestrator | skipping: [testbed-node-5]
2026-02-08 06:21:22.804437 | orchestrator |
2026-02-08 06:21:22.804447 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ********************
2026-02-08 06:21:22.804467 | orchestrator | Sunday 08 February 2026 06:21:16 +0000 (0:00:00.137) 0:30:14.121 *******
2026-02-08 06:21:22.804477 | orchestrator | skipping: [testbed-node-5]
2026-02-08 06:21:22.804486 | orchestrator |
2026-02-08 06:21:22.804496 | orchestrator | TASK [ceph-common : Include selinux.yml] ***************************************
2026-02-08 06:21:22.804505 | orchestrator | Sunday 08 February 2026 06:21:16 +0000 (0:00:00.137) 0:30:14.258 *******
2026-02-08 06:21:22.804515 | orchestrator | skipping: [testbed-node-5]
2026-02-08 06:21:22.804524 | orchestrator |
2026-02-08 06:21:22.804534 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] ***************
2026-02-08 06:21:22.804543 | orchestrator | Sunday 08 February 2026 06:21:16 +0000 (0:00:00.211) 0:30:14.470 *******
2026-02-08 06:21:22.804553 | orchestrator | ok: [testbed-node-5]
2026-02-08 06:21:22.804563 | orchestrator |
2026-02-08 06:21:22.804573 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ******************************
2026-02-08 06:21:22.804582 | orchestrator | Sunday 08 February 2026 06:21:17 +0000 (0:00:00.901) 0:30:15.371 *******
2026-02-08 06:21:22.804592 | orchestrator | ok: [testbed-node-5]
2026-02-08 06:21:22.804601 | orchestrator |
2026-02-08 06:21:22.804611 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] ***********************
2026-02-08 06:21:22.804621 | orchestrator | Sunday 08 February 2026 06:21:18 +0000 (0:00:01.276) 0:30:16.648 *******
2026-02-08 06:21:22.804630 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-5
2026-02-08 06:21:22.804641 | orchestrator |
2026-02-08 06:21:22.804650 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************
2026-02-08 06:21:22.804660 | orchestrator | Sunday 08 February 2026 06:21:18 +0000 (0:00:00.232) 0:30:16.881 *******
2026-02-08 06:21:22.804670 | orchestrator | skipping: [testbed-node-5]
2026-02-08 06:21:22.804680 | orchestrator |
2026-02-08 06:21:22.804689 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] ****************
2026-02-08 06:21:22.804699 | orchestrator | Sunday 08 February 2026 06:21:18 +0000 (0:00:00.139) 0:30:17.021 *******
2026-02-08 06:21:22.804708 | orchestrator | skipping: [testbed-node-5]
2026-02-08 06:21:22.804718 | orchestrator |
2026-02-08 06:21:22.804727 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] **************************
2026-02-08 06:21:22.804737 | orchestrator | Sunday 08 February 2026 06:21:19 +0000 (0:00:00.473) 0:30:17.494 *******
2026-02-08 06:21:22.804746 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-02-08 06:21:22.804756 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-02-08 06:21:22.804766 | orchestrator |
2026-02-08 06:21:22.804776 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ********************
2026-02-08 06:21:22.804785 | orchestrator | Sunday 08 February 2026 06:21:20 +0000 (0:00:00.856) 0:30:18.350 *******
2026-02-08 06:21:22.804795 | orchestrator | ok: [testbed-node-5]
2026-02-08 06:21:22.804804 | orchestrator |
2026-02-08 06:21:22.804814 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************
2026-02-08 06:21:22.804823 | orchestrator | Sunday 08 February 2026 06:21:20 +0000 (0:00:00.492) 0:30:18.843 *******
2026-02-08 06:21:22.804833 | orchestrator | skipping: [testbed-node-5]
2026-02-08 06:21:22.804842 | orchestrator |
2026-02-08 06:21:22.804852 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ********************
2026-02-08 06:21:22.804861 | orchestrator | Sunday 08 February 2026 06:21:20 +0000 (0:00:00.162) 0:30:19.005 *******
2026-02-08 06:21:22.804871 | orchestrator | skipping: [testbed-node-5]
2026-02-08 06:21:22.804880 | orchestrator |
2026-02-08 06:21:22.804890 | orchestrator | TASK [ceph-container-common : Include registry.yml] ****************************
2026-02-08 06:21:22.804900 | orchestrator | Sunday 08 February 2026 06:21:21 +0000 (0:00:00.184) 0:30:19.190 *******
2026-02-08 06:21:22.804909 | orchestrator | skipping: [testbed-node-5]
2026-02-08 06:21:22.804919 | orchestrator |
2026-02-08 06:21:22.804928 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] *************************
2026-02-08 06:21:22.804938 | orchestrator | Sunday 08 February 2026 06:21:21 +0000 (0:00:00.153) 0:30:19.344 *******
2026-02-08 06:21:22.804955 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-5
2026-02-08 06:21:22.804964 | orchestrator |
2026-02-08 06:21:22.804974 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ********************
2026-02-08 06:21:22.804984 | orchestrator | Sunday 08 February 2026 06:21:21 +0000 (0:00:00.244) 0:30:19.589 *******
2026-02-08 06:21:22.804993 | orchestrator | ok: [testbed-node-5]
2026-02-08 06:21:22.805035 | orchestrator |
2026-02-08 06:21:22.805046 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] ***
2026-02-08 06:21:22.805056 | orchestrator | Sunday 08 February 2026 06:21:22 +0000 (0:00:00.725) 0:30:20.314 *******
2026-02-08 06:21:22.805066 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-02-08 06:21:22.805075 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/prometheus:v2.7.2)
2026-02-08 06:21:22.805085 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/grafana/grafana:6.7.4)
2026-02-08 06:21:22.805094 | orchestrator | skipping: [testbed-node-5]
2026-02-08 06:21:22.805104 | orchestrator |
2026-02-08 06:21:22.805113 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] ***********
2026-02-08 06:21:22.805123 | orchestrator | Sunday 08 February 2026 06:21:22 +0000 (0:00:00.146) 0:30:20.460 *******
2026-02-08 06:21:22.805132 | orchestrator | skipping: [testbed-node-5]
2026-02-08 06:21:22.805142 | orchestrator |
2026-02-08 06:21:22.805151 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] *********************
2026-02-08 06:21:22.805161 | orchestrator | Sunday 08 February 2026 06:21:22 +0000 (0:00:00.136) 0:30:20.597 *******
2026-02-08 06:21:22.805170 | orchestrator | skipping: [testbed-node-5]
2026-02-08 06:21:22.805180 | orchestrator |
2026-02-08 06:21:22.805196 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************
2026-02-08 06:21:40.759997 | orchestrator | Sunday 08 February 2026 06:21:22 +0000 (0:00:00.241) 0:30:20.839 *******
2026-02-08 06:21:40.760133 | orchestrator | skipping: [testbed-node-5]
2026-02-08 06:21:40.760144 | orchestrator |
2026-02-08 06:21:40.760152 | orchestrator | TASK [ceph-container-common : Load ceph dev image] *****************************
2026-02-08 06:21:40.760159 | orchestrator | Sunday 08 February 2026 06:21:22 +0000 (0:00:00.158) 0:30:20.997 *******
2026-02-08 06:21:40.760227 | orchestrator | skipping: [testbed-node-5]
2026-02-08 06:21:40.760237 | orchestrator |
2026-02-08 06:21:40.760243 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ******************
2026-02-08 06:21:40.760251 | orchestrator | Sunday 08 February 2026 06:21:23 +0000 (0:00:00.485) 0:30:21.483 *******
2026-02-08 06:21:40.760257 | orchestrator | skipping: [testbed-node-5]
2026-02-08 06:21:40.760264 | orchestrator |
2026-02-08 06:21:40.760271 | orchestrator | TASK [ceph-container-common : Get ceph version] ********************************
2026-02-08 06:21:40.760278 | orchestrator | Sunday 08 February 2026 06:21:23 +0000 (0:00:00.161) 0:30:21.644 *******
2026-02-08 06:21:40.760285 | orchestrator | ok: [testbed-node-5]
2026-02-08 06:21:40.760292 | orchestrator |
2026-02-08 06:21:40.760299 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] ***
2026-02-08 06:21:40.760306 | orchestrator | Sunday 08 February 2026 06:21:25 +0000 (0:00:01.620) 0:30:23.264 *******
2026-02-08 06:21:40.760313 | orchestrator | ok: [testbed-node-5]
2026-02-08 06:21:40.760319 | orchestrator |
2026-02-08 06:21:40.760326 | orchestrator | TASK [ceph-container-common : Include release.yml] *****************************
2026-02-08 06:21:40.760332 | orchestrator | Sunday 08 February 2026 06:21:25 +0000 (0:00:00.139) 0:30:23.403 *******
2026-02-08 06:21:40.760339 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-5
2026-02-08 06:21:40.760346 | orchestrator |
2026-02-08 06:21:40.760352 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] *********************
2026-02-08 06:21:40.760358 | orchestrator | Sunday 08 February 2026 06:21:25 +0000 (0:00:00.250) 0:30:23.653 *******
2026-02-08 06:21:40.760365 | orchestrator | skipping: [testbed-node-5]
2026-02-08 06:21:40.760371 | orchestrator |
2026-02-08 06:21:40.760399 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ********************
2026-02-08 06:21:40.760404 | orchestrator | Sunday 08 February 2026 06:21:25 +0000 (0:00:00.148) 0:30:23.802 *******
2026-02-08 06:21:40.760408 | orchestrator | skipping: [testbed-node-5]
2026-02-08 06:21:40.760411 | orchestrator |
2026-02-08 06:21:40.760415 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ******************
2026-02-08 06:21:40.760419 | orchestrator | Sunday 08 February 2026 06:21:25 +0000 (0:00:00.154) 0:30:23.957 *******
2026-02-08 06:21:40.760423 | orchestrator | skipping: [testbed-node-5]
2026-02-08 06:21:40.760427 | orchestrator |
2026-02-08 06:21:40.760431 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] *********************
2026-02-08 06:21:40.760435 | orchestrator | Sunday 08 February 2026 06:21:26 +0000 (0:00:00.169) 0:30:24.126 *******
2026-02-08 06:21:40.760438 | orchestrator | skipping: [testbed-node-5]
2026-02-08 06:21:40.760442 | orchestrator |
2026-02-08 06:21:40.760446 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ******************
2026-02-08 06:21:40.760450 | orchestrator | Sunday 08 February 2026 06:21:26 +0000 (0:00:00.153) 0:30:24.280 *******
2026-02-08 06:21:40.760453 | orchestrator | skipping: [testbed-node-5]
2026-02-08 06:21:40.760457 | orchestrator |
2026-02-08 06:21:40.760461 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] *******************
2026-02-08 06:21:40.760465 | orchestrator | Sunday 08 February 2026 06:21:26 +0000 (0:00:00.162) 0:30:24.443 *******
2026-02-08 06:21:40.760468 | orchestrator | skipping: [testbed-node-5]
2026-02-08 06:21:40.760472 | orchestrator |
2026-02-08 06:21:40.760476 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] *******************
2026-02-08 06:21:40.760479 | orchestrator | Sunday 08 February 2026 06:21:26 +0000 (0:00:00.151) 0:30:24.595 *******
2026-02-08 06:21:40.760483 | orchestrator | skipping: [testbed-node-5]
2026-02-08 06:21:40.760487 | orchestrator |
2026-02-08 06:21:40.760491 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ********************
2026-02-08 06:21:40.760494 | orchestrator | Sunday 08 February 2026 06:21:26 +0000 (0:00:00.170) 0:30:24.765 *******
2026-02-08 06:21:40.760498 | orchestrator | skipping: [testbed-node-5]
2026-02-08 06:21:40.760502 | orchestrator |
2026-02-08 06:21:40.760505 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] **********************
2026-02-08 06:21:40.760509 | orchestrator | Sunday 08 February 2026 06:21:27 +0000 (0:00:00.499) 0:30:25.265 *******
2026-02-08 06:21:40.760513 | orchestrator | ok: [testbed-node-5]
2026-02-08 06:21:40.760517 | orchestrator |
2026-02-08 06:21:40.760520 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] **********************
2026-02-08 06:21:40.760524 | orchestrator | Sunday 08 February 2026 06:21:27 +0000 (0:00:00.231) 0:30:25.496 *******
2026-02-08 06:21:40.760528 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-5
2026-02-08 06:21:40.760533 | orchestrator |
2026-02-08 06:21:40.760537 | orchestrator | TASK [ceph-config : Create ceph initial directories] ***************************
2026-02-08 06:21:40.760540 | orchestrator | Sunday 08 February 2026 06:21:27 +0000 (0:00:00.205) 0:30:25.701 *******
2026-02-08 06:21:40.760544 | orchestrator | ok: [testbed-node-5] => (item=/etc/ceph)
2026-02-08 06:21:40.760548 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/)
2026-02-08 06:21:40.760552 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/mon)
2026-02-08 06:21:40.760556 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/osd)
2026-02-08 06:21:40.760560 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/mds)
2026-02-08 06:21:40.760563 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/tmp)
2026-02-08 06:21:40.760567 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/crash)
2026-02-08 06:21:40.760571 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/radosgw)
2026-02-08 06:21:40.760576 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rgw)
2026-02-08 06:21:40.760580 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mgr)
2026-02-08 06:21:40.760583 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds)
2026-02-08 06:21:40.760604 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd)
2026-02-08 06:21:40.760608 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd)
2026-02-08 06:21:40.760612 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-02-08 06:21:40.760615 | orchestrator | ok: [testbed-node-5] => (item=/var/run/ceph)
2026-02-08 06:21:40.760619 | orchestrator | ok: [testbed-node-5] => (item=/var/log/ceph)
2026-02-08 06:21:40.760623 | orchestrator |
2026-02-08 06:21:40.760627 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************
2026-02-08 06:21:40.760631 | orchestrator | Sunday 08 February 2026 06:21:33 +0000 (0:00:05.652) 0:30:31.354 *******
2026-02-08 06:21:40.760635 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-5
2026-02-08 06:21:40.760638 | orchestrator |
2026-02-08 06:21:40.760642 | orchestrator | TASK [ceph-config : Create rados gateway instance directories] *****************
2026-02-08 06:21:40.760646 | orchestrator | Sunday 08 February 2026 06:21:33 +0000 (0:00:00.222) 0:30:31.577 *******
2026-02-08 06:21:40.760650 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-02-08 06:21:40.760654 | orchestrator |
2026-02-08 06:21:40.760658 | orchestrator | TASK [ceph-config : Generate environment file] *********************************
2026-02-08 06:21:40.760662 | orchestrator | Sunday 08 February 2026 06:21:34 +0000 (0:00:00.505) 0:30:32.082 *******
2026-02-08 06:21:40.760665 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-02-08 06:21:40.760669 | orchestrator |
2026-02-08 06:21:40.760673 | orchestrator | TASK [ceph-config : Reset num_osds] ********************************************
2026-02-08 06:21:40.760677 | orchestrator | Sunday 08 February 2026 06:21:35 +0000 (0:00:00.992) 0:30:33.074 *******
2026-02-08 06:21:40.760681 | orchestrator | skipping: [testbed-node-5]
2026-02-08 06:21:40.760684 | orchestrator |
2026-02-08 06:21:40.760688 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] *********************
2026-02-08 06:21:40.760692 | orchestrator | Sunday 08 February 2026 06:21:35 +0000 (0:00:00.139) 0:30:33.213 *******
2026-02-08 06:21:40.760696 | orchestrator | skipping: [testbed-node-5]
2026-02-08 06:21:40.760700 | orchestrator |
2026-02-08 06:21:40.760703 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ******************
2026-02-08 06:21:40.760707 | orchestrator | Sunday 08 February 2026 06:21:35 +0000 (0:00:00.156) 0:30:33.370 *******
2026-02-08 06:21:40.760711 | orchestrator | skipping: [testbed-node-5]
2026-02-08 06:21:40.760715 | orchestrator |
2026-02-08 06:21:40.760718 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] *********************************
2026-02-08 06:21:40.760722 | orchestrator | Sunday 08 February 2026 06:21:35 +0000 (0:00:00.165) 0:30:33.535 *******
2026-02-08 06:21:40.760726 | orchestrator | skipping: [testbed-node-5]
2026-02-08 06:21:40.760730 | orchestrator |
2026-02-08 06:21:40.760733 | orchestrator | TASK [ceph-config : Set_fact _devices] *****************************************
2026-02-08 06:21:40.760739 | orchestrator | Sunday 08 February 2026 06:21:35 +0000 (0:00:00.431) 0:30:33.967 *******
2026-02-08 06:21:40.760744 | orchestrator | skipping: [testbed-node-5]
2026-02-08 06:21:40.760750 | orchestrator |
2026-02-08 06:21:40.760756 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] ***
2026-02-08 06:21:40.760762 | orchestrator | Sunday 08 February 2026 06:21:36 +0000 (0:00:00.135) 0:30:34.102 *******
2026-02-08 06:21:40.760768 | orchestrator | skipping: [testbed-node-5]
2026-02-08 06:21:40.760774 | orchestrator |
2026-02-08 06:21:40.760780 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] ***
2026-02-08 06:21:40.760786 | orchestrator | Sunday 08 February 2026 06:21:36 +0000 (0:00:00.134) 0:30:34.237 *******
2026-02-08 06:21:40.760792 | orchestrator | skipping: [testbed-node-5]
2026-02-08 06:21:40.760799 | orchestrator |
2026-02-08 06:21:40.760804 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] ***
2026-02-08 06:21:40.760812 | orchestrator | Sunday 08 February 2026 06:21:36 +0000 (0:00:00.159) 0:30:34.397 *******
2026-02-08 06:21:40.760816 | orchestrator | skipping: [testbed-node-5]
2026-02-08 06:21:40.760820 | orchestrator |
2026-02-08 06:21:40.760824 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] ***
2026-02-08 06:21:40.760828 | orchestrator | Sunday 08 February 2026 06:21:36 +0000 (0:00:00.154) 0:30:34.552 *******
2026-02-08 06:21:40.760832 | orchestrator | skipping: [testbed-node-5]
2026-02-08 06:21:40.760836 | orchestrator |
2026-02-08 06:21:40.760840 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] *********************
2026-02-08 06:21:40.760844 | orchestrator | Sunday 08 February 2026 06:21:36 +0000 (0:00:00.142) 0:30:34.694 *******
2026-02-08 06:21:40.760847 | orchestrator | skipping: [testbed-node-5]
2026-02-08 06:21:40.760851 | orchestrator |
2026-02-08 06:21:40.760855 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] *******************************
2026-02-08 06:21:40.760859 | orchestrator | Sunday 08 February 2026 06:21:36 +0000 (0:00:00.142) 0:30:34.837 *******
2026-02-08 06:21:40.760862 | orchestrator | skipping: [testbed-node-5]
2026-02-08 06:21:40.760866 | orchestrator |
2026-02-08 06:21:40.760870 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] **************
2026-02-08 06:21:40.760873 | orchestrator | Sunday 08 February 2026 06:21:36 +0000 (0:00:00.147) 0:30:34.984 *******
2026-02-08 06:21:40.760877 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)]
2026-02-08 06:21:40.760881 | orchestrator |
2026-02-08 06:21:40.760885 | orchestrator | TASK [ceph-config : Render rgw configs] ****************************************
2026-02-08 06:21:40.760888 | orchestrator | Sunday 08 February 2026 06:21:40 +0000 (0:00:03.603) 0:30:38.588 *******
ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-02-08 06:21:40.760896 | orchestrator | 2026-02-08 06:21:40.760902 | orchestrator | TASK [ceph-config : Set config to cluster] ************************************* 2026-02-08 06:22:03.475550 | orchestrator | Sunday 08 February 2026 06:21:40 +0000 (0:00:00.210) 0:30:38.798 ******* 2026-02-08 06:22:03.475665 | orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log'}]) 2026-02-08 06:22:03.475686 | orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.15:8081'}]) 2026-02-08 06:22:03.475700 | orchestrator | 2026-02-08 06:22:03.475714 | orchestrator | TASK [ceph-config : Set rgw configs to file] *********************************** 2026-02-08 06:22:03.475725 | orchestrator | Sunday 08 February 2026 06:21:44 +0000 (0:00:03.836) 0:30:42.635 ******* 2026-02-08 06:22:03.475737 | orchestrator | skipping: [testbed-node-5] 2026-02-08 06:22:03.475749 | orchestrator | 2026-02-08 06:22:03.475761 | orchestrator | TASK [ceph-config : Create ceph conf directory] ******************************** 2026-02-08 06:22:03.475772 | orchestrator | Sunday 08 February 2026 06:21:44 +0000 (0:00:00.134) 0:30:42.770 ******* 2026-02-08 06:22:03.475784 | orchestrator | skipping: [testbed-node-5] 2026-02-08 06:22:03.475796 | orchestrator | 2026-02-08 06:22:03.475808 | orchestrator | TASK [ceph-facts : Set current 
radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-02-08 06:22:03.475820 | orchestrator | Sunday 08 February 2026 06:21:44 +0000 (0:00:00.146) 0:30:42.917 ******* 2026-02-08 06:22:03.475832 | orchestrator | skipping: [testbed-node-5] 2026-02-08 06:22:03.475842 | orchestrator | 2026-02-08 06:22:03.475854 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-02-08 06:22:03.475888 | orchestrator | Sunday 08 February 2026 06:21:45 +0000 (0:00:00.157) 0:30:43.074 ******* 2026-02-08 06:22:03.475899 | orchestrator | skipping: [testbed-node-5] 2026-02-08 06:22:03.475911 | orchestrator | 2026-02-08 06:22:03.475922 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-02-08 06:22:03.475933 | orchestrator | Sunday 08 February 2026 06:21:45 +0000 (0:00:00.503) 0:30:43.578 ******* 2026-02-08 06:22:03.475943 | orchestrator | skipping: [testbed-node-5] 2026-02-08 06:22:03.475954 | orchestrator | 2026-02-08 06:22:03.475965 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-02-08 06:22:03.475976 | orchestrator | Sunday 08 February 2026 06:21:45 +0000 (0:00:00.166) 0:30:43.745 ******* 2026-02-08 06:22:03.475987 | orchestrator | ok: [testbed-node-5] 2026-02-08 06:22:03.476000 | orchestrator | 2026-02-08 06:22:03.476065 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-02-08 06:22:03.476077 | orchestrator | Sunday 08 February 2026 06:21:45 +0000 (0:00:00.263) 0:30:44.008 ******* 2026-02-08 06:22:03.476092 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2026-02-08 06:22:03.476107 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2026-02-08 06:22:03.476120 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2026-02-08 06:22:03.476133 | orchestrator | skipping: 
[testbed-node-5] 2026-02-08 06:22:03.476146 | orchestrator | 2026-02-08 06:22:03.476159 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-02-08 06:22:03.476172 | orchestrator | Sunday 08 February 2026 06:21:46 +0000 (0:00:00.494) 0:30:44.503 ******* 2026-02-08 06:22:03.476185 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2026-02-08 06:22:03.476199 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2026-02-08 06:22:03.476211 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2026-02-08 06:22:03.476224 | orchestrator | skipping: [testbed-node-5] 2026-02-08 06:22:03.476237 | orchestrator | 2026-02-08 06:22:03.476251 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-02-08 06:22:03.476265 | orchestrator | Sunday 08 February 2026 06:21:46 +0000 (0:00:00.434) 0:30:44.937 ******* 2026-02-08 06:22:03.476279 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2026-02-08 06:22:03.476292 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2026-02-08 06:22:03.476305 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2026-02-08 06:22:03.476318 | orchestrator | skipping: [testbed-node-5] 2026-02-08 06:22:03.476330 | orchestrator | 2026-02-08 06:22:03.476345 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-02-08 06:22:03.476358 | orchestrator | Sunday 08 February 2026 06:21:47 +0000 (0:00:00.416) 0:30:45.354 ******* 2026-02-08 06:22:03.476371 | orchestrator | ok: [testbed-node-5] 2026-02-08 06:22:03.476385 | orchestrator | 2026-02-08 06:22:03.476398 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-02-08 06:22:03.476409 | orchestrator | Sunday 08 February 2026 06:21:47 +0000 (0:00:00.178) 0:30:45.533 ******* 2026-02-08 06:22:03.476420 | orchestrator | ok: 
[testbed-node-5] => (item=0) 2026-02-08 06:22:03.476431 | orchestrator | 2026-02-08 06:22:03.476442 | orchestrator | TASK [ceph-config : Generate Ceph file] **************************************** 2026-02-08 06:22:03.476454 | orchestrator | Sunday 08 February 2026 06:21:47 +0000 (0:00:00.471) 0:30:46.005 ******* 2026-02-08 06:22:03.476465 | orchestrator | ok: [testbed-node-5] 2026-02-08 06:22:03.476476 | orchestrator | 2026-02-08 06:22:03.476486 | orchestrator | TASK [ceph-rgw : Include common.yml] ******************************************* 2026-02-08 06:22:03.476497 | orchestrator | Sunday 08 February 2026 06:21:48 +0000 (0:00:00.821) 0:30:46.826 ******* 2026-02-08 06:22:03.476509 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/common.yml for testbed-node-5 2026-02-08 06:22:03.476520 | orchestrator | 2026-02-08 06:22:03.476549 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2026-02-08 06:22:03.476561 | orchestrator | Sunday 08 February 2026 06:21:48 +0000 (0:00:00.197) 0:30:47.024 ******* 2026-02-08 06:22:03.476581 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-08 06:22:03.476592 | orchestrator | skipping: [testbed-node-5] => (item=None)  2026-02-08 06:22:03.476603 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2026-02-08 06:22:03.476614 | orchestrator | 2026-02-08 06:22:03.476625 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2026-02-08 06:22:03.476636 | orchestrator | Sunday 08 February 2026 06:21:52 +0000 (0:00:03.047) 0:30:50.071 ******* 2026-02-08 06:22:03.476647 | orchestrator | ok: [testbed-node-5] => (item=None) 2026-02-08 06:22:03.476658 | orchestrator | skipping: [testbed-node-5] => (item=None)  2026-02-08 06:22:03.476669 | orchestrator | ok: [testbed-node-5] 2026-02-08 06:22:03.476680 | orchestrator | 2026-02-08 06:22:03.476692 | orchestrator | TASK [ceph-rgw : Copy 
SSL certificate & key data to certificate path] ********** 2026-02-08 06:22:03.476703 | orchestrator | Sunday 08 February 2026 06:21:53 +0000 (0:00:00.989) 0:30:51.061 ******* 2026-02-08 06:22:03.476714 | orchestrator | skipping: [testbed-node-5] 2026-02-08 06:22:03.476725 | orchestrator | 2026-02-08 06:22:03.476736 | orchestrator | TASK [ceph-rgw : Include_tasks pre_requisite.yml] ****************************** 2026-02-08 06:22:03.476747 | orchestrator | Sunday 08 February 2026 06:21:53 +0000 (0:00:00.141) 0:30:51.203 ******* 2026-02-08 06:22:03.476758 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/pre_requisite.yml for testbed-node-5 2026-02-08 06:22:03.476770 | orchestrator | 2026-02-08 06:22:03.476781 | orchestrator | TASK [ceph-rgw : Create rados gateway directories] ***************************** 2026-02-08 06:22:03.476792 | orchestrator | Sunday 08 February 2026 06:21:53 +0000 (0:00:00.209) 0:30:51.412 ******* 2026-02-08 06:22:03.476804 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-02-08 06:22:03.476816 | orchestrator | 2026-02-08 06:22:03.476827 | orchestrator | TASK [ceph-rgw : Create rgw keyrings] ****************************************** 2026-02-08 06:22:03.476838 | orchestrator | Sunday 08 February 2026 06:21:53 +0000 (0:00:00.616) 0:30:52.029 ******* 2026-02-08 06:22:03.476849 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-08 06:22:03.476860 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2026-02-08 06:22:03.476871 | orchestrator | 2026-02-08 06:22:03.476883 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2026-02-08 06:22:03.476894 | orchestrator | Sunday 08 February 2026 06:21:58 +0000 (0:00:04.101) 0:30:56.130 ******* 
2026-02-08 06:22:03.476905 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-08 06:22:03.476916 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2026-02-08 06:22:03.476927 | orchestrator | 2026-02-08 06:22:03.476938 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2026-02-08 06:22:03.476949 | orchestrator | Sunday 08 February 2026 06:22:00 +0000 (0:00:02.159) 0:30:58.290 ******* 2026-02-08 06:22:03.476960 | orchestrator | ok: [testbed-node-5] => (item=None) 2026-02-08 06:22:03.476971 | orchestrator | ok: [testbed-node-5] 2026-02-08 06:22:03.476982 | orchestrator | 2026-02-08 06:22:03.476993 | orchestrator | TASK [ceph-rgw : Rgw pool creation tasks] ************************************** 2026-02-08 06:22:03.477004 | orchestrator | Sunday 08 February 2026 06:22:01 +0000 (0:00:01.042) 0:30:59.332 ******* 2026-02-08 06:22:03.477044 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/rgw_create_pools.yml for testbed-node-5 2026-02-08 06:22:03.477056 | orchestrator | 2026-02-08 06:22:03.477067 | orchestrator | TASK [ceph-rgw : Create ec profile] ******************************************** 2026-02-08 06:22:03.477078 | orchestrator | Sunday 08 February 2026 06:22:01 +0000 (0:00:00.252) 0:30:59.585 ******* 2026-02-08 06:22:03.477089 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-08 06:22:03.477109 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-08 06:22:03.477121 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-08 06:22:03.477132 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 
'size': 3, 'type': 'replicated'}})  2026-02-08 06:22:03.477143 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-08 06:22:03.477154 | orchestrator | skipping: [testbed-node-5] 2026-02-08 06:22:03.477165 | orchestrator | 2026-02-08 06:22:03.477176 | orchestrator | TASK [ceph-rgw : Set crush rule] *********************************************** 2026-02-08 06:22:03.477187 | orchestrator | Sunday 08 February 2026 06:22:02 +0000 (0:00:00.987) 0:31:00.573 ******* 2026-02-08 06:22:03.477198 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-08 06:22:03.477209 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-08 06:22:03.477220 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-08 06:22:03.477238 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-08 06:22:59.622320 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-08 06:22:59.622435 | orchestrator | skipping: [testbed-node-5] 2026-02-08 06:22:59.622453 | orchestrator | 2026-02-08 06:22:59.622467 | orchestrator | TASK [ceph-rgw : Create rgw pools] ********************************************* 2026-02-08 06:22:59.622480 | orchestrator | Sunday 08 February 2026 06:22:03 +0000 (0:00:00.938) 0:31:01.511 ******* 2026-02-08 06:22:59.622492 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-02-08 06:22:59.622505 
| orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-02-08 06:22:59.622516 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-02-08 06:22:59.622527 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-02-08 06:22:59.622540 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-02-08 06:22:59.622551 | orchestrator | 2026-02-08 06:22:59.622562 | orchestrator | TASK [ceph-rgw : Include_tasks openstack-keystone.yml] ************************* 2026-02-08 06:22:59.622574 | orchestrator | Sunday 08 February 2026 06:22:35 +0000 (0:00:32.025) 0:31:33.536 ******* 2026-02-08 06:22:59.622585 | orchestrator | skipping: [testbed-node-5] 2026-02-08 06:22:59.622596 | orchestrator | 2026-02-08 06:22:59.622607 | orchestrator | TASK [ceph-rgw : Include_tasks start_radosgw.yml] ****************************** 2026-02-08 06:22:59.622618 | orchestrator | Sunday 08 February 2026 06:22:35 +0000 (0:00:00.144) 0:31:33.681 ******* 2026-02-08 06:22:59.622629 | orchestrator | skipping: [testbed-node-5] 2026-02-08 06:22:59.622640 | orchestrator | 2026-02-08 06:22:59.622651 | orchestrator | TASK [ceph-rgw : Include start_docker_rgw.yml] ********************************* 2026-02-08 06:22:59.622662 | orchestrator | Sunday 08 February 2026 06:22:35 +0000 (0:00:00.140) 0:31:33.821 ******* 2026-02-08 06:22:59.622674 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/start_docker_rgw.yml for testbed-node-5 2026-02-08 06:22:59.622716 | orchestrator | 2026-02-08 06:22:59.622731 | orchestrator | TASK [ceph-rgw : Include_task 
systemd.yml] ************************************* 2026-02-08 06:22:59.622742 | orchestrator | Sunday 08 February 2026 06:22:35 +0000 (0:00:00.209) 0:31:34.031 ******* 2026-02-08 06:22:59.622753 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/systemd.yml for testbed-node-5 2026-02-08 06:22:59.622763 | orchestrator | 2026-02-08 06:22:59.622774 | orchestrator | TASK [ceph-rgw : Generate systemd unit file] *********************************** 2026-02-08 06:22:59.622785 | orchestrator | Sunday 08 February 2026 06:22:36 +0000 (0:00:00.222) 0:31:34.253 ******* 2026-02-08 06:22:59.622796 | orchestrator | ok: [testbed-node-5] 2026-02-08 06:22:59.622808 | orchestrator | 2026-02-08 06:22:59.622819 | orchestrator | TASK [ceph-rgw : Generate systemd ceph-radosgw target file] ******************** 2026-02-08 06:22:59.622829 | orchestrator | Sunday 08 February 2026 06:22:37 +0000 (0:00:01.080) 0:31:35.333 ******* 2026-02-08 06:22:59.622840 | orchestrator | ok: [testbed-node-5] 2026-02-08 06:22:59.622851 | orchestrator | 2026-02-08 06:22:59.622862 | orchestrator | TASK [ceph-rgw : Enable ceph-radosgw.target] *********************************** 2026-02-08 06:22:59.622875 | orchestrator | Sunday 08 February 2026 06:22:38 +0000 (0:00:00.927) 0:31:36.261 ******* 2026-02-08 06:22:59.622888 | orchestrator | ok: [testbed-node-5] 2026-02-08 06:22:59.622901 | orchestrator | 2026-02-08 06:22:59.622913 | orchestrator | TASK [ceph-rgw : Systemd start rgw container] ********************************** 2026-02-08 06:22:59.622926 | orchestrator | Sunday 08 February 2026 06:22:39 +0000 (0:00:01.212) 0:31:37.473 ******* 2026-02-08 06:22:59.622939 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-02-08 06:22:59.622952 | orchestrator | 2026-02-08 06:22:59.622964 | orchestrator | PLAY [Upgrade ceph rbd mirror node] ******************************************** 2026-02-08 06:22:59.622977 | 
orchestrator | skipping: no hosts matched 2026-02-08 06:22:59.622989 | orchestrator | 2026-02-08 06:22:59.623002 | orchestrator | PLAY [Upgrade ceph nfs node] *************************************************** 2026-02-08 06:22:59.623015 | orchestrator | skipping: no hosts matched 2026-02-08 06:22:59.623208 | orchestrator | 2026-02-08 06:22:59.623228 | orchestrator | PLAY [Upgrade ceph client node] ************************************************ 2026-02-08 06:22:59.623239 | orchestrator | skipping: no hosts matched 2026-02-08 06:22:59.623250 | orchestrator | 2026-02-08 06:22:59.623260 | orchestrator | PLAY [Upgrade ceph-crash daemons] ********************************************** 2026-02-08 06:22:59.623272 | orchestrator | 2026-02-08 06:22:59.623283 | orchestrator | TASK [Stop the ceph-crash service] ********************************************* 2026-02-08 06:22:59.623294 | orchestrator | Sunday 08 February 2026 06:22:42 +0000 (0:00:03.466) 0:31:40.940 ******* 2026-02-08 06:22:59.623305 | orchestrator | changed: [testbed-node-0] 2026-02-08 06:22:59.623317 | orchestrator | changed: [testbed-node-1] 2026-02-08 06:22:59.623327 | orchestrator | changed: [testbed-node-2] 2026-02-08 06:22:59.623339 | orchestrator | changed: [testbed-node-3] 2026-02-08 06:22:59.623349 | orchestrator | changed: [testbed-node-4] 2026-02-08 06:22:59.623360 | orchestrator | changed: [testbed-node-5] 2026-02-08 06:22:59.623371 | orchestrator | 2026-02-08 06:22:59.623382 | orchestrator | TASK [Mask and disable the ceph-crash service] ********************************* 2026-02-08 06:22:59.623393 | orchestrator | Sunday 08 February 2026 06:22:44 +0000 (0:00:01.886) 0:31:42.826 ******* 2026-02-08 06:22:59.623404 | orchestrator | changed: [testbed-node-3] 2026-02-08 06:22:59.623415 | orchestrator | changed: [testbed-node-0] 2026-02-08 06:22:59.623426 | orchestrator | changed: [testbed-node-1] 2026-02-08 06:22:59.623436 | orchestrator | changed: [testbed-node-2] 2026-02-08 06:22:59.623468 | 
orchestrator | changed: [testbed-node-4] 2026-02-08 06:22:59.623480 | orchestrator | changed: [testbed-node-5] 2026-02-08 06:22:59.623491 | orchestrator | 2026-02-08 06:22:59.623502 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-02-08 06:22:59.623513 | orchestrator | Sunday 08 February 2026 06:22:47 +0000 (0:00:02.460) 0:31:45.287 ******* 2026-02-08 06:22:59.623523 | orchestrator | ok: [testbed-node-0] 2026-02-08 06:22:59.623547 | orchestrator | ok: [testbed-node-1] 2026-02-08 06:22:59.623558 | orchestrator | ok: [testbed-node-2] 2026-02-08 06:22:59.623569 | orchestrator | ok: [testbed-node-3] 2026-02-08 06:22:59.623580 | orchestrator | ok: [testbed-node-4] 2026-02-08 06:22:59.623590 | orchestrator | ok: [testbed-node-5] 2026-02-08 06:22:59.623601 | orchestrator | 2026-02-08 06:22:59.623611 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-02-08 06:22:59.623622 | orchestrator | Sunday 08 February 2026 06:22:48 +0000 (0:00:01.039) 0:31:46.327 ******* 2026-02-08 06:22:59.623633 | orchestrator | ok: [testbed-node-0] 2026-02-08 06:22:59.623644 | orchestrator | ok: [testbed-node-1] 2026-02-08 06:22:59.623654 | orchestrator | ok: [testbed-node-2] 2026-02-08 06:22:59.623665 | orchestrator | ok: [testbed-node-3] 2026-02-08 06:22:59.623676 | orchestrator | ok: [testbed-node-4] 2026-02-08 06:22:59.623686 | orchestrator | ok: [testbed-node-5] 2026-02-08 06:22:59.623697 | orchestrator | 2026-02-08 06:22:59.623708 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-02-08 06:22:59.623719 | orchestrator | Sunday 08 February 2026 06:22:49 +0000 (0:00:01.447) 0:31:47.774 ******* 2026-02-08 06:22:59.623731 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-02-08 06:22:59.623743 | 
orchestrator | 2026-02-08 06:22:59.623754 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-02-08 06:22:59.623765 | orchestrator | Sunday 08 February 2026 06:22:51 +0000 (0:00:01.422) 0:31:49.197 ******* 2026-02-08 06:22:59.623776 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-02-08 06:22:59.623787 | orchestrator | 2026-02-08 06:22:59.623798 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-02-08 06:22:59.623808 | orchestrator | Sunday 08 February 2026 06:22:52 +0000 (0:00:01.399) 0:31:50.596 ******* 2026-02-08 06:22:59.623819 | orchestrator | skipping: [testbed-node-3] 2026-02-08 06:22:59.623830 | orchestrator | skipping: [testbed-node-4] 2026-02-08 06:22:59.623841 | orchestrator | ok: [testbed-node-0] 2026-02-08 06:22:59.623852 | orchestrator | skipping: [testbed-node-5] 2026-02-08 06:22:59.623862 | orchestrator | ok: [testbed-node-1] 2026-02-08 06:22:59.623873 | orchestrator | ok: [testbed-node-2] 2026-02-08 06:22:59.623884 | orchestrator | 2026-02-08 06:22:59.623895 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-02-08 06:22:59.623906 | orchestrator | Sunday 08 February 2026 06:22:53 +0000 (0:00:00.743) 0:31:51.340 ******* 2026-02-08 06:22:59.623916 | orchestrator | skipping: [testbed-node-0] 2026-02-08 06:22:59.623927 | orchestrator | skipping: [testbed-node-1] 2026-02-08 06:22:59.623938 | orchestrator | skipping: [testbed-node-2] 2026-02-08 06:22:59.623949 | orchestrator | ok: [testbed-node-3] 2026-02-08 06:22:59.623959 | orchestrator | ok: [testbed-node-4] 2026-02-08 06:22:59.623970 | orchestrator | ok: [testbed-node-5] 2026-02-08 06:22:59.623981 | orchestrator | 2026-02-08 06:22:59.623992 | orchestrator | TASK [ceph-handler : Check for a mds container] 
******************************** 2026-02-08 06:22:59.624003 | orchestrator | Sunday 08 February 2026 06:22:54 +0000 (0:00:01.367) 0:31:52.708 ******* 2026-02-08 06:22:59.624014 | orchestrator | skipping: [testbed-node-0] 2026-02-08 06:22:59.624053 | orchestrator | skipping: [testbed-node-1] 2026-02-08 06:22:59.624064 | orchestrator | skipping: [testbed-node-2] 2026-02-08 06:22:59.624075 | orchestrator | ok: [testbed-node-3] 2026-02-08 06:22:59.624086 | orchestrator | ok: [testbed-node-4] 2026-02-08 06:22:59.624097 | orchestrator | ok: [testbed-node-5] 2026-02-08 06:22:59.624108 | orchestrator | 2026-02-08 06:22:59.624119 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-02-08 06:22:59.624130 | orchestrator | Sunday 08 February 2026 06:22:55 +0000 (0:00:01.078) 0:31:53.786 ******* 2026-02-08 06:22:59.624141 | orchestrator | skipping: [testbed-node-0] 2026-02-08 06:22:59.624159 | orchestrator | skipping: [testbed-node-1] 2026-02-08 06:22:59.624171 | orchestrator | skipping: [testbed-node-2] 2026-02-08 06:22:59.624182 | orchestrator | ok: [testbed-node-3] 2026-02-08 06:22:59.624193 | orchestrator | ok: [testbed-node-4] 2026-02-08 06:22:59.624203 | orchestrator | ok: [testbed-node-5] 2026-02-08 06:22:59.624214 | orchestrator | 2026-02-08 06:22:59.624225 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-02-08 06:22:59.624237 | orchestrator | Sunday 08 February 2026 06:22:57 +0000 (0:00:01.411) 0:31:55.197 ******* 2026-02-08 06:22:59.624248 | orchestrator | skipping: [testbed-node-3] 2026-02-08 06:22:59.624258 | orchestrator | ok: [testbed-node-0] 2026-02-08 06:22:59.624269 | orchestrator | skipping: [testbed-node-4] 2026-02-08 06:22:59.624280 | orchestrator | ok: [testbed-node-1] 2026-02-08 06:22:59.624291 | orchestrator | skipping: [testbed-node-5] 2026-02-08 06:22:59.624302 | orchestrator | ok: [testbed-node-2] 2026-02-08 06:22:59.624313 | orchestrator | 
2026-02-08 06:22:59.624325 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-02-08 06:22:59.624336 | orchestrator | Sunday 08 February 2026 06:22:57 +0000 (0:00:00.786) 0:31:55.984 ******* 2026-02-08 06:22:59.624346 | orchestrator | skipping: [testbed-node-0] 2026-02-08 06:22:59.624357 | orchestrator | skipping: [testbed-node-1] 2026-02-08 06:22:59.624368 | orchestrator | skipping: [testbed-node-2] 2026-02-08 06:22:59.624379 | orchestrator | skipping: [testbed-node-3] 2026-02-08 06:22:59.624390 | orchestrator | skipping: [testbed-node-4] 2026-02-08 06:22:59.624401 | orchestrator | skipping: [testbed-node-5] 2026-02-08 06:22:59.624412 | orchestrator | 2026-02-08 06:22:59.624423 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-02-08 06:22:59.624434 | orchestrator | Sunday 08 February 2026 06:22:58 +0000 (0:00:01.002) 0:31:56.986 ******* 2026-02-08 06:22:59.624445 | orchestrator | skipping: [testbed-node-0] 2026-02-08 06:22:59.624456 | orchestrator | skipping: [testbed-node-1] 2026-02-08 06:22:59.624467 | orchestrator | skipping: [testbed-node-2] 2026-02-08 06:22:59.624478 | orchestrator | skipping: [testbed-node-3] 2026-02-08 06:22:59.624489 | orchestrator | skipping: [testbed-node-4] 2026-02-08 06:22:59.624506 | orchestrator | skipping: [testbed-node-5] 2026-02-08 06:23:30.353732 | orchestrator | 2026-02-08 06:23:30.353831 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-02-08 06:23:30.353844 | orchestrator | Sunday 08 February 2026 06:22:59 +0000 (0:00:00.671) 0:31:57.658 ******* 2026-02-08 06:23:30.353853 | orchestrator | ok: [testbed-node-0] 2026-02-08 06:23:30.353862 | orchestrator | ok: [testbed-node-1] 2026-02-08 06:23:30.353871 | orchestrator | ok: [testbed-node-2] 2026-02-08 06:23:30.353879 | orchestrator | ok: [testbed-node-3] 2026-02-08 06:23:30.353887 | orchestrator | ok: [testbed-node-4] 
2026-02-08 06:23:30.353895 | orchestrator | ok: [testbed-node-5]
2026-02-08 06:23:30.353903 | orchestrator |
2026-02-08 06:23:30.353912 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-02-08 06:23:30.353921 | orchestrator | Sunday 08 February 2026 06:23:01 +0000 (0:00:01.398) 0:31:59.057 *******
2026-02-08 06:23:30.353929 | orchestrator | ok: [testbed-node-0]
2026-02-08 06:23:30.353937 | orchestrator | ok: [testbed-node-1]
2026-02-08 06:23:30.353945 | orchestrator | ok: [testbed-node-2]
2026-02-08 06:23:30.353953 | orchestrator | ok: [testbed-node-3]
2026-02-08 06:23:30.353961 | orchestrator | ok: [testbed-node-4]
2026-02-08 06:23:30.353969 | orchestrator | ok: [testbed-node-5]
2026-02-08 06:23:30.353977 | orchestrator |
2026-02-08 06:23:30.353985 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-02-08 06:23:30.353993 | orchestrator | Sunday 08 February 2026 06:23:02 +0000 (0:00:01.088) 0:32:00.145 *******
2026-02-08 06:23:30.354001 | orchestrator | skipping: [testbed-node-0]
2026-02-08 06:23:30.354010 | orchestrator | skipping: [testbed-node-1]
2026-02-08 06:23:30.354104 | orchestrator | skipping: [testbed-node-2]
2026-02-08 06:23:30.354113 | orchestrator | skipping: [testbed-node-3]
2026-02-08 06:23:30.354121 | orchestrator | skipping: [testbed-node-4]
2026-02-08 06:23:30.354129 | orchestrator | skipping: [testbed-node-5]
2026-02-08 06:23:30.354156 | orchestrator |
2026-02-08 06:23:30.354165 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-02-08 06:23:30.354174 | orchestrator | Sunday 08 February 2026 06:23:02 +0000 (0:00:00.661) 0:32:00.806 *******
2026-02-08 06:23:30.354182 | orchestrator | ok: [testbed-node-0]
2026-02-08 06:23:30.354189 | orchestrator | ok: [testbed-node-1]
2026-02-08 06:23:30.354197 | orchestrator | ok: [testbed-node-2]
2026-02-08 06:23:30.354205 | orchestrator | skipping: [testbed-node-3]
2026-02-08 06:23:30.354213 | orchestrator | skipping: [testbed-node-4]
2026-02-08 06:23:30.354221 | orchestrator | skipping: [testbed-node-5]
2026-02-08 06:23:30.354229 | orchestrator |
2026-02-08 06:23:30.354237 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-02-08 06:23:30.354244 | orchestrator | Sunday 08 February 2026 06:23:03 +0000 (0:00:00.956) 0:32:01.763 *******
2026-02-08 06:23:30.354252 | orchestrator | skipping: [testbed-node-0]
2026-02-08 06:23:30.354260 | orchestrator | skipping: [testbed-node-1]
2026-02-08 06:23:30.354268 | orchestrator | skipping: [testbed-node-2]
2026-02-08 06:23:30.354276 | orchestrator | ok: [testbed-node-3]
2026-02-08 06:23:30.354285 | orchestrator | ok: [testbed-node-4]
2026-02-08 06:23:30.354295 | orchestrator | ok: [testbed-node-5]
2026-02-08 06:23:30.354304 | orchestrator |
2026-02-08 06:23:30.354314 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-02-08 06:23:30.354323 | orchestrator | Sunday 08 February 2026 06:23:04 +0000 (0:00:00.653) 0:32:02.417 *******
2026-02-08 06:23:30.354332 | orchestrator | skipping: [testbed-node-0]
2026-02-08 06:23:30.354342 | orchestrator | skipping: [testbed-node-1]
2026-02-08 06:23:30.354351 | orchestrator | skipping: [testbed-node-2]
2026-02-08 06:23:30.354360 | orchestrator | ok: [testbed-node-3]
2026-02-08 06:23:30.354368 | orchestrator | ok: [testbed-node-4]
2026-02-08 06:23:30.354377 | orchestrator | ok: [testbed-node-5]
2026-02-08 06:23:30.354386 | orchestrator |
2026-02-08 06:23:30.354395 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-02-08 06:23:30.354404 | orchestrator | Sunday 08 February 2026 06:23:05 +0000 (0:00:00.981) 0:32:03.398 *******
2026-02-08 06:23:30.354414 | orchestrator | skipping: [testbed-node-0]
2026-02-08 06:23:30.354423 | orchestrator | skipping: [testbed-node-1]
2026-02-08 06:23:30.354432 | orchestrator | skipping: [testbed-node-2]
2026-02-08 06:23:30.354441 | orchestrator | ok: [testbed-node-3]
2026-02-08 06:23:30.354450 | orchestrator | ok: [testbed-node-4]
2026-02-08 06:23:30.354458 | orchestrator | ok: [testbed-node-5]
2026-02-08 06:23:30.354468 | orchestrator |
2026-02-08 06:23:30.354478 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-02-08 06:23:30.354487 | orchestrator | Sunday 08 February 2026 06:23:06 +0000 (0:00:00.654) 0:32:04.053 *******
2026-02-08 06:23:30.354495 | orchestrator | skipping: [testbed-node-0]
2026-02-08 06:23:30.354505 | orchestrator | skipping: [testbed-node-1]
2026-02-08 06:23:30.354514 | orchestrator | skipping: [testbed-node-2]
2026-02-08 06:23:30.354523 | orchestrator | skipping: [testbed-node-3]
2026-02-08 06:23:30.354532 | orchestrator | skipping: [testbed-node-4]
2026-02-08 06:23:30.354540 | orchestrator | skipping: [testbed-node-5]
2026-02-08 06:23:30.354550 | orchestrator |
2026-02-08 06:23:30.354560 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-02-08 06:23:30.354569 | orchestrator | Sunday 08 February 2026 06:23:06 +0000 (0:00:00.929) 0:32:04.983 *******
2026-02-08 06:23:30.354578 | orchestrator | skipping: [testbed-node-0]
2026-02-08 06:23:30.354586 | orchestrator | skipping: [testbed-node-1]
2026-02-08 06:23:30.354596 | orchestrator | skipping: [testbed-node-2]
2026-02-08 06:23:30.354605 | orchestrator | skipping: [testbed-node-3]
2026-02-08 06:23:30.354614 | orchestrator | skipping: [testbed-node-4]
2026-02-08 06:23:30.354622 | orchestrator | skipping: [testbed-node-5]
2026-02-08 06:23:30.354631 | orchestrator |
2026-02-08 06:23:30.354641 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-02-08 06:23:30.354651 | orchestrator | Sunday 08 February 2026 06:23:07 +0000 (0:00:00.650) 0:32:05.633 *******
2026-02-08 06:23:30.354666 | orchestrator | ok: [testbed-node-0]
2026-02-08 06:23:30.354674 | orchestrator | ok: [testbed-node-1]
2026-02-08 06:23:30.354682 | orchestrator | ok: [testbed-node-2]
2026-02-08 06:23:30.354690 | orchestrator | skipping: [testbed-node-3]
2026-02-08 06:23:30.354698 | orchestrator | skipping: [testbed-node-4]
2026-02-08 06:23:30.354706 | orchestrator | skipping: [testbed-node-5]
2026-02-08 06:23:30.354713 | orchestrator |
2026-02-08 06:23:30.354721 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-02-08 06:23:30.354729 | orchestrator | Sunday 08 February 2026 06:23:08 +0000 (0:00:00.958) 0:32:06.591 *******
2026-02-08 06:23:30.354737 | orchestrator | ok: [testbed-node-0]
2026-02-08 06:23:30.354745 | orchestrator | ok: [testbed-node-1]
2026-02-08 06:23:30.354753 | orchestrator | ok: [testbed-node-2]
2026-02-08 06:23:30.354761 | orchestrator | ok: [testbed-node-3]
2026-02-08 06:23:30.354769 | orchestrator | ok: [testbed-node-4]
2026-02-08 06:23:30.354791 | orchestrator | ok: [testbed-node-5]
2026-02-08 06:23:30.354799 | orchestrator |
2026-02-08 06:23:30.354807 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-02-08 06:23:30.354815 | orchestrator | Sunday 08 February 2026 06:23:09 +0000 (0:00:00.670) 0:32:07.261 *******
2026-02-08 06:23:30.354823 | orchestrator | ok: [testbed-node-0]
2026-02-08 06:23:30.354831 | orchestrator | ok: [testbed-node-1]
2026-02-08 06:23:30.354839 | orchestrator | ok: [testbed-node-2]
2026-02-08 06:23:30.354846 | orchestrator | ok: [testbed-node-3]
2026-02-08 06:23:30.354854 | orchestrator | ok: [testbed-node-4]
2026-02-08 06:23:30.354862 | orchestrator | ok: [testbed-node-5]
2026-02-08 06:23:30.354870 | orchestrator |
2026-02-08 06:23:30.354877 | orchestrator | TASK [ceph-crash : Create client.crash keyring] ********************************
2026-02-08 06:23:30.354885 | orchestrator | Sunday 08 February 2026 06:23:10 +0000 (0:00:01.448) 0:32:08.710 *******
2026-02-08 06:23:30.354893 | orchestrator | ok: [testbed-node-0]
2026-02-08 06:23:30.354901 | orchestrator |
2026-02-08 06:23:30.354909 | orchestrator | TASK [ceph-crash : Get keys from monitors] *************************************
2026-02-08 06:23:30.354917 | orchestrator | Sunday 08 February 2026 06:23:12 +0000 (0:00:02.208) 0:32:10.919 *******
2026-02-08 06:23:30.354925 | orchestrator | ok: [testbed-node-0]
2026-02-08 06:23:30.354932 | orchestrator |
2026-02-08 06:23:30.354940 | orchestrator | TASK [ceph-crash : Copy ceph key(s) if needed] *********************************
2026-02-08 06:23:30.354948 | orchestrator | Sunday 08 February 2026 06:23:14 +0000 (0:00:02.079) 0:32:12.998 *******
2026-02-08 06:23:30.354956 | orchestrator | ok: [testbed-node-0]
2026-02-08 06:23:30.354964 | orchestrator | ok: [testbed-node-1]
2026-02-08 06:23:30.354972 | orchestrator | ok: [testbed-node-2]
2026-02-08 06:23:30.354979 | orchestrator | ok: [testbed-node-3]
2026-02-08 06:23:30.354987 | orchestrator | ok: [testbed-node-4]
2026-02-08 06:23:30.354995 | orchestrator | ok: [testbed-node-5]
2026-02-08 06:23:30.355003 | orchestrator |
2026-02-08 06:23:30.355011 | orchestrator | TASK [ceph-crash : Create /var/lib/ceph/crash/posted] **************************
2026-02-08 06:23:30.355018 | orchestrator | Sunday 08 February 2026 06:23:16 +0000 (0:00:01.833) 0:32:14.832 *******
2026-02-08 06:23:30.355044 | orchestrator | ok: [testbed-node-0]
2026-02-08 06:23:30.355053 | orchestrator | ok: [testbed-node-1]
2026-02-08 06:23:30.355061 | orchestrator | ok: [testbed-node-2]
2026-02-08 06:23:30.355068 | orchestrator | ok: [testbed-node-3]
2026-02-08 06:23:30.355076 | orchestrator | ok: [testbed-node-4]
2026-02-08 06:23:30.355084 | orchestrator | ok: [testbed-node-5]
2026-02-08 06:23:30.355092 | orchestrator |
2026-02-08 06:23:30.355100 | orchestrator | TASK [ceph-crash : Include_tasks systemd.yml] **********************************
2026-02-08 06:23:30.355107 | orchestrator | Sunday 08 February 2026 06:23:17 +0000 (0:00:01.114) 0:32:15.946 *******
2026-02-08 06:23:30.355116 | orchestrator | included: /ansible/roles/ceph-crash/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-02-08 06:23:30.355126 | orchestrator |
2026-02-08 06:23:30.355134 | orchestrator | TASK [ceph-crash : Generate systemd unit file for ceph-crash container] ********
2026-02-08 06:23:30.355142 | orchestrator | Sunday 08 February 2026 06:23:19 +0000 (0:00:01.731) 0:32:17.678 *******
2026-02-08 06:23:30.355155 | orchestrator | ok: [testbed-node-0]
2026-02-08 06:23:30.355163 | orchestrator | ok: [testbed-node-1]
2026-02-08 06:23:30.355171 | orchestrator | ok: [testbed-node-2]
2026-02-08 06:23:30.355178 | orchestrator | ok: [testbed-node-3]
2026-02-08 06:23:30.355186 | orchestrator | ok: [testbed-node-4]
2026-02-08 06:23:30.355194 | orchestrator | ok: [testbed-node-5]
2026-02-08 06:23:30.355201 | orchestrator |
2026-02-08 06:23:30.355209 | orchestrator | TASK [ceph-crash : Start the ceph-crash service] *******************************
2026-02-08 06:23:30.355217 | orchestrator | Sunday 08 February 2026 06:23:21 +0000 (0:00:01.538) 0:32:19.217 *******
2026-02-08 06:23:30.355225 | orchestrator | changed: [testbed-node-1]
2026-02-08 06:23:30.355233 | orchestrator | changed: [testbed-node-0]
2026-02-08 06:23:30.355241 | orchestrator | changed: [testbed-node-3]
2026-02-08 06:23:30.355249 | orchestrator | changed: [testbed-node-4]
2026-02-08 06:23:30.355257 | orchestrator | changed: [testbed-node-5]
2026-02-08 06:23:30.355264 | orchestrator | changed: [testbed-node-2]
2026-02-08 06:23:30.355272 | orchestrator |
2026-02-08 06:23:30.355280 | orchestrator | PLAY [Complete upgrade] ********************************************************
2026-02-08 06:23:30.355288 | orchestrator |
2026-02-08 06:23:30.355295 | orchestrator | TASK [ceph-facts : Check if podman binary is present] **************************
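(Editorial aside: the "Generate systemd unit file for ceph-crash container" task above renders a unit file template per node before the service is started. A minimal Python sketch of that render step, assuming a hypothetical, much-simplified unit template — the real ceph-ansible template has more options such as image name and cluster name:)

```python
from string import Template

# Hypothetical stand-in for the Jinja2 unit template the ceph-crash
# role renders on each node; field names here are illustrative only.
UNIT_TEMPLATE = Template("""\
[Unit]
Description=Ceph crash dump collector on $node
After=network-online.target

[Service]
ExecStart=$container_binary run --rm --name ceph-crash-$node -v /var/lib/ceph/crash:/var/lib/ceph/crash quay.io/ceph/ceph
Restart=always

[Install]
WantedBy=multi-user.target
""")


def render_unit(container_binary: str, node: str) -> str:
    """Render the unit file text for one node with its container runtime."""
    return UNIT_TEMPLATE.substitute(container_binary=container_binary, node=node)


if __name__ == "__main__":
    print(render_unit("docker", "testbed-node-0"))
```

Rendering once per host with host-specific facts (here just the runtime binary and node name) is what makes the task report `ok` on every node when nothing changed.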
2026-02-08 06:23:30.355303 | orchestrator | Sunday 08 February 2026 06:23:25 +0000 (0:00:04.194) 0:32:23.412 *******
2026-02-08 06:23:30.355311 | orchestrator | ok: [testbed-node-0]
2026-02-08 06:23:30.355319 | orchestrator | ok: [testbed-node-1]
2026-02-08 06:23:30.355327 | orchestrator | ok: [testbed-node-2]
2026-02-08 06:23:30.355335 | orchestrator |
2026-02-08 06:23:30.355343 | orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
2026-02-08 06:23:30.355350 | orchestrator | Sunday 08 February 2026 06:23:26 +0000 (0:00:00.684) 0:32:24.096 *******
2026-02-08 06:23:30.355358 | orchestrator | ok: [testbed-node-0]
2026-02-08 06:23:30.355366 | orchestrator | ok: [testbed-node-1]
2026-02-08 06:23:30.355374 | orchestrator | ok: [testbed-node-2]
2026-02-08 06:23:30.355382 | orchestrator |
2026-02-08 06:23:30.355390 | orchestrator | TASK [Container | disallow pre-reef OSDs and enable all new reef-only functionality] ***
2026-02-08 06:23:30.355398 | orchestrator | Sunday 08 February 2026 06:23:26 +0000 (0:00:00.643) 0:32:24.740 *******
2026-02-08 06:23:30.355406 | orchestrator | ok: [testbed-node-0]
2026-02-08 06:23:30.355414 | orchestrator |
2026-02-08 06:23:30.355422 | orchestrator | TASK [Non container | disallow pre-reef OSDs and enable all new reef-only functionality] ***
2026-02-08 06:23:30.355429 | orchestrator | Sunday 08 February 2026 06:23:28 +0000 (0:00:01.382) 0:32:26.123 *******
2026-02-08 06:23:30.355437 | orchestrator | skipping: [testbed-node-0]
2026-02-08 06:23:30.355445 | orchestrator |
2026-02-08 06:23:30.355453 | orchestrator | PLAY [Upgrade node-exporter] ***************************************************
2026-02-08 06:23:30.355461 | orchestrator |
2026-02-08 06:23:30.355469 | orchestrator | TASK [Stop node-exporter] ******************************************************
2026-02-08 06:23:30.355477 | orchestrator | Sunday 08 February 2026 06:23:29 +0000 (0:00:01.512) 0:32:27.635 *******
2026-02-08 06:23:30.355485 | orchestrator | skipping: [testbed-node-0]
2026-02-08 06:23:30.355493 | orchestrator | skipping: [testbed-node-1]
2026-02-08 06:23:30.355501 | orchestrator | skipping: [testbed-node-2]
2026-02-08 06:23:30.355508 | orchestrator | skipping: [testbed-node-3]
2026-02-08 06:23:30.355516 | orchestrator | skipping: [testbed-node-4]
2026-02-08 06:23:30.355524 | orchestrator | skipping: [testbed-node-5]
2026-02-08 06:23:30.355532 | orchestrator | skipping: [testbed-manager]
2026-02-08 06:23:30.355540 | orchestrator |
2026-02-08 06:23:30.355553 | orchestrator | TASK [ceph-facts : Include facts.yml] ******************************************
2026-02-08 06:23:52.690867 | orchestrator | Sunday 08 February 2026 06:23:30 +0000 (0:00:00.754) 0:32:28.390 *******
2026-02-08 06:23:52.690980 | orchestrator | skipping: [testbed-node-0]
2026-02-08 06:23:52.690996 | orchestrator | skipping: [testbed-node-1]
2026-02-08 06:23:52.691008 | orchestrator | skipping: [testbed-node-2]
2026-02-08 06:23:52.691019 | orchestrator | skipping: [testbed-node-3]
2026-02-08 06:23:52.691110 | orchestrator | skipping: [testbed-node-4]
2026-02-08 06:23:52.691123 | orchestrator | skipping: [testbed-node-5]
2026-02-08 06:23:52.691134 | orchestrator | skipping: [testbed-manager]
2026-02-08 06:23:52.691146 | orchestrator |
2026-02-08 06:23:52.691158 | orchestrator | TASK [ceph-container-engine : Include pre_requisites/prerequisites.yml] ********
2026-02-08 06:23:52.691170 | orchestrator | Sunday 08 February 2026 06:23:32 +0000 (0:00:02.024) 0:32:30.414 *******
2026-02-08 06:23:52.691180 | orchestrator | skipping: [testbed-node-0]
2026-02-08 06:23:52.691191 | orchestrator | skipping: [testbed-node-1]
2026-02-08 06:23:52.691202 | orchestrator | skipping: [testbed-node-2]
2026-02-08 06:23:52.691213 | orchestrator | skipping: [testbed-node-3]
2026-02-08 06:23:52.691224 | orchestrator | skipping: [testbed-node-4]
2026-02-08 06:23:52.691235 | orchestrator | skipping: [testbed-node-5]
2026-02-08 06:23:52.691245 | orchestrator | skipping: [testbed-manager]
2026-02-08 06:23:52.691256 | orchestrator |
2026-02-08 06:23:52.691267 | orchestrator | TASK [ceph-container-common : Container registry authentication] ***************
2026-02-08 06:23:52.691278 | orchestrator | Sunday 08 February 2026 06:23:33 +0000 (0:00:01.597) 0:32:32.011 *******
2026-02-08 06:23:52.691289 | orchestrator | skipping: [testbed-node-0]
2026-02-08 06:23:52.691300 | orchestrator | skipping: [testbed-node-1]
2026-02-08 06:23:52.691311 | orchestrator | skipping: [testbed-node-2]
2026-02-08 06:23:52.691322 | orchestrator | skipping: [testbed-node-3]
2026-02-08 06:23:52.691333 | orchestrator | skipping: [testbed-node-4]
2026-02-08 06:23:52.691343 | orchestrator | skipping: [testbed-node-5]
2026-02-08 06:23:52.691354 | orchestrator | skipping: [testbed-manager]
2026-02-08 06:23:52.691365 | orchestrator |
2026-02-08 06:23:52.691376 | orchestrator | TASK [ceph-node-exporter : Include setup_container.yml] ************************
2026-02-08 06:23:52.691387 | orchestrator | Sunday 08 February 2026 06:23:35 +0000 (0:00:01.617) 0:32:33.629 *******
2026-02-08 06:23:52.691398 | orchestrator | skipping: [testbed-node-0]
2026-02-08 06:23:52.691412 | orchestrator | skipping: [testbed-node-1]
2026-02-08 06:23:52.691425 | orchestrator | skipping: [testbed-node-2]
2026-02-08 06:23:52.691438 | orchestrator | skipping: [testbed-node-3]
2026-02-08 06:23:52.691450 | orchestrator | skipping: [testbed-node-4]
2026-02-08 06:23:52.691462 | orchestrator | skipping: [testbed-node-5]
2026-02-08 06:23:52.691475 | orchestrator | skipping: [testbed-manager]
2026-02-08 06:23:52.691488 | orchestrator |
2026-02-08 06:23:52.691501 | orchestrator | PLAY [Upgrade monitoring node] *************************************************
2026-02-08 06:23:52.691514 | orchestrator |
2026-02-08 06:23:52.691527 | orchestrator | TASK [Stop monitoring services] ************************************************
2026-02-08 06:23:52.691540 | orchestrator | Sunday 08 February 2026 06:23:37 +0000 (0:00:02.120) 0:32:35.749 *******
2026-02-08 06:23:52.691552 | orchestrator | skipping: [testbed-manager] => (item=alertmanager)
2026-02-08 06:23:52.691566 | orchestrator | skipping: [testbed-manager] => (item=prometheus)
2026-02-08 06:23:52.691579 | orchestrator | skipping: [testbed-manager] => (item=grafana-server)
2026-02-08 06:23:52.691591 | orchestrator | skipping: [testbed-manager]
2026-02-08 06:23:52.691604 | orchestrator |
2026-02-08 06:23:52.691617 | orchestrator | TASK [ceph-facts : Set grafana_server_addr fact - ipv4] ************************
2026-02-08 06:23:52.691629 | orchestrator | Sunday 08 February 2026 06:23:37 +0000 (0:00:00.171) 0:32:35.921 *******
2026-02-08 06:23:52.691642 | orchestrator | skipping: [testbed-manager]
2026-02-08 06:23:52.691654 | orchestrator |
2026-02-08 06:23:52.691667 | orchestrator | TASK [ceph-facts : Set grafana_server_addr fact - ipv6] ************************
2026-02-08 06:23:52.691681 | orchestrator | Sunday 08 February 2026 06:23:38 +0000 (0:00:00.164) 0:32:36.086 *******
2026-02-08 06:23:52.691693 | orchestrator | skipping: [testbed-manager]
2026-02-08 06:23:52.691706 | orchestrator |
2026-02-08 06:23:52.691718 | orchestrator | TASK [ceph-facts : Set grafana_server_addrs fact - ipv4] ***********************
2026-02-08 06:23:52.691731 | orchestrator | Sunday 08 February 2026 06:23:38 +0000 (0:00:00.178) 0:32:36.265 *******
2026-02-08 06:23:52.691743 | orchestrator | skipping: [testbed-manager]
2026-02-08 06:23:52.691764 | orchestrator |
2026-02-08 06:23:52.691777 | orchestrator | TASK [ceph-facts : Set grafana_server_addrs fact - ipv6] ***********************
2026-02-08 06:23:52.691788 | orchestrator | Sunday 08 February 2026 06:23:38 +0000 (0:00:00.156) 0:32:36.421 *******
2026-02-08 06:23:52.691798 | orchestrator | skipping: [testbed-manager]
2026-02-08 06:23:52.691809 | orchestrator |
2026-02-08 06:23:52.691820 | orchestrator | TASK [ceph-prometheus : Create prometheus directories] *************************
2026-02-08 06:23:52.691831 | orchestrator | Sunday 08 February 2026 06:23:38 +0000 (0:00:00.612) 0:32:37.034 *******
2026-02-08 06:23:52.691842 | orchestrator | skipping: [testbed-manager] => (item=/etc/prometheus)
2026-02-08 06:23:52.691854 | orchestrator | skipping: [testbed-manager] => (item=/var/lib/prometheus)
2026-02-08 06:23:52.691864 | orchestrator | skipping: [testbed-manager]
2026-02-08 06:23:52.691875 | orchestrator |
2026-02-08 06:23:52.691886 | orchestrator | TASK [ceph-prometheus : Write prometheus config file] **************************
2026-02-08 06:23:52.691898 | orchestrator | Sunday 08 February 2026 06:23:39 +0000 (0:00:00.189) 0:32:37.223 *******
2026-02-08 06:23:52.691909 | orchestrator | skipping: [testbed-manager]
2026-02-08 06:23:52.691920 | orchestrator |
2026-02-08 06:23:52.691930 | orchestrator | TASK [ceph-prometheus : Make sure the alerting rules directory exists] *********
2026-02-08 06:23:52.691941 | orchestrator | Sunday 08 February 2026 06:23:39 +0000 (0:00:00.173) 0:32:37.397 *******
2026-02-08 06:23:52.691952 | orchestrator | skipping: [testbed-manager]
2026-02-08 06:23:52.691963 | orchestrator |
2026-02-08 06:23:52.691974 | orchestrator | TASK [ceph-prometheus : Copy alerting rules] ***********************************
2026-02-08 06:23:52.691985 | orchestrator | Sunday 08 February 2026 06:23:39 +0000 (0:00:00.164) 0:32:37.561 *******
2026-02-08 06:23:52.691995 | orchestrator | skipping: [testbed-manager]
2026-02-08 06:23:52.692006 | orchestrator |
2026-02-08 06:23:52.692017 | orchestrator | TASK [ceph-prometheus : Create alertmanager directories] ***********************
2026-02-08 06:23:52.692028 | orchestrator | Sunday 08 February 2026 06:23:39 +0000 (0:00:00.164) 0:32:37.726 *******
2026-02-08 06:23:52.692054 | orchestrator | skipping: [testbed-manager] => (item=/etc/alertmanager)
2026-02-08 06:23:52.692083 | orchestrator | skipping: [testbed-manager] => (item=/var/lib/alertmanager)
2026-02-08 06:23:52.692094 | orchestrator | skipping: [testbed-manager]
2026-02-08 06:23:52.692105 | orchestrator |
2026-02-08 06:23:52.692116 | orchestrator | TASK [ceph-prometheus : Write alertmanager config file] ************************
2026-02-08 06:23:52.692127 | orchestrator | Sunday 08 February 2026 06:23:39 +0000 (0:00:00.179) 0:32:37.905 *******
2026-02-08 06:23:52.692138 | orchestrator | skipping: [testbed-manager]
2026-02-08 06:23:52.692149 | orchestrator |
2026-02-08 06:23:52.692160 | orchestrator | TASK [ceph-prometheus : Include setup_container.yml] ***************************
2026-02-08 06:23:52.692170 | orchestrator | Sunday 08 February 2026 06:23:39 +0000 (0:00:00.142) 0:32:38.048 *******
2026-02-08 06:23:52.692181 | orchestrator | skipping: [testbed-manager]
2026-02-08 06:23:52.692192 | orchestrator |
2026-02-08 06:23:52.692202 | orchestrator | TASK [ceph-grafana : Include setup_container.yml] ******************************
2026-02-08 06:23:52.692213 | orchestrator | Sunday 08 February 2026 06:23:40 +0000 (0:00:00.615) 0:32:38.663 *******
2026-02-08 06:23:52.692224 | orchestrator | skipping: [testbed-manager]
2026-02-08 06:23:52.692235 | orchestrator |
2026-02-08 06:23:52.692245 | orchestrator | TASK [ceph-grafana : Include configure_grafana.yml] ****************************
2026-02-08 06:23:52.692256 | orchestrator | Sunday 08 February 2026 06:23:40 +0000 (0:00:00.189) 0:32:38.853 *******
2026-02-08 06:23:52.692267 | orchestrator | skipping: [testbed-manager]
2026-02-08 06:23:52.692278 | orchestrator |
2026-02-08 06:23:52.692288 | orchestrator | PLAY [Upgrade ceph dashboard] **************************************************
2026-02-08 06:23:52.692299 | orchestrator |
2026-02-08 06:23:52.692310 | orchestrator | TASK [ceph-facts : Include facts.yml] ******************************************
2026-02-08 06:23:52.692321 | orchestrator | Sunday 08 February 2026 06:23:41 +0000 (0:00:00.849) 0:32:39.702 *******
2026-02-08 06:23:52.692331 | orchestrator | skipping: [testbed-node-0]
2026-02-08 06:23:52.692342 | orchestrator | skipping: [testbed-node-1]
2026-02-08 06:23:52.692360 | orchestrator | skipping: [testbed-node-2]
2026-02-08 06:23:52.692370 | orchestrator |
2026-02-08 06:23:52.692381 | orchestrator | TASK [ceph-facts : Set grafana_server_addr fact - ipv4] ************************
2026-02-08 06:23:52.692392 | orchestrator | Sunday 08 February 2026 06:23:42 +0000 (0:00:00.869) 0:32:40.572 *******
2026-02-08 06:23:52.692403 | orchestrator | skipping: [testbed-node-0]
2026-02-08 06:23:52.692414 | orchestrator | skipping: [testbed-node-1]
2026-02-08 06:23:52.692425 | orchestrator | skipping: [testbed-node-2]
2026-02-08 06:23:52.692435 | orchestrator |
2026-02-08 06:23:52.692446 | orchestrator | TASK [ceph-facts : Set grafana_server_addr fact - ipv6] ************************
2026-02-08 06:23:52.692457 | orchestrator | Sunday 08 February 2026 06:23:42 +0000 (0:00:00.361) 0:32:40.934 *******
2026-02-08 06:23:52.692468 | orchestrator | skipping: [testbed-node-0]
2026-02-08 06:23:52.692479 | orchestrator | skipping: [testbed-node-1]
2026-02-08 06:23:52.692489 | orchestrator | skipping: [testbed-node-2]
2026-02-08 06:23:52.692500 | orchestrator |
2026-02-08 06:23:52.692511 | orchestrator | TASK [ceph-facts : Set grafana_server_addrs fact - ipv4] ***********************
2026-02-08 06:23:52.692522 | orchestrator | Sunday 08 February 2026 06:23:43 +0000 (0:00:00.362) 0:32:41.296 *******
2026-02-08 06:23:52.692533 | orchestrator | skipping: [testbed-node-0]
2026-02-08 06:23:52.692544 | orchestrator | skipping: [testbed-node-1]
2026-02-08 06:23:52.692554 | orchestrator | skipping: [testbed-node-2]
2026-02-08 06:23:52.692565 | orchestrator |
2026-02-08 06:23:52.692576 | orchestrator | TASK [ceph-facts : Set grafana_server_addrs fact - ipv6] ***********************
2026-02-08 06:23:52.692587 | orchestrator | Sunday 08 February 2026 06:23:43 +0000 (0:00:00.335) 0:32:41.632 *******
2026-02-08 06:23:52.692597 | orchestrator | skipping: [testbed-node-0]
2026-02-08 06:23:52.692608 | orchestrator | skipping: [testbed-node-1]
2026-02-08 06:23:52.692619 | orchestrator | skipping: [testbed-node-2]
2026-02-08 06:23:52.692629 | orchestrator |
2026-02-08 06:23:52.692641 | orchestrator | TASK [ceph-dashboard : Include configure_dashboard.yml] ************************
2026-02-08 06:23:52.692651 | orchestrator | Sunday 08 February 2026 06:23:44 +0000 (0:00:00.852) 0:32:42.485 *******
2026-02-08 06:23:52.692662 | orchestrator | skipping: [testbed-node-0]
2026-02-08 06:23:52.692673 | orchestrator | skipping: [testbed-node-1]
2026-02-08 06:23:52.692684 | orchestrator | skipping: [testbed-node-2]
2026-02-08 06:23:52.692694 | orchestrator |
2026-02-08 06:23:52.692705 | orchestrator | TASK [ceph-dashboard : Print dashboard URL] ************************************
2026-02-08 06:23:52.692716 | orchestrator | Sunday 08 February 2026 06:23:44 +0000 (0:00:00.350) 0:32:42.835 *******
2026-02-08 06:23:52.692727 | orchestrator | skipping: [testbed-node-0]
2026-02-08 06:23:52.692738 | orchestrator |
2026-02-08 06:23:52.692748 | orchestrator | PLAY [Switch any existing crush buckets to straw2] *****************************
2026-02-08 06:23:52.692759 | orchestrator |
2026-02-08 06:23:52.692770 | orchestrator | TASK [ceph-facts : Check if podman binary is present] **************************
2026-02-08 06:23:52.692780 | orchestrator | Sunday 08 February 2026 06:23:45 +0000 (0:00:00.877) 0:32:43.713 *******
2026-02-08 06:23:52.692791 | orchestrator | ok: [testbed-node-0]
2026-02-08 06:23:52.692803 | orchestrator |
2026-02-08 06:23:52.692814 | orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
2026-02-08 06:23:52.692824 | orchestrator | Sunday 08 February 2026 06:23:46 +0000 (0:00:00.457) 0:32:44.170 *******
2026-02-08 06:23:52.692835 | orchestrator | ok: [testbed-node-0]
2026-02-08 06:23:52.692846 | orchestrator |
2026-02-08 06:23:52.692857 | orchestrator | TASK [Set_fact ceph_cmd] *******************************************************
2026-02-08 06:23:52.692868 | orchestrator | Sunday 08 February 2026 06:23:46 +0000 (0:00:00.279) 0:32:44.449 *******
2026-02-08 06:23:52.692879 | orchestrator | ok: [testbed-node-0]
2026-02-08 06:23:52.692889 | orchestrator |
2026-02-08 06:23:52.692900 | orchestrator | TASK [Backup the crushmap] *****************************************************
2026-02-08 06:23:52.692911 | orchestrator | Sunday 08 February 2026 06:23:46 +0000 (0:00:00.463) 0:32:44.913 *******
2026-02-08 06:23:52.692922 | orchestrator | ok: [testbed-node-0]
2026-02-08 06:23:52.692933 | orchestrator |
2026-02-08 06:23:52.692943 | orchestrator | TASK [Switch crush buckets to straw2] ******************************************
2026-02-08 06:23:52.692964 | orchestrator | Sunday 08 February 2026 06:23:48 +0000 (0:00:02.062) 0:32:46.975 *******
2026-02-08 06:23:52.692975 | orchestrator | ok: [testbed-node-0]
2026-02-08 06:23:52.692986 | orchestrator |
2026-02-08 06:23:52.692997 | orchestrator | TASK [Remove crushmap backup] **************************************************
2026-02-08 06:23:52.693008 | orchestrator | Sunday 08 February 2026 06:23:51 +0000 (0:00:02.624) 0:32:49.600 *******
2026-02-08 06:23:52.693025 | orchestrator | changed: [testbed-node-0]
2026-02-08 06:23:57.590889 | orchestrator |
2026-02-08 06:23:57.590994 | orchestrator | PLAY [Show ceph status] ********************************************************
2026-02-08 06:23:57.591011 | orchestrator |
2026-02-08 06:23:57.591024 | orchestrator | TASK [Set_fact container_exec_cmd_status] **************************************
2026-02-08 06:23:57.591102 | orchestrator | Sunday 08 February 2026 06:23:52 +0000 (0:00:01.124) 0:32:50.724 *******
2026-02-08 06:23:57.591115 | orchestrator | ok: [testbed-node-0]
2026-02-08 06:23:57.591127 | orchestrator | ok: [testbed-node-1]
2026-02-08 06:23:57.591138 | orchestrator | ok: [testbed-node-2]
2026-02-08 06:23:57.591150 | orchestrator |
2026-02-08 06:23:57.591161 | orchestrator | TASK [Show ceph status] ********************************************************
2026-02-08 06:23:57.591173 | orchestrator | Sunday 08 February 2026 06:23:53 +0000 (0:00:00.469) 0:32:51.194 *******
2026-02-08 06:23:57.591184 | orchestrator | ok: [testbed-node-0]
2026-02-08 06:23:57.591195 | orchestrator |
2026-02-08 06:23:57.591207 | orchestrator | TASK [Show all daemons version] ************************************************
2026-02-08 06:23:57.591217 | orchestrator | Sunday 08 February 2026 06:23:54 +0000 (0:00:01.204) 0:32:52.399 *******
2026-02-08 06:23:57.591227 | orchestrator | ok: [testbed-node-0]
2026-02-08 06:23:57.591238 | orchestrator |
2026-02-08 06:23:57.591248 | orchestrator | PLAY RECAP *********************************************************************
2026-02-08 06:23:57.591259 | orchestrator | localhost : ok=0 changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-02-08 06:23:57.591271 | orchestrator | testbed-manager : ok=25  changed=1  unreachable=0 failed=0 skipped=76  rescued=0 ignored=0
2026-02-08 06:23:57.591284 | orchestrator | testbed-node-0 : ok=248  changed=20  unreachable=0 failed=0 skipped=376  rescued=0 ignored=0
2026-02-08 06:23:57.591295 | orchestrator | testbed-node-1 : ok=191  changed=16  unreachable=0 failed=0 skipped=350  rescued=0 ignored=0
2026-02-08 06:23:57.591307 | orchestrator | testbed-node-2 : ok=196  changed=15  unreachable=0 failed=0 skipped=351  rescued=0 ignored=0
2026-02-08 06:23:57.591318 | orchestrator | testbed-node-3 : ok=316  changed=22  unreachable=0 failed=0 skipped=362  rescued=0 ignored=0
2026-02-08 06:23:57.591329 | orchestrator | testbed-node-4 : ok=302  changed=18  unreachable=0 failed=0 skipped=345  rescued=0 ignored=0
2026-02-08 06:23:57.591341 | orchestrator | testbed-node-5 : ok=309  changed=17  unreachable=0 failed=0 skipped=358  rescued=0 ignored=0
2026-02-08 06:23:57.591351 | orchestrator |
2026-02-08 06:23:57.591363 | orchestrator |
2026-02-08 06:23:57.591375 | orchestrator |
2026-02-08 06:23:57.591387 | orchestrator | TASKS RECAP ********************************************************************
2026-02-08 06:23:57.591399 | orchestrator | Sunday 08 February 2026 06:23:56 +0000 (0:00:02.383) 0:32:54.783 *******
2026-02-08 06:23:57.591414 | orchestrator | ===============================================================================
2026-02-08 06:23:57.591427 | orchestrator | Disable pg autoscale on pools ------------------------------------------ 74.68s
2026-02-08 06:23:57.591442 | orchestrator | Re-enable pg autoscale on pools ---------------------------------------- 74.42s
2026-02-08 06:23:57.591459 | orchestrator | Waiting for clean pgs... ----------------------------------------------- 37.13s
2026-02-08 06:23:57.591498 | orchestrator | ceph-rgw : Create rgw pools -------------------------------------------- 32.03s
2026-02-08 06:23:57.591511 | orchestrator | ceph-rgw : Create rgw pools -------------------------------------------- 31.64s
2026-02-08 06:23:57.591523 | orchestrator | ceph-rgw : Create rgw pools -------------------------------------------- 31.15s
2026-02-08 06:23:57.591534 | orchestrator | Gather and delegate facts ---------------------------------------------- 31.12s
2026-02-08 06:23:57.591546 | orchestrator | ceph-mon : Set cluster configs ----------------------------------------- 26.83s
2026-02-08 06:23:57.591557 | orchestrator | Stop ceph mgr ---------------------------------------------------------- 26.61s
2026-02-08 06:23:57.591569 | orchestrator | ceph-mon : Waiting for the monitor(s) to form the quorum... ------------ 21.96s
2026-02-08 06:23:57.591580 | orchestrator | ceph-config : Set config to cluster ------------------------------------ 21.17s
2026-02-08 06:23:57.591591 | orchestrator | Create potentially missing keys (rbd and rbd-mirror) ------------------- 13.89s
2026-02-08 06:23:57.591602 | orchestrator | ceph-osd : Wait for all osd to be up ----------------------------------- 12.93s
2026-02-08 06:23:57.591613 | orchestrator | ceph-config : Set config to cluster ------------------------------------ 11.28s
2026-02-08 06:23:57.591624 | orchestrator | ceph-config : Set osd_memory_target to cluster host config ------------- 11.14s
2026-02-08 06:23:57.591635 | orchestrator | ceph-config : Set osd_memory_target to cluster host config ------------- 11.07s
2026-02-08 06:23:57.591646 | orchestrator | Set cluster configs ---------------------------------------------------- 10.10s
2026-02-08 06:23:57.591657 | orchestrator | ceph-infra : Update cache for Debian based OSs ------------------------- 10.01s
2026-02-08 06:23:57.591668 | orchestrator | Stop standby ceph mds --------------------------------------------------- 9.23s
2026-02-08 06:23:57.591680 | orchestrator | Stop ceph osd ----------------------------------------------------------- 9.14s
2026-02-08 06:23:57.922320 | orchestrator | + osism apply cephclient
2026-02-08 06:24:00.093754 | orchestrator | 2026-02-08 06:24:00 | INFO  | Task 8add2da5-5b95-4f1d-8064-032f3396f2a1 (cephclient) was prepared for execution.
2026-02-08 06:24:00.093856 | orchestrator | 2026-02-08 06:24:00 | INFO  | It takes a moment until task 8add2da5-5b95-4f1d-8064-032f3396f2a1 (cephclient) has been started and output is visible here.
2026-02-08 06:24:18.661024 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_play_start) in callback plugin
2026-02-08 06:24:18.661186 | orchestrator | (): Expecting value: line 2 column 1 (char 1)
2026-02-08 06:24:18.661208 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_task_start) in callback plugin
2026-02-08 06:24:18.661215 | orchestrator | (): 'NoneType' object is not subscriptable
2026-02-08 06:24:18.661229 | orchestrator |
2026-02-08 06:24:18.661237 | orchestrator | PLAY [Apply role cephclient] ***************************************************
2026-02-08 06:24:18.661245 | orchestrator |
2026-02-08 06:24:18.661252 | orchestrator | TASK [osism.services.cephclient : Include container tasks] *********************
2026-02-08 06:24:18.661259 | orchestrator | Sunday 08 February 2026 06:24:06 +0000 (0:00:01.484) 0:00:01.484 *******
2026-02-08 06:24:18.661266 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/cephclient/tasks/container.yml for testbed-manager
2026-02-08 06:24:18.661274 | orchestrator |
2026-02-08 06:24:18.661282 | orchestrator | TASK [osism.services.cephclient : Create required directories] *****************
2026-02-08 06:24:18.661289 | orchestrator | Sunday 08 February 2026 06:24:06 +0000 (0:00:00.796) 0:00:02.281 *******
2026-02-08 06:24:18.661296 | orchestrator | ok: [testbed-manager] => (item=/opt/cephclient/configuration)
2026-02-08 06:24:18.661303 | orchestrator | ok: [testbed-manager] => (item=/opt/cephclient/data)
2026-02-08 06:24:18.661310 | orchestrator | ok: [testbed-manager] => (item=/opt/cephclient)
2026-02-08 06:24:18.661318 | orchestrator |
2026-02-08 06:24:18.661343 | orchestrator | TASK [osism.services.cephclient : Copy configuration files] ********************
2026-02-08 06:24:18.661350 | orchestrator | Sunday 08 February 2026 06:24:08 +0000 (0:00:01.699) 0:00:03.981 *******
2026-02-08 06:24:18.661357 | orchestrator | ok: [testbed-manager] => (item={'src': 'ceph.conf.j2', 'dest': '/opt/cephclient/configuration/ceph.conf'})
2026-02-08 06:24:18.661364 | orchestrator |
2026-02-08 06:24:18.661371 | orchestrator | TASK [osism.services.cephclient : Copy keyring file] ***************************
2026-02-08 06:24:18.661377 | orchestrator | Sunday 08 February 2026 06:24:09 +0000 (0:00:01.144) 0:00:05.125 *******
2026-02-08 06:24:18.661384 | orchestrator | ok: [testbed-manager]
2026-02-08 06:24:18.661391 | orchestrator |
2026-02-08 06:24:18.661397 | orchestrator | TASK [osism.services.cephclient : Copy docker-compose.yml file] ****************
2026-02-08 06:24:18.661404 | orchestrator | Sunday 08 February 2026 06:24:10 +0000 (0:00:00.933) 0:00:06.059 *******
2026-02-08 06:24:18.661410 | orchestrator | ok: [testbed-manager]
2026-02-08 06:24:18.661417 | orchestrator |
2026-02-08 06:24:18.661424 | orchestrator | TASK [osism.services.cephclient : Manage cephclient service] *******************
2026-02-08 06:24:18.661431 | orchestrator | Sunday 08 February 2026 06:24:11 +0000 (0:00:00.884) 0:00:06.944 *******
2026-02-08 06:24:18.661437 | orchestrator | ok: [testbed-manager]
2026-02-08 06:24:18.661444 | orchestrator |
2026-02-08 06:24:18.661451 | orchestrator | TASK [osism.services.cephclient : Copy wrapper scripts] ************************
2026-02-08 06:24:18.661457 | orchestrator | Sunday 08 February 2026 06:24:12 +0000 (0:00:01.085) 0:00:08.029 *******
2026-02-08 06:24:18.661464 | orchestrator | ok: [testbed-manager] => (item=ceph)
2026-02-08 06:24:18.661471 | orchestrator | ok: [testbed-manager] => (item=ceph-authtool)
2026-02-08 06:24:18.661478 | orchestrator | ok: [testbed-manager] => (item=rados)
2026-02-08 06:24:18.661484 | orchestrator | ok: [testbed-manager] => (item=radosgw-admin)
2026-02-08 06:24:18.661491 | orchestrator | ok: [testbed-manager] => (item=rbd)
2026-02-08 06:24:18.661498 | orchestrator |
2026-02-08 06:24:18.661504 | orchestrator | TASK [osism.services.cephclient : Remove old wrapper scripts] ******************
2026-02-08 06:24:18.661511 | orchestrator | Sunday 08 February 2026 06:24:16 +0000 (0:00:03.882) 0:00:11.912 *******
2026-02-08 06:24:18.661518 | orchestrator | ok: [testbed-manager] => (item=crushtool)
2026-02-08 06:24:18.661524 | orchestrator |
2026-02-08 06:24:18.661531 | orchestrator | TASK [osism.services.cephclient : Include package tasks] ***********************
2026-02-08 06:24:18.661537 | orchestrator | Sunday 08 February 2026 06:24:17 +0000 (0:00:00.458) 0:00:12.370 *******
2026-02-08 06:24:18.661544 | orchestrator | skipping: [testbed-manager]
2026-02-08 06:24:18.661551 | orchestrator |
2026-02-08 06:24:18.661558 | orchestrator | TASK [osism.services.cephclient : Include rook task] ***************************
2026-02-08 06:24:18.661564 | orchestrator | Sunday 08 February 2026 06:24:17 +0000 (0:00:00.157) 0:00:12.527 *******
2026-02-08 06:24:18.661572 | orchestrator | skipping: [testbed-manager]
2026-02-08 06:24:18.661580 | orchestrator |
2026-02-08 06:24:18.661588 | orchestrator | PLAY RECAP *********************************************************************
2026-02-08 06:24:18.661596 | orchestrator | testbed-manager : ok=8  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-08 06:24:18.661604 | orchestrator |
2026-02-08 06:24:18.661611 | orchestrator |
2026-02-08 06:24:18.661618 | orchestrator | TASKS RECAP ********************************************************************
2026-02-08 06:24:18.661625 | orchestrator | Sunday 08 February 2026 06:24:18 +0000 (0:00:01.113) 0:00:13.641 *******
2026-02-08 06:24:18.661633 | orchestrator | ===============================================================================
2026-02-08 06:24:18.661640 | orchestrator | osism.services.cephclient : Copy wrapper scripts ------------------------ 3.88s
2026-02-08 06:24:18.661647 | orchestrator | osism.services.cephclient : Create required directories ----------------- 1.70s
2026-02-08 06:24:18.661654 | orchestrator | 
osism.services.cephclient : Copy configuration files -------------------- 1.14s 2026-02-08 06:24:18.661662 | orchestrator | osism.services.cephclient : Include rook task --------------------------- 1.11s 2026-02-08 06:24:18.661670 | orchestrator | osism.services.cephclient : Manage cephclient service ------------------- 1.09s 2026-02-08 06:24:18.661682 | orchestrator | osism.services.cephclient : Copy keyring file --------------------------- 0.93s 2026-02-08 06:24:18.661702 | orchestrator | osism.services.cephclient : Copy docker-compose.yml file ---------------- 0.88s 2026-02-08 06:24:18.661709 | orchestrator | osism.services.cephclient : Include container tasks --------------------- 0.80s 2026-02-08 06:24:18.661717 | orchestrator | osism.services.cephclient : Remove old wrapper scripts ------------------ 0.46s 2026-02-08 06:24:18.661724 | orchestrator | osism.services.cephclient : Include package tasks ----------------------- 0.16s 2026-02-08 06:24:19.028587 | orchestrator | + [[ false == \f\a\l\s\e ]] 2026-02-08 06:24:19.028685 | orchestrator | + sh -c /opt/configuration/scripts/upgrade/300-openstack.sh 2026-02-08 06:24:19.033141 | orchestrator | + set -e 2026-02-08 06:24:19.033201 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-02-08 06:24:19.033225 | orchestrator | ++ export INTERACTIVE=false 2026-02-08 06:24:19.033247 | orchestrator | ++ INTERACTIVE=false 2026-02-08 06:24:19.033265 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-02-08 06:24:19.033281 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-02-08 06:24:19.033292 | orchestrator | + source /opt/manager-vars.sh 2026-02-08 06:24:19.033303 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-02-08 06:24:19.033314 | orchestrator | ++ NUMBER_OF_NODES=6 2026-02-08 06:24:19.033324 | orchestrator | ++ export CEPH_VERSION=reef 2026-02-08 06:24:19.033335 | orchestrator | ++ CEPH_VERSION=reef 2026-02-08 06:24:19.033346 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-02-08 
06:24:19.033357 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-02-08 06:24:19.033367 | orchestrator | ++ export MANAGER_VERSION=9.5.0 2026-02-08 06:24:19.033378 | orchestrator | ++ MANAGER_VERSION=9.5.0 2026-02-08 06:24:19.033389 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-02-08 06:24:19.033400 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-02-08 06:24:19.033411 | orchestrator | ++ export ARA=false 2026-02-08 06:24:19.033421 | orchestrator | ++ ARA=false 2026-02-08 06:24:19.033432 | orchestrator | ++ export DEPLOY_MODE=manager 2026-02-08 06:24:19.033443 | orchestrator | ++ DEPLOY_MODE=manager 2026-02-08 06:24:19.033454 | orchestrator | ++ export TEMPEST=false 2026-02-08 06:24:19.033464 | orchestrator | ++ TEMPEST=false 2026-02-08 06:24:19.033476 | orchestrator | ++ export IS_ZUUL=true 2026-02-08 06:24:19.033487 | orchestrator | ++ IS_ZUUL=true 2026-02-08 06:24:19.033497 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.37 2026-02-08 06:24:19.033508 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.37 2026-02-08 06:24:19.033519 | orchestrator | ++ export EXTERNAL_API=false 2026-02-08 06:24:19.033530 | orchestrator | ++ EXTERNAL_API=false 2026-02-08 06:24:19.033540 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-02-08 06:24:19.033551 | orchestrator | ++ IMAGE_USER=ubuntu 2026-02-08 06:24:19.033561 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-02-08 06:24:19.033572 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-02-08 06:24:19.033583 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-02-08 06:24:19.033593 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-02-08 06:24:19.033604 | orchestrator | ++ export RABBITMQ3TO4=true 2026-02-08 06:24:19.033614 | orchestrator | ++ RABBITMQ3TO4=true 2026-02-08 06:24:19.033625 | orchestrator | + source /opt/configuration/scripts/manager-version.sh 2026-02-08 06:24:19.034539 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' 
/opt/configuration/environments/manager/configuration.yml 2026-02-08 06:24:19.037626 | orchestrator | ++ export MANAGER_VERSION=10.0.0-rc.1 2026-02-08 06:24:19.037678 | orchestrator | ++ MANAGER_VERSION=10.0.0-rc.1 2026-02-08 06:24:19.037691 | orchestrator | + [[ true == \t\r\u\e ]] 2026-02-08 06:24:19.037702 | orchestrator | + osism migrate rabbitmq3to4 prepare 2026-02-08 06:24:41.273284 | orchestrator | 2026-02-08 06:24:41 | ERROR  | Unable to get ansible vault password 2026-02-08 06:24:41.273379 | orchestrator | 2026-02-08 06:24:41 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key' 2026-02-08 06:24:41.273388 | orchestrator | 2026-02-08 06:24:41 | ERROR  | Dropping encrypted entries 2026-02-08 06:24:41.306848 | orchestrator | 2026-02-08 06:24:41 | INFO  | Connecting to RabbitMQ Management API at 192.168.16.10:15672 (node: testbed-node-0) as openstack... 2026-02-08 06:24:41.307548 | orchestrator | 2026-02-08 06:24:41 | INFO  | Kolla configuration check passed 2026-02-08 06:24:41.490282 | orchestrator | 2026-02-08 06:24:41 | INFO  | Created vhost 'openstack' with default_queue_type=quorum 2026-02-08 06:24:41.505288 | orchestrator | 2026-02-08 06:24:41 | INFO  | Set permissions for user 'openstack' on vhost 'openstack' 2026-02-08 06:24:41.846468 | orchestrator | + osism migrate rabbitmq3to4 list 2026-02-08 06:25:02.753217 | orchestrator | 2026-02-08 06:25:02 | ERROR  | Unable to get ansible vault password 2026-02-08 06:25:02.753334 | orchestrator | 2026-02-08 06:25:02 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key' 2026-02-08 06:25:02.753351 | orchestrator | 2026-02-08 06:25:02 | ERROR  | Dropping encrypted entries 2026-02-08 06:25:02.784527 | orchestrator | 2026-02-08 06:25:02 | INFO  | Connecting to RabbitMQ Management API at 192.168.16.10:15672 (node: testbed-node-0) as openstack... 
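The `prepare` step logs "Created vhost 'openstack' with default_queue_type=quorum", which maps to a `PUT /api/vhosts/{name}` call against the RabbitMQ Management API (the `default_queue_type` vhost property exists in RabbitMQ 3.11+). A sketch that only *builds* such a request, assuming the host, credentials, and port seen in the log; sending it would of course require a live broker:

```python
import base64
import json
import urllib.request

def make_vhost_request(host, user, password, vhost, queue_type="quorum"):
    """Build (but do not send) the Management API request that creates a
    vhost with a default queue type, as the migration's prepare step does.
    Credentials here are placeholders, not the testbed's real secret."""
    url = f"http://{host}:15672/api/vhosts/{vhost}"
    body = json.dumps({"default_queue_type": queue_type}).encode()
    req = urllib.request.Request(url, data=body, method="PUT")
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    req.add_header("Authorization", f"Basic {token}")
    req.add_header("Content-Type", "application/json")
    return req

req = make_vhost_request("192.168.16.10", "openstack", "secret", "openstack")
print(req.get_method(), req.full_url)
```

The follow-up "Set permissions" message corresponds to a similar `PUT /api/permissions/{vhost}/{user}` call.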
2026-02-08 06:25:02.928117 | orchestrator | 2026-02-08 06:25:02 | INFO  | Found 205 classic queue(s) in vhost '/': 2026-02-08 06:25:02.928213 | orchestrator | 2026-02-08 06:25:02 | INFO  |  - alarm.all.sample (vhost: /, messages: 0) 2026-02-08 06:25:02.928228 | orchestrator | 2026-02-08 06:25:02 | INFO  |  - alarming.sample (vhost: /, messages: 0) 2026-02-08 06:25:02.928240 | orchestrator | 2026-02-08 06:25:02 | INFO  |  - barbican.workers (vhost: /, messages: 0) 2026-02-08 06:25:02.928252 | orchestrator | 2026-02-08 06:25:02 | INFO  |  - barbican.workers.barbican.queue (vhost: /, messages: 0) 2026-02-08 06:25:02.928265 | orchestrator | 2026-02-08 06:25:02 | INFO  |  - barbican.workers_fanout_347da8932d684bdfac52257a28ab340f (vhost: /, messages: 0) 2026-02-08 06:25:02.928278 | orchestrator | 2026-02-08 06:25:02 | INFO  |  - barbican.workers_fanout_4eb185057bc6439cbd5c28b6dd046d18 (vhost: /, messages: 0) 2026-02-08 06:25:02.928290 | orchestrator | 2026-02-08 06:25:02 | INFO  |  - barbican.workers_fanout_6e28fc6434fa4c6c895b3f24999a4c1a (vhost: /, messages: 0) 2026-02-08 06:25:02.928301 | orchestrator | 2026-02-08 06:25:02 | INFO  |  - barbican_notifications.info (vhost: /, messages: 0) 2026-02-08 06:25:02.928312 | orchestrator | 2026-02-08 06:25:02 | INFO  |  - central (vhost: /, messages: 1) 2026-02-08 06:25:02.931029 | orchestrator | 2026-02-08 06:25:02 | INFO  |  - central.testbed-node-0 (vhost: /, messages: 0) 2026-02-08 06:25:02.931106 | orchestrator | 2026-02-08 06:25:02 | INFO  |  - central.testbed-node-1 (vhost: /, messages: 0) 2026-02-08 06:25:02.931119 | orchestrator | 2026-02-08 06:25:02 | INFO  |  - central.testbed-node-2 (vhost: /, messages: 0) 2026-02-08 06:25:02.931130 | orchestrator | 2026-02-08 06:25:02 | INFO  |  - central_fanout_03fdedc1efed4e1eac17de91d4fa5f0e (vhost: /, messages: 0) 2026-02-08 06:25:02.931225 | orchestrator | 2026-02-08 06:25:02 | INFO  |  - central_fanout_183200fa6fc0449ca4cb7eedd7d4df14 (vhost: /, messages: 0) 2026-02-08 
06:25:02.931258 | orchestrator | 2026-02-08 06:25:02 | INFO  |  - central_fanout_23d10be2c0ec403a94c2612dc2016ca3 (vhost: /, messages: 0) 2026-02-08 06:25:02.931321 | orchestrator | 2026-02-08 06:25:02 | INFO  |  - central_fanout_752d35e507da421bbb656c2f26ededad (vhost: /, messages: 0) 2026-02-08 06:25:02.931335 | orchestrator | 2026-02-08 06:25:02 | INFO  |  - central_fanout_76c2f11575b14fd2a2598e37946ec498 (vhost: /, messages: 0) 2026-02-08 06:25:02.931347 | orchestrator | 2026-02-08 06:25:02 | INFO  |  - central_fanout_a401579398be4bad8f324a22c6fcc6a6 (vhost: /, messages: 0) 2026-02-08 06:25:02.931358 | orchestrator | 2026-02-08 06:25:02 | INFO  |  - cinder-backup (vhost: /, messages: 0) 2026-02-08 06:25:02.931370 | orchestrator | 2026-02-08 06:25:02 | INFO  |  - cinder-backup.testbed-node-0 (vhost: /, messages: 0) 2026-02-08 06:25:02.931410 | orchestrator | 2026-02-08 06:25:02 | INFO  |  - cinder-backup.testbed-node-1 (vhost: /, messages: 0) 2026-02-08 06:25:02.931422 | orchestrator | 2026-02-08 06:25:02 | INFO  |  - cinder-backup.testbed-node-2 (vhost: /, messages: 0) 2026-02-08 06:25:02.931433 | orchestrator | 2026-02-08 06:25:02 | INFO  |  - cinder-backup_fanout_a852cc8667d341a7b1a626c760ea7127 (vhost: /, messages: 0) 2026-02-08 06:25:02.931444 | orchestrator | 2026-02-08 06:25:02 | INFO  |  - cinder-backup_fanout_b324387efac9442e998a13efbc4b7b92 (vhost: /, messages: 0) 2026-02-08 06:25:02.931461 | orchestrator | 2026-02-08 06:25:02 | INFO  |  - cinder-backup_fanout_d794dd21ac5a4c2481eb1a974638ecc3 (vhost: /, messages: 0) 2026-02-08 06:25:02.931480 | orchestrator | 2026-02-08 06:25:02 | INFO  |  - cinder-scheduler (vhost: /, messages: 0) 2026-02-08 06:25:02.931608 | orchestrator | 2026-02-08 06:25:02 | INFO  |  - cinder-scheduler.testbed-node-0 (vhost: /, messages: 0) 2026-02-08 06:25:02.931625 | orchestrator | 2026-02-08 06:25:02 | INFO  |  - cinder-scheduler.testbed-node-1 (vhost: /, messages: 0) 2026-02-08 06:25:02.931636 | orchestrator | 2026-02-08 
06:25:02 | INFO  |  - cinder-scheduler.testbed-node-2 (vhost: /, messages: 0) 2026-02-08 06:25:02.931647 | orchestrator | 2026-02-08 06:25:02 | INFO  |  - cinder-scheduler_fanout_113667dc32664dc8b33b0b38a4a7b835 (vhost: /, messages: 0) 2026-02-08 06:25:02.931658 | orchestrator | 2026-02-08 06:25:02 | INFO  |  - cinder-scheduler_fanout_2cb4eea31a06468ba8cc80177ea13b09 (vhost: /, messages: 0) 2026-02-08 06:25:02.931669 | orchestrator | 2026-02-08 06:25:02 | INFO  |  - cinder-scheduler_fanout_8a4d92e196cb480d90b3e00ab49d82c4 (vhost: /, messages: 0) 2026-02-08 06:25:02.931680 | orchestrator | 2026-02-08 06:25:02 | INFO  |  - cinder-volume (vhost: /, messages: 0) 2026-02-08 06:25:02.931691 | orchestrator | 2026-02-08 06:25:02 | INFO  |  - cinder-volume.testbed-node-0@rbd-volumes (vhost: /, messages: 0) 2026-02-08 06:25:02.931702 | orchestrator | 2026-02-08 06:25:02 | INFO  |  - cinder-volume.testbed-node-0@rbd-volumes.testbed-node-0 (vhost: /, messages: 0) 2026-02-08 06:25:02.931769 | orchestrator | 2026-02-08 06:25:02 | INFO  |  - cinder-volume.testbed-node-0@rbd-volumes_fanout_47d981e5bc01470587cf3c8730ee5370 (vhost: /, messages: 0) 2026-02-08 06:25:02.931784 | orchestrator | 2026-02-08 06:25:02 | INFO  |  - cinder-volume.testbed-node-1@rbd-volumes (vhost: /, messages: 0) 2026-02-08 06:25:02.931795 | orchestrator | 2026-02-08 06:25:02 | INFO  |  - cinder-volume.testbed-node-1@rbd-volumes.testbed-node-1 (vhost: /, messages: 0) 2026-02-08 06:25:02.931806 | orchestrator | 2026-02-08 06:25:02 | INFO  |  - cinder-volume.testbed-node-1@rbd-volumes_fanout_4e515dafa29d4ca18e18af0dd9903a07 (vhost: /, messages: 0) 2026-02-08 06:25:02.931817 | orchestrator | 2026-02-08 06:25:02 | INFO  |  - cinder-volume.testbed-node-2@rbd-volumes (vhost: /, messages: 0) 2026-02-08 06:25:02.931870 | orchestrator | 2026-02-08 06:25:02 | INFO  |  - cinder-volume.testbed-node-2@rbd-volumes.testbed-node-2 (vhost: /, messages: 0) 2026-02-08 06:25:02.931993 | orchestrator | 2026-02-08 06:25:02 | INFO  
|  - cinder-volume.testbed-node-2@rbd-volumes_fanout_78a5eab29c4c488c82ef26629265ff56 (vhost: /, messages: 0) 2026-02-08 06:25:02.932019 | orchestrator | 2026-02-08 06:25:02 | INFO  |  - cinder-volume_fanout_02f06b8d70db4ec5b9452a51b587db66 (vhost: /, messages: 0) 2026-02-08 06:25:02.932036 | orchestrator | 2026-02-08 06:25:02 | INFO  |  - cinder-volume_fanout_a3d0f5574b0b48e493d646589fe7e2d9 (vhost: /, messages: 0) 2026-02-08 06:25:02.932167 | orchestrator | 2026-02-08 06:25:02 | INFO  |  - cinder-volume_fanout_b7af0f84e2964c66941e46cfed8a721c (vhost: /, messages: 0) 2026-02-08 06:25:02.932207 | orchestrator | 2026-02-08 06:25:02 | INFO  |  - compute (vhost: /, messages: 0) 2026-02-08 06:25:02.932331 | orchestrator | 2026-02-08 06:25:02 | INFO  |  - compute.testbed-node-3 (vhost: /, messages: 0) 2026-02-08 06:25:02.932370 | orchestrator | 2026-02-08 06:25:02 | INFO  |  - compute.testbed-node-4 (vhost: /, messages: 0) 2026-02-08 06:25:02.932389 | orchestrator | 2026-02-08 06:25:02 | INFO  |  - compute.testbed-node-5 (vhost: /, messages: 0) 2026-02-08 06:25:02.932407 | orchestrator | 2026-02-08 06:25:02 | INFO  |  - compute_fanout_a8010c7c8d0f4adbb7ee3eb2191b5456 (vhost: /, messages: 0) 2026-02-08 06:25:02.932597 | orchestrator | 2026-02-08 06:25:02 | INFO  |  - compute_fanout_be7950615ce244a9b1977ba9d08630e7 (vhost: /, messages: 0) 2026-02-08 06:25:02.932625 | orchestrator | 2026-02-08 06:25:02 | INFO  |  - compute_fanout_fd759331d75047a7823f6f102aaaa214 (vhost: /, messages: 0) 2026-02-08 06:25:02.932830 | orchestrator | 2026-02-08 06:25:02 | INFO  |  - conductor (vhost: /, messages: 0) 2026-02-08 06:25:02.932940 | orchestrator | 2026-02-08 06:25:02 | INFO  |  - conductor.testbed-node-0 (vhost: /, messages: 0) 2026-02-08 06:25:02.933116 | orchestrator | 2026-02-08 06:25:02 | INFO  |  - conductor.testbed-node-1 (vhost: /, messages: 0) 2026-02-08 06:25:02.933138 | orchestrator | 2026-02-08 06:25:02 | INFO  |  - conductor.testbed-node-2 (vhost: /, messages: 0) 
2026-02-08 06:25:02.933265 | orchestrator | 2026-02-08 06:25:02 | INFO  |  - conductor_fanout_2229dd23becd41d88449426002db262b (vhost: /, messages: 0) 2026-02-08 06:25:02.933403 | orchestrator | 2026-02-08 06:25:02 | INFO  |  - conductor_fanout_4385409a5d564657810bb6a1072c5027 (vhost: /, messages: 0) 2026-02-08 06:25:02.934296 | orchestrator | 2026-02-08 06:25:02 | INFO  |  - conductor_fanout_4d490050d73d4a4ba54778ba7ab96793 (vhost: /, messages: 0) 2026-02-08 06:25:02.934407 | orchestrator | 2026-02-08 06:25:02 | INFO  |  - conductor_fanout_ab936e6c0042456ea0f28893363d2c6d (vhost: /, messages: 0) 2026-02-08 06:25:02.934426 | orchestrator | 2026-02-08 06:25:02 | INFO  |  - conductor_fanout_b1941e3c71724c87b796f988b64625d4 (vhost: /, messages: 0) 2026-02-08 06:25:02.934439 | orchestrator | 2026-02-08 06:25:02 | INFO  |  - conductor_fanout_bb464b1fba6f46a89c3b5ddb9dac2819 (vhost: /, messages: 0) 2026-02-08 06:25:02.934451 | orchestrator | 2026-02-08 06:25:02 | INFO  |  - event.sample (vhost: /, messages: 5) 2026-02-08 06:25:02.934463 | orchestrator | 2026-02-08 06:25:02 | INFO  |  - magnum-conductor (vhost: /, messages: 0) 2026-02-08 06:25:02.934702 | orchestrator | 2026-02-08 06:25:02 | INFO  |  - magnum-conductor.5kaxgy54t6eg (vhost: /, messages: 0) 2026-02-08 06:25:02.934746 | orchestrator | 2026-02-08 06:25:02 | INFO  |  - magnum-conductor.pzpmwri7hqjj (vhost: /, messages: 0) 2026-02-08 06:25:02.934758 | orchestrator | 2026-02-08 06:25:02 | INFO  |  - magnum-conductor.rsaypnsvdb6c (vhost: /, messages: 0) 2026-02-08 06:25:02.934769 | orchestrator | 2026-02-08 06:25:02 | INFO  |  - magnum-conductor_fanout_1093017b39d14898bb289bf649c45b4a (vhost: /, messages: 0) 2026-02-08 06:25:02.934782 | orchestrator | 2026-02-08 06:25:02 | INFO  |  - magnum-conductor_fanout_2033835c71b845e4a5bf99a176e106bd (vhost: /, messages: 0) 2026-02-08 06:25:02.934793 | orchestrator | 2026-02-08 06:25:02 | INFO  |  - magnum-conductor_fanout_274934d0dfda45b185ee564f325e8417 (vhost: /, 
messages: 0) 2026-02-08 06:25:02.934963 | orchestrator | 2026-02-08 06:25:02 | INFO  |  - magnum-conductor_fanout_394c5577b6aa415fa787e485fa2aa6d9 (vhost: /, messages: 0) 2026-02-08 06:25:02.934982 | orchestrator | 2026-02-08 06:25:02 | INFO  |  - magnum-conductor_fanout_6011821ead704260ae0bf91e79df2ca8 (vhost: /, messages: 0) 2026-02-08 06:25:02.935013 | orchestrator | 2026-02-08 06:25:02 | INFO  |  - magnum-conductor_fanout_701535c552c8475d8148e091d72206bc (vhost: /, messages: 0) 2026-02-08 06:25:02.935025 | orchestrator | 2026-02-08 06:25:02 | INFO  |  - magnum-conductor_fanout_751a9f9f542a493685e2c9cc4647c84f (vhost: /, messages: 0) 2026-02-08 06:25:02.935177 | orchestrator | 2026-02-08 06:25:02 | INFO  |  - magnum-conductor_fanout_90f63e0770ee4ebb928c084a7b0324f8 (vhost: /, messages: 0) 2026-02-08 06:25:02.935273 | orchestrator | 2026-02-08 06:25:02 | INFO  |  - magnum-conductor_fanout_dc60b2de01024815ad4e8d42a65f7625 (vhost: /, messages: 0) 2026-02-08 06:25:02.935286 | orchestrator | 2026-02-08 06:25:02 | INFO  |  - manila-data (vhost: /, messages: 0) 2026-02-08 06:25:02.935298 | orchestrator | 2026-02-08 06:25:02 | INFO  |  - manila-data.testbed-node-0 (vhost: /, messages: 0) 2026-02-08 06:25:02.935696 | orchestrator | 2026-02-08 06:25:02 | INFO  |  - manila-data.testbed-node-1 (vhost: /, messages: 0) 2026-02-08 06:25:02.935715 | orchestrator | 2026-02-08 06:25:02 | INFO  |  - manila-data.testbed-node-2 (vhost: /, messages: 0) 2026-02-08 06:25:02.935856 | orchestrator | 2026-02-08 06:25:02 | INFO  |  - manila-data_fanout_216f33503db3496d84875d19a5204aba (vhost: /, messages: 0) 2026-02-08 06:25:02.935930 | orchestrator | 2026-02-08 06:25:02 | INFO  |  - manila-data_fanout_809ebbf131b7421daa2053cadc6573fb (vhost: /, messages: 0) 2026-02-08 06:25:02.936238 | orchestrator | 2026-02-08 06:25:02 | INFO  |  - manila-data_fanout_9a37ee5add494ce6acdc25204e02fd3b (vhost: /, messages: 0) 2026-02-08 06:25:02.936804 | orchestrator | 2026-02-08 06:25:02 | INFO  |  - 
manila-scheduler (vhost: /, messages: 0) 2026-02-08 06:25:02.937110 | orchestrator | 2026-02-08 06:25:02 | INFO  |  - manila-scheduler.testbed-node-0 (vhost: /, messages: 0) 2026-02-08 06:25:02.937132 | orchestrator | 2026-02-08 06:25:02 | INFO  |  - manila-scheduler.testbed-node-1 (vhost: /, messages: 0) 2026-02-08 06:25:02.937144 | orchestrator | 2026-02-08 06:25:02 | INFO  |  - manila-scheduler.testbed-node-2 (vhost: /, messages: 0) 2026-02-08 06:25:02.937156 | orchestrator | 2026-02-08 06:25:02 | INFO  |  - manila-scheduler_fanout_4157d271e85041229dd91677690184be (vhost: /, messages: 0) 2026-02-08 06:25:02.937179 | orchestrator | 2026-02-08 06:25:02 | INFO  |  - manila-scheduler_fanout_43e8a83bcb9644aa9938b2fb71b357c3 (vhost: /, messages: 0) 2026-02-08 06:25:02.937191 | orchestrator | 2026-02-08 06:25:02 | INFO  |  - manila-scheduler_fanout_addb4477520a409d9705c6bbc2b32fca (vhost: /, messages: 0) 2026-02-08 06:25:02.937202 | orchestrator | 2026-02-08 06:25:02 | INFO  |  - manila-share (vhost: /, messages: 0) 2026-02-08 06:25:02.937383 | orchestrator | 2026-02-08 06:25:02 | INFO  |  - manila-share.testbed-node-0@cephfsnative1 (vhost: /, messages: 0) 2026-02-08 06:25:02.937609 | orchestrator | 2026-02-08 06:25:02 | INFO  |  - manila-share.testbed-node-1@cephfsnative1 (vhost: /, messages: 0) 2026-02-08 06:25:02.937634 | orchestrator | 2026-02-08 06:25:02 | INFO  |  - manila-share.testbed-node-2@cephfsnative1 (vhost: /, messages: 0) 2026-02-08 06:25:02.937942 | orchestrator | 2026-02-08 06:25:02 | INFO  |  - manila-share_fanout_092284e87d844fc09a165be02877fbae (vhost: /, messages: 0) 2026-02-08 06:25:02.937963 | orchestrator | 2026-02-08 06:25:02 | INFO  |  - manila-share_fanout_ff8f64e05f0149c69eff3d7c3e52a7f1 (vhost: /, messages: 0) 2026-02-08 06:25:02.937975 | orchestrator | 2026-02-08 06:25:02 | INFO  |  - notifications.audit (vhost: /, messages: 0) 2026-02-08 06:25:02.937987 | orchestrator | 2026-02-08 06:25:02 | INFO  |  - notifications.critical (vhost: /, 
messages: 0) 2026-02-08 06:25:02.938204 | orchestrator | 2026-02-08 06:25:02 | INFO  |  - notifications.debug (vhost: /, messages: 0) 2026-02-08 06:25:02.938304 | orchestrator | 2026-02-08 06:25:02 | INFO  |  - notifications.error (vhost: /, messages: 0) 2026-02-08 06:25:02.938471 | orchestrator | 2026-02-08 06:25:02 | INFO  |  - notifications.info (vhost: /, messages: 0) 2026-02-08 06:25:02.938490 | orchestrator | 2026-02-08 06:25:02 | INFO  |  - notifications.sample (vhost: /, messages: 0) 2026-02-08 06:25:02.938821 | orchestrator | 2026-02-08 06:25:02 | INFO  |  - notifications.warn (vhost: /, messages: 0) 2026-02-08 06:25:02.939016 | orchestrator | 2026-02-08 06:25:02 | INFO  |  - octavia_provisioning_v2 (vhost: /, messages: 0) 2026-02-08 06:25:02.939035 | orchestrator | 2026-02-08 06:25:02 | INFO  |  - octavia_provisioning_v2.testbed-node-0 (vhost: /, messages: 0) 2026-02-08 06:25:02.939074 | orchestrator | 2026-02-08 06:25:02 | INFO  |  - octavia_provisioning_v2.testbed-node-1 (vhost: /, messages: 0) 2026-02-08 06:25:02.939192 | orchestrator | 2026-02-08 06:25:02 | INFO  |  - octavia_provisioning_v2.testbed-node-2 (vhost: /, messages: 0) 2026-02-08 06:25:02.939354 | orchestrator | 2026-02-08 06:25:02 | INFO  |  - octavia_provisioning_v2_fanout_4ce20adecfed45fdba3f6b623bc81423 (vhost: /, messages: 0) 2026-02-08 06:25:02.940571 | orchestrator | 2026-02-08 06:25:02 | INFO  |  - octavia_provisioning_v2_fanout_e3858786f4be49d6ad46945fffbe2fff (vhost: /, messages: 0) 2026-02-08 06:25:02.940597 | orchestrator | 2026-02-08 06:25:02 | INFO  |  - producer (vhost: /, messages: 0) 2026-02-08 06:25:02.940610 | orchestrator | 2026-02-08 06:25:02 | INFO  |  - producer.testbed-node-0 (vhost: /, messages: 0) 2026-02-08 06:25:02.940621 | orchestrator | 2026-02-08 06:25:02 | INFO  |  - producer.testbed-node-1 (vhost: /, messages: 0) 2026-02-08 06:25:02.940632 | orchestrator | 2026-02-08 06:25:02 | INFO  |  - producer.testbed-node-2 (vhost: /, messages: 0) 2026-02-08 
06:25:02.940643 | orchestrator | 2026-02-08 06:25:02 | INFO  |  - producer_fanout_6dd33ad512324b4b9788b2e315ee678d (vhost: /, messages: 0) 2026-02-08 06:25:02.940655 | orchestrator | 2026-02-08 06:25:02 | INFO  |  - producer_fanout_6f820e090bb14864bf7f06a1e833c009 (vhost: /, messages: 0) 2026-02-08 06:25:02.940666 | orchestrator | 2026-02-08 06:25:02 | INFO  |  - producer_fanout_7469238b0b4b4e50a0f87fd1dcd8bf46 (vhost: /, messages: 0) 2026-02-08 06:25:02.940755 | orchestrator | 2026-02-08 06:25:02 | INFO  |  - producer_fanout_768e464c56d141628280c96ecbedbe9f (vhost: /, messages: 0) 2026-02-08 06:25:02.940768 | orchestrator | 2026-02-08 06:25:02 | INFO  |  - producer_fanout_87464b15306040d7a605c6bdd05596f3 (vhost: /, messages: 0) 2026-02-08 06:25:02.940779 | orchestrator | 2026-02-08 06:25:02 | INFO  |  - producer_fanout_f1b95467017c4ba7ba4e070a3b5333b5 (vhost: /, messages: 0) 2026-02-08 06:25:02.940790 | orchestrator | 2026-02-08 06:25:02 | INFO  |  - q-plugin (vhost: /, messages: 0) 2026-02-08 06:25:02.940801 | orchestrator | 2026-02-08 06:25:02 | INFO  |  - q-plugin.testbed-node-0 (vhost: /, messages: 0) 2026-02-08 06:25:02.940812 | orchestrator | 2026-02-08 06:25:02 | INFO  |  - q-plugin.testbed-node-1 (vhost: /, messages: 0) 2026-02-08 06:25:02.940823 | orchestrator | 2026-02-08 06:25:02 | INFO  |  - q-plugin.testbed-node-2 (vhost: /, messages: 0) 2026-02-08 06:25:02.940888 | orchestrator | 2026-02-08 06:25:02 | INFO  |  - q-plugin_fanout_153ee92cfa2447398cb7360be844f327 (vhost: /, messages: 0) 2026-02-08 06:25:02.940902 | orchestrator | 2026-02-08 06:25:02 | INFO  |  - q-plugin_fanout_2469cad74a4a41268093ea16758c40bb (vhost: /, messages: 0) 2026-02-08 06:25:02.940913 | orchestrator | 2026-02-08 06:25:02 | INFO  |  - q-plugin_fanout_32f96312e0e84b44a4e7d37317ed249f (vhost: /, messages: 0) 2026-02-08 06:25:02.940942 | orchestrator | 2026-02-08 06:25:02 | INFO  |  - q-plugin_fanout_4da7d2089e954e89a2a532fb5bb8e9f5 (vhost: /, messages: 0) 2026-02-08 
06:25:02.940953 | orchestrator | 2026-02-08 06:25:02 | INFO  |  - q-plugin_fanout_5180bc57dfde4b00b268d6faf9dfffd8 (vhost: /, messages: 0) 2026-02-08 06:25:02.940970 | orchestrator | 2026-02-08 06:25:02 | INFO  |  - q-plugin_fanout_93c38a1cb3444159aa91af77a949305a (vhost: /, messages: 0) 2026-02-08 06:25:02.940982 | orchestrator | 2026-02-08 06:25:02 | INFO  |  - q-plugin_fanout_a88277f47544451ab556d7bd78eb10ae (vhost: /, messages: 0) 2026-02-08 06:25:02.940993 | orchestrator | 2026-02-08 06:25:02 | INFO  |  - q-plugin_fanout_c51cddffd0a043ed8aa3a8ddb4b90bf4 (vhost: /, messages: 0) 2026-02-08 06:25:02.941003 | orchestrator | 2026-02-08 06:25:02 | INFO  |  - q-plugin_fanout_f56685ec2e874bf2b51d3415531b58e6 (vhost: /, messages: 0) 2026-02-08 06:25:02.941014 | orchestrator | 2026-02-08 06:25:02 | INFO  |  - q-reports-plugin (vhost: /, messages: 0) 2026-02-08 06:25:02.941025 | orchestrator | 2026-02-08 06:25:02 | INFO  |  - q-reports-plugin.testbed-node-0 (vhost: /, messages: 0) 2026-02-08 06:25:02.941037 | orchestrator | 2026-02-08 06:25:02 | INFO  |  - q-reports-plugin.testbed-node-1 (vhost: /, messages: 0) 2026-02-08 06:25:02.941067 | orchestrator | 2026-02-08 06:25:02 | INFO  |  - q-reports-plugin.testbed-node-2 (vhost: /, messages: 0) 2026-02-08 06:25:02.941233 | orchestrator | 2026-02-08 06:25:02 | INFO  |  - q-reports-plugin_fanout_059fded33e0c49449a62af1a17152f03 (vhost: /, messages: 0) 2026-02-08 06:25:02.941369 | orchestrator | 2026-02-08 06:25:02 | INFO  |  - q-reports-plugin_fanout_21042ce9c5e94a82acaa201350c9c34e (vhost: /, messages: 0) 2026-02-08 06:25:02.941385 | orchestrator | 2026-02-08 06:25:02 | INFO  |  - q-reports-plugin_fanout_35e35c4d2ddb46a680615a758078ef34 (vhost: /, messages: 0) 2026-02-08 06:25:02.941401 | orchestrator | 2026-02-08 06:25:02 | INFO  |  - q-reports-plugin_fanout_3eb6635f3bfc47aaace5bd87e335d9a8 (vhost: /, messages: 0) 2026-02-08 06:25:02.941412 | orchestrator | 2026-02-08 06:25:02 | INFO  |  - 
q-reports-plugin_fanout_495f16e3b511444182c8f85d463d2e01 (vhost: /, messages: 0)
2026-02-08 06:25:02.941423 | orchestrator | 2026-02-08 06:25:02 | INFO  |  - q-reports-plugin_fanout_5257d4214e344ec9b20ce485cfd866f1 (vhost: /, messages: 0)
2026-02-08 06:25:02.941434 | orchestrator | 2026-02-08 06:25:02 | INFO  |  - q-reports-plugin_fanout_55fd39e768e54147bdaf2844c80207ef (vhost: /, messages: 0)
2026-02-08 06:25:02.941821 | orchestrator | 2026-02-08 06:25:02 | INFO  |  - q-reports-plugin_fanout_75a98fdeaa754bbab447b99184dcf810 (vhost: /, messages: 0)
2026-02-08 06:25:02.941840 | orchestrator | 2026-02-08 06:25:02 | INFO  |  - q-reports-plugin_fanout_7ef242312a01479b9a5b726c851948bb (vhost: /, messages: 0)
2026-02-08 06:25:02.941851 | orchestrator | 2026-02-08 06:25:02 | INFO  |  - q-reports-plugin_fanout_8eba51d6c4e84d3695e46f5a13a1a05a (vhost: /, messages: 0)
2026-02-08 06:25:02.942279 | orchestrator | 2026-02-08 06:25:02 | INFO  |  - q-reports-plugin_fanout_a361770c0fb745ffa8e90756189c72ad (vhost: /, messages: 0)
2026-02-08 06:25:02.942304 | orchestrator | 2026-02-08 06:25:02 | INFO  |  - q-reports-plugin_fanout_a6b128cda936452bb00b3983e082c2e4 (vhost: /, messages: 0)
2026-02-08 06:25:02.942316 | orchestrator | 2026-02-08 06:25:02 | INFO  |  - q-reports-plugin_fanout_aabfb51b78d64f19ae88fb4cccb549df (vhost: /, messages: 0)
2026-02-08 06:25:02.942327 | orchestrator | 2026-02-08 06:25:02 | INFO  |  - q-reports-plugin_fanout_beda18e6e0e74e8892877176e79e31f4 (vhost: /, messages: 0)
2026-02-08 06:25:02.942338 | orchestrator | 2026-02-08 06:25:02 | INFO  |  - q-reports-plugin_fanout_ca00d7b43f734422b99d7c118b4c3df4 (vhost: /, messages: 0)
2026-02-08 06:25:02.942362 | orchestrator | 2026-02-08 06:25:02 | INFO  |  - q-reports-plugin_fanout_ce1d0ad220c74400be018f509b450d7d (vhost: /, messages: 0)
2026-02-08 06:25:02.942373 | orchestrator | 2026-02-08 06:25:02 | INFO  |  - q-reports-plugin_fanout_d3ce02d19bd44fcd9c84348bf0dbb99d (vhost: /, messages: 0)
2026-02-08 06:25:02.942384 | orchestrator | 2026-02-08 06:25:02 | INFO  |  - q-reports-plugin_fanout_ff57bda6a2164776aa0e713373515b1b (vhost: /, messages: 0)
2026-02-08 06:25:02.942586 | orchestrator | 2026-02-08 06:25:02 | INFO  |  - q-server-resource-versions (vhost: /, messages: 0)
2026-02-08 06:25:02.942605 | orchestrator | 2026-02-08 06:25:02 | INFO  |  - q-server-resource-versions.testbed-node-0 (vhost: /, messages: 0)
2026-02-08 06:25:02.942616 | orchestrator | 2026-02-08 06:25:02 | INFO  |  - q-server-resource-versions.testbed-node-1 (vhost: /, messages: 0)
2026-02-08 06:25:02.942627 | orchestrator | 2026-02-08 06:25:02 | INFO  |  - q-server-resource-versions.testbed-node-2 (vhost: /, messages: 0)
2026-02-08 06:25:02.942638 | orchestrator | 2026-02-08 06:25:02 | INFO  |  - q-server-resource-versions_fanout_256a4c74522d42a1b012555d06318dc1 (vhost: /, messages: 0)
2026-02-08 06:25:02.942856 | orchestrator | 2026-02-08 06:25:02 | INFO  |  - q-server-resource-versions_fanout_3ebed68f71304ef19b2417d90ffbd00a (vhost: /, messages: 0)
2026-02-08 06:25:02.942875 | orchestrator | 2026-02-08 06:25:02 | INFO  |  - q-server-resource-versions_fanout_aad2a4d9955e49da99f0573d17fdc945 (vhost: /, messages: 0)
2026-02-08 06:25:02.942886 | orchestrator | 2026-02-08 06:25:02 | INFO  |  - q-server-resource-versions_fanout_b068f257ffdc400a9136ca5c0b7c4f06 (vhost: /, messages: 0)
2026-02-08 06:25:02.942898 | orchestrator | 2026-02-08 06:25:02 | INFO  |  - q-server-resource-versions_fanout_c4cfe6e5533e4c39a0b47ce93d22a54b (vhost: /, messages: 0)
2026-02-08 06:25:02.942909 | orchestrator | 2026-02-08 06:25:02 | INFO  |  - q-server-resource-versions_fanout_c5272d7c24194cf9aa972abed483749c (vhost: /, messages: 0)
2026-02-08 06:25:02.942920 | orchestrator | 2026-02-08 06:25:02 | INFO  |  - q-server-resource-versions_fanout_cc8e6dd3aa1246908db51c7da42f922f (vhost: /, messages: 0)
2026-02-08 06:25:02.942931 | orchestrator | 2026-02-08 06:25:02 | INFO  |  - q-server-resource-versions_fanout_d4419fa9172942e49b12ca4e94f9888c (vhost: /, messages: 0)
2026-02-08 06:25:02.943212 | orchestrator | 2026-02-08 06:25:02 | INFO  |  - q-server-resource-versions_fanout_ed249743f2de4809994bdb51d06d1876 (vhost: /, messages: 0)
2026-02-08 06:25:02.943231 | orchestrator | 2026-02-08 06:25:02 | INFO  |  - reply_0637a9c4ccf4421bb391ef0e324416e3 (vhost: /, messages: 0)
2026-02-08 06:25:02.943242 | orchestrator | 2026-02-08 06:25:02 | INFO  |  - reply_0b0605fca3f44c869ff29ca0b502ede6 (vhost: /, messages: 0)
2026-02-08 06:25:02.943253 | orchestrator | 2026-02-08 06:25:02 | INFO  |  - reply_1722bdfa84954695ad701dadea694877 (vhost: /, messages: 0)
2026-02-08 06:25:02.943264 | orchestrator | 2026-02-08 06:25:02 | INFO  |  - reply_2a5ae1685095435c88de3010bd583c0f (vhost: /, messages: 0)
2026-02-08 06:25:02.943487 | orchestrator | 2026-02-08 06:25:02 | INFO  |  - reply_2c77505375d0486997b979bf13b28458 (vhost: /, messages: 0)
2026-02-08 06:25:02.943506 | orchestrator | 2026-02-08 06:25:02 | INFO  |  - reply_2f5bf5d398bc42ae8803f377356708b1 (vhost: /, messages: 0)
2026-02-08 06:25:02.943518 | orchestrator | 2026-02-08 06:25:02 | INFO  |  - reply_328dde6c15ab488b8a3fc3259a99d419 (vhost: /, messages: 0)
2026-02-08 06:25:02.943529 | orchestrator | 2026-02-08 06:25:02 | INFO  |  - reply_3313b5c71afb4cff856707c0a03a85a3 (vhost: /, messages: 0)
2026-02-08 06:25:02.943728 | orchestrator | 2026-02-08 06:25:02 | INFO  |  - reply_45f9a9266fae43f182d866ad8d33be85 (vhost: /, messages: 0)
2026-02-08 06:25:02.943745 | orchestrator | 2026-02-08 06:25:02 | INFO  |  - reply_461b1cce35c8428cbf43231550816c97 (vhost: /, messages: 0)
2026-02-08 06:25:02.943755 | orchestrator | 2026-02-08 06:25:02 | INFO  |  - reply_5944412d539640fe9acbcf53f3b5eca8 (vhost: /, messages: 0)
2026-02-08 06:25:02.943766 | orchestrator | 2026-02-08 06:25:02 | INFO  |  - reply_9485bd29989245e3a2095c2d01f65080 (vhost: /, messages: 0)
2026-02-08 06:25:02.943777 | orchestrator | 2026-02-08 06:25:02 | INFO  |  - reply_a369a569182445b5ae0a5758ea3a8377 (vhost: /, messages: 0)
2026-02-08 06:25:02.944024 | orchestrator | 2026-02-08 06:25:02 | INFO  |  - reply_acff8695301b486c911fbd0fcc3401ec (vhost: /, messages: 0)
2026-02-08 06:25:02.944044 | orchestrator | 2026-02-08 06:25:02 | INFO  |  - reply_b343380cbc464d61a25f3a3c285f74b4 (vhost: /, messages: 0)
2026-02-08 06:25:02.944087 | orchestrator | 2026-02-08 06:25:02 | INFO  |  - reply_e379d8ddbb094cf1a095e206d0e394f6 (vhost: /, messages: 0)
2026-02-08 06:25:02.944099 | orchestrator | 2026-02-08 06:25:02 | INFO  |  - reply_efc3056784eb4323b62b55b3e54b8df8 (vhost: /, messages: 0)
2026-02-08 06:25:02.944109 | orchestrator | 2026-02-08 06:25:02 | INFO  |  - reply_f440442756234276bd5f438756f0a893 (vhost: /, messages: 0)
2026-02-08 06:25:02.944302 | orchestrator | 2026-02-08 06:25:02 | INFO  |  - reply_f9e91e4d902a4b07b6c4d43b3bf4ee85 (vhost: /, messages: 0)
2026-02-08 06:25:02.944322 | orchestrator | 2026-02-08 06:25:02 | INFO  |  - scheduler (vhost: /, messages: 0)
2026-02-08 06:25:02.944333 | orchestrator | 2026-02-08 06:25:02 | INFO  |  - scheduler.testbed-node-0 (vhost: /, messages: 0)
2026-02-08 06:25:02.944637 | orchestrator | 2026-02-08 06:25:02 | INFO  |  - scheduler.testbed-node-1 (vhost: /, messages: 0)
2026-02-08 06:25:02.944657 | orchestrator | 2026-02-08 06:25:02 | INFO  |  - scheduler.testbed-node-2 (vhost: /, messages: 0)
2026-02-08 06:25:02.944668 | orchestrator | 2026-02-08 06:25:02 | INFO  |  - scheduler_fanout_003a178621ed42b785aeb7244d51d9ec (vhost: /, messages: 0)
2026-02-08 06:25:02.944679 | orchestrator | 2026-02-08 06:25:02 | INFO  |  - scheduler_fanout_010beed9d2e74bf4bea090549bd42959 (vhost: /, messages: 0)
2026-02-08 06:25:02.944690 | orchestrator | 2026-02-08 06:25:02 | INFO  |  - scheduler_fanout_5ee36f3d9cf046d6806e76a5eea0e4e9 (vhost: /, messages: 0)
2026-02-08 06:25:02.944701 | orchestrator | 2026-02-08 06:25:02 | INFO  |  - scheduler_fanout_7957941ccd164410a3f652248c754814 (vhost: /, messages: 0)
2026-02-08 06:25:02.945220 | orchestrator | 2026-02-08 06:25:02 | INFO  |  - scheduler_fanout_ac8dcdda12d84da59fae9c8c1ebe04b9 (vhost: /, messages: 0)
2026-02-08 06:25:02.945245 | orchestrator | 2026-02-08 06:25:02 | INFO  |  - scheduler_fanout_b7c59b464c294505a9a00bcb8d0a78b7 (vhost: /, messages: 0)
2026-02-08 06:25:02.945257 | orchestrator | 2026-02-08 06:25:02 | INFO  |  - worker (vhost: /, messages: 0)
2026-02-08 06:25:02.945268 | orchestrator | 2026-02-08 06:25:02 | INFO  |  - worker.testbed-node-0 (vhost: /, messages: 0)
2026-02-08 06:25:02.945279 | orchestrator | 2026-02-08 06:25:02 | INFO  |  - worker.testbed-node-1 (vhost: /, messages: 0)
2026-02-08 06:25:02.945290 | orchestrator | 2026-02-08 06:25:02 | INFO  |  - worker.testbed-node-2 (vhost: /, messages: 0)
2026-02-08 06:25:02.945301 | orchestrator | 2026-02-08 06:25:02 | INFO  |  - worker_fanout_133c1c3c961a483589c8706b988ae935 (vhost: /, messages: 0)
2026-02-08 06:25:02.945312 | orchestrator | 2026-02-08 06:25:02 | INFO  |  - worker_fanout_9ca886b7d6b24685990d8781cecd066d (vhost: /, messages: 0)
2026-02-08 06:25:02.945329 | orchestrator | 2026-02-08 06:25:02 | INFO  |  - worker_fanout_bf71874e30d242578fc8ccabdb012b67 (vhost: /, messages: 0)
2026-02-08 06:25:02.945365 | orchestrator | 2026-02-08 06:25:02 | INFO  |  - worker_fanout_e27b0f5a558c447d98923eaf1bc241f3 (vhost: /, messages: 0)
2026-02-08 06:25:02.945385 | orchestrator | 2026-02-08 06:25:02 | INFO  |  - worker_fanout_fcbfff2895144817ad8fd609c520bf13 (vhost: /, messages: 0)
2026-02-08 06:25:03.260588 | orchestrator | + osism migrate rabbitmq3to4 list-exchanges
2026-02-08 06:25:05.262205 | orchestrator | usage: osism migrate rabbitmq3to4 [-h] [--server SERVER] [--dry-run]
2026-02-08 06:25:05.262286 | orchestrator | [--no-close-connections] [--quorum]
2026-02-08 06:25:05.262299 | orchestrator | [--vhost VHOST]
2026-02-08 06:25:05.262308 | orchestrator | [{list,delete,prepare,check}]
2026-02-08 06:25:05.262317 | orchestrator | [{aodh,barbican,ceilometer,cinder,designate,notifications,manager,magnum,manila,neutron,nova,octavia}]
2026-02-08 06:25:05.262327 | orchestrator | osism migrate rabbitmq3to4: error: argument command: invalid choice: 'list-exchanges' (choose from list, delete, prepare, check)
2026-02-08 06:25:06.012765 | orchestrator | ERROR
2026-02-08 06:25:06.012977 | orchestrator | {
2026-02-08 06:25:06.013012 | orchestrator | "delta": "1:18:06.550563",
2026-02-08 06:25:06.013036 | orchestrator | "end": "2026-02-08 06:25:05.598372",
2026-02-08 06:25:06.013057 | orchestrator | "msg": "non-zero return code",
2026-02-08 06:25:06.013075 | orchestrator | "rc": 2,
2026-02-08 06:25:06.013093 | orchestrator | "start": "2026-02-08 05:06:59.047809"
2026-02-08 06:25:06.013110 | orchestrator | } failure
2026-02-08 06:25:06.285894 |
2026-02-08 06:25:06.286020 | PLAY RECAP
2026-02-08 06:25:06.286076 | orchestrator | ok: 30 changed: 11 unreachable: 0 failed: 1 skipped: 6 rescued: 0 ignored: 0
2026-02-08 06:25:06.286100 |
2026-02-08 06:25:06.525926 | RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/upgrade-stable.yml@main]
2026-02-08 06:25:06.527069 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/post.yml@main]
2026-02-08 06:25:07.361107 |
2026-02-08 06:25:07.361273 | PLAY [Post output play]
2026-02-08 06:25:07.378710 |
2026-02-08 06:25:07.378879 | LOOP [stage-output : Register sources]
2026-02-08 06:25:07.449700 |
2026-02-08 06:25:07.450042 | TASK [stage-output : Check sudo]
2026-02-08 06:25:08.333217 | orchestrator | sudo: a password is required
2026-02-08 06:25:08.490790 | orchestrator | ok: Runtime: 0:00:00.017470
2026-02-08 06:25:08.507466 |
2026-02-08 06:25:08.507636 | LOOP [stage-output : Set source and destination for files and folders]
2026-02-08 06:25:08.547382 |
2026-02-08 06:25:08.547701 | TASK [stage-output : Build a list of source, dest dictionaries]
2026-02-08 06:25:08.625632 | orchestrator | ok
2026-02-08 06:25:08.634648 |
2026-02-08 06:25:08.634789 | LOOP [stage-output : Ensure target folders exist]
2026-02-08 06:25:09.106586 | orchestrator | ok: "docs"
2026-02-08 06:25:09.107011 |
2026-02-08 06:25:09.359961 | orchestrator | ok: "artifacts"
2026-02-08 06:25:09.604964 | orchestrator | ok: "logs"
2026-02-08 06:25:09.617361 |
2026-02-08 06:25:09.617532 | LOOP [stage-output : Copy files and folders to staging folder]
2026-02-08 06:25:09.652742 |
2026-02-08 06:25:09.652992 | TASK [stage-output : Make all log files readable]
2026-02-08 06:25:09.944078 | orchestrator | ok
2026-02-08 06:25:09.953178 |
2026-02-08 06:25:09.953308 | TASK [stage-output : Rename log files that match extensions_to_txt]
2026-02-08 06:25:09.987952 | orchestrator | skipping: Conditional result was False
2026-02-08 06:25:09.997455 |
2026-02-08 06:25:09.997573 | TASK [stage-output : Discover log files for compression]
2026-02-08 06:25:10.021409 | orchestrator | skipping: Conditional result was False
2026-02-08 06:25:10.037138 |
2026-02-08 06:25:10.037285 | LOOP [stage-output : Archive everything from logs]
2026-02-08 06:25:10.086194 |
2026-02-08 06:25:10.086399 | PLAY [Post cleanup play]
2026-02-08 06:25:10.096298 |
2026-02-08 06:25:10.096410 | TASK [Set cloud fact (Zuul deployment)]
2026-02-08 06:25:10.163050 | orchestrator | ok
2026-02-08 06:25:10.173923 |
2026-02-08 06:25:10.174035 | TASK [Set cloud fact (local deployment)]
2026-02-08 06:25:10.207856 | orchestrator | skipping: Conditional result was False
2026-02-08 06:25:10.223377 |
2026-02-08 06:25:10.223513 | TASK [Clean the cloud environment]
2026-02-08 06:25:10.839290 | orchestrator | 2026-02-08 06:25:10 - clean up servers
2026-02-08 06:25:11.708660 | orchestrator | 2026-02-08 06:25:11 - testbed-manager
2026-02-08 06:25:11.795592 | orchestrator | 2026-02-08 06:25:11 - testbed-node-1
2026-02-08 06:25:11.886807 | orchestrator | 2026-02-08 06:25:11 - testbed-node-3
2026-02-08 06:25:11.971914 | orchestrator | 2026-02-08 06:25:11 - testbed-node-4
2026-02-08 06:25:12.071137 | orchestrator | 2026-02-08 06:25:12 - testbed-node-5
2026-02-08 06:25:12.166742 | orchestrator | 2026-02-08 06:25:12 - testbed-node-0
2026-02-08 06:25:12.259836 | orchestrator | 2026-02-08 06:25:12 - testbed-node-2
2026-02-08 06:25:12.352721 | orchestrator | 2026-02-08 06:25:12 - clean up keypairs
2026-02-08 06:25:12.369047 | orchestrator | 2026-02-08 06:25:12 - testbed
2026-02-08 06:25:12.393799 | orchestrator | 2026-02-08 06:25:12 - wait for servers to be gone
2026-02-08 06:25:23.389793 | orchestrator | 2026-02-08 06:25:23 - clean up ports
2026-02-08 06:25:23.594918 | orchestrator | 2026-02-08 06:25:23 - 0d326b52-410c-4dad-b471-73f32f7ff302
2026-02-08 06:25:24.082408 | orchestrator | 2026-02-08 06:25:24 - 7584fcd7-1d04-4a73-ac6b-3021b0f4aa61
2026-02-08 06:25:24.368821 | orchestrator | 2026-02-08 06:25:24 - 78377196-be53-4ed6-8ce0-e05fe3d0ef20
2026-02-08 06:25:24.637796 | orchestrator | 2026-02-08 06:25:24 - 7b322e72-63b6-4d42-98cf-f26637b8bc02
2026-02-08 06:25:24.853965 | orchestrator | 2026-02-08 06:25:24 - 896138bb-184b-4f8e-8dd7-d6b2e33b1676
2026-02-08 06:25:25.059507 | orchestrator | 2026-02-08 06:25:25 - b84af7e9-2215-446f-8321-e319eed38701
2026-02-08 06:25:25.273503 | orchestrator | 2026-02-08 06:25:25 - c089bfed-18e5-4e2e-84c6-fada33c1c978
2026-02-08 06:25:25.484245 | orchestrator | 2026-02-08 06:25:25 - clean up volumes
2026-02-08 06:25:25.602295 | orchestrator | 2026-02-08 06:25:25 - testbed-volume-1-node-base
2026-02-08 06:25:25.639830 | orchestrator | 2026-02-08 06:25:25 - testbed-volume-3-node-base
2026-02-08 06:25:25.680012 | orchestrator | 2026-02-08 06:25:25 - testbed-volume-manager-base
2026-02-08 06:25:25.722171 | orchestrator | 2026-02-08 06:25:25 - testbed-volume-4-node-base
2026-02-08 06:25:25.763559 | orchestrator | 2026-02-08 06:25:25 - testbed-volume-0-node-base
2026-02-08 06:25:25.801684 | orchestrator | 2026-02-08 06:25:25 - testbed-volume-5-node-base
2026-02-08 06:25:25.841131 | orchestrator | 2026-02-08 06:25:25 - testbed-volume-2-node-base
2026-02-08 06:25:25.881357 | orchestrator | 2026-02-08 06:25:25 - testbed-volume-8-node-5
2026-02-08 06:25:25.922691 | orchestrator | 2026-02-08 06:25:25 - testbed-volume-6-node-3
2026-02-08 06:25:25.966413 | orchestrator | 2026-02-08 06:25:25 - testbed-volume-3-node-3
2026-02-08 06:25:26.008094 | orchestrator | 2026-02-08 06:25:26 - testbed-volume-7-node-4
2026-02-08 06:25:26.226229 | orchestrator | 2026-02-08 06:25:26 - testbed-volume-5-node-5
2026-02-08 06:25:26.267455 | orchestrator | 2026-02-08 06:25:26 - testbed-volume-2-node-5
2026-02-08 06:25:26.308733 | orchestrator | 2026-02-08 06:25:26 - testbed-volume-1-node-4
2026-02-08 06:25:26.349847 | orchestrator | 2026-02-08 06:25:26 - testbed-volume-4-node-4
2026-02-08 06:25:26.390560 | orchestrator | 2026-02-08 06:25:26 - testbed-volume-0-node-3
2026-02-08 06:25:26.431168 | orchestrator | 2026-02-08 06:25:26 - disconnect routers
2026-02-08 06:25:26.542786 | orchestrator | 2026-02-08 06:25:26 - testbed
2026-02-08 06:25:27.404752 | orchestrator | 2026-02-08 06:25:27 - clean up subnets
2026-02-08 06:25:27.446612 | orchestrator | 2026-02-08 06:25:27 - subnet-testbed-management
2026-02-08 06:25:27.610516 | orchestrator | 2026-02-08 06:25:27 - clean up networks
2026-02-08 06:25:27.759144 | orchestrator | 2026-02-08 06:25:27 - net-testbed-management
2026-02-08 06:25:28.047392 | orchestrator | 2026-02-08 06:25:28 - clean up security groups
2026-02-08 06:25:28.089998 | orchestrator | 2026-02-08 06:25:28 - testbed-node
2026-02-08 06:25:28.223486 | orchestrator | 2026-02-08 06:25:28 - testbed-management
2026-02-08 06:25:28.333542 | orchestrator | 2026-02-08 06:25:28 - clean up floating ips
2026-02-08 06:25:28.370404 | orchestrator | 2026-02-08 06:25:28 - 81.163.193.37
2026-02-08 06:25:28.725790 | orchestrator | 2026-02-08 06:25:28 - clean up routers
2026-02-08 06:25:28.848760 | orchestrator | 2026-02-08 06:25:28 - testbed
2026-02-08 06:25:30.285262 | orchestrator | ok: Runtime: 0:00:19.607694
2026-02-08 06:25:30.291043 |
2026-02-08 06:25:30.291228 | PLAY RECAP
2026-02-08 06:25:30.291381 | orchestrator | ok: 6 changed: 2 unreachable: 0 failed: 0 skipped: 7 rescued: 0 ignored: 0
2026-02-08 06:25:30.291461 |
2026-02-08 06:25:30.422536 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/post.yml@main]
2026-02-08 06:25:30.424302 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main]
2026-02-08 06:25:31.166426 |
2026-02-08 06:25:31.166588 | PLAY [Cleanup play]
2026-02-08 06:25:31.182414 |
2026-02-08 06:25:31.182549 | TASK [Set cloud fact (Zuul deployment)]
2026-02-08 06:25:31.237865 | orchestrator | ok
2026-02-08 06:25:31.246449 |
2026-02-08 06:25:31.246596 | TASK [Set cloud fact (local deployment)]
2026-02-08 06:25:31.281483 | orchestrator | skipping: Conditional result was False
2026-02-08 06:25:31.300017 |
2026-02-08 06:25:31.300202 | TASK [Clean the cloud environment]
2026-02-08 06:25:32.476920 | orchestrator | 2026-02-08 06:25:32 - clean up servers
2026-02-08 06:25:32.971661 | orchestrator | 2026-02-08 06:25:32 - clean up keypairs
2026-02-08 06:25:32.991197 | orchestrator | 2026-02-08 06:25:32 - wait for servers to be gone
2026-02-08 06:25:33.038165 | orchestrator | 2026-02-08 06:25:33 - clean up ports
2026-02-08 06:25:33.121027 | orchestrator | 2026-02-08 06:25:33 - clean up volumes
2026-02-08 06:25:33.185134 | orchestrator | 2026-02-08 06:25:33 - disconnect routers
2026-02-08 06:25:33.217542 | orchestrator | 2026-02-08 06:25:33 - clean up subnets
2026-02-08 06:25:33.236720 | orchestrator | 2026-02-08 06:25:33 - clean up networks
2026-02-08 06:25:33.393691 | orchestrator | 2026-02-08 06:25:33 - clean up security groups
2026-02-08 06:25:33.427156 | orchestrator | 2026-02-08 06:25:33 - clean up floating ips
2026-02-08 06:25:33.455961 | orchestrator | 2026-02-08 06:25:33 - clean up routers
2026-02-08 06:25:33.840562 | orchestrator | ok: Runtime: 0:00:01.393320
2026-02-08 06:25:33.844707 |
2026-02-08 06:25:33.844869 | PLAY RECAP
2026-02-08 06:25:33.844993 | orchestrator | ok: 2 changed: 1 unreachable: 0 failed: 0 skipped: 1 rescued: 0 ignored: 0
2026-02-08 06:25:33.845056 |
2026-02-08 06:25:33.975170 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main]
2026-02-08 06:25:33.976226 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main]
2026-02-08 06:25:34.703860 |
2026-02-08 06:25:34.704027 | PLAY [Base post-fetch]
2026-02-08 06:25:34.720072 |
2026-02-08 06:25:34.720215 | TASK [fetch-output : Set log path for multiple nodes]
2026-02-08 06:25:34.765538 | orchestrator | skipping: Conditional result was False
2026-02-08 06:25:34.772467 |
2026-02-08 06:25:34.772653 | TASK [fetch-output : Set log path for single node]
2026-02-08 06:25:34.821450 | orchestrator | ok
2026-02-08 06:25:34.830557 |
2026-02-08 06:25:34.830734 | LOOP [fetch-output : Ensure local output dirs]
2026-02-08 06:25:35.339156 | orchestrator -> localhost | ok: "/var/lib/zuul/builds/c9ca016b06f14a8483ea3b09e15b25d8/work/logs"
2026-02-08 06:25:35.601763 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/c9ca016b06f14a8483ea3b09e15b25d8/work/artifacts"
2026-02-08 06:25:35.874806 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/c9ca016b06f14a8483ea3b09e15b25d8/work/docs"
2026-02-08 06:25:35.894166 |
2026-02-08 06:25:35.894308 | LOOP [fetch-output : Collect logs, artifacts and docs]
2026-02-08 06:25:36.797674 | orchestrator | changed: .d..t...... ./
2026-02-08 06:25:36.798011 | orchestrator | changed: All items complete
2026-02-08 06:25:36.798070 |
2026-02-08 06:25:37.520504 | orchestrator | changed: .d..t...... ./
2026-02-08 06:25:38.232960 | orchestrator | changed: .d..t...... ./
2026-02-08 06:25:38.265870 |
2026-02-08 06:25:38.266004 | LOOP [merge-output-to-logs : Move artifacts and docs to logs dir]
2026-02-08 06:25:38.301793 | orchestrator | skipping: Conditional result was False
2026-02-08 06:25:38.304567 | orchestrator | skipping: Conditional result was False
2026-02-08 06:25:38.328206 |
2026-02-08 06:25:38.328314 | PLAY RECAP
2026-02-08 06:25:38.328388 | orchestrator | ok: 3 changed: 2 unreachable: 0 failed: 0 skipped: 2 rescued: 0 ignored: 0
2026-02-08 06:25:38.328426 |
2026-02-08 06:25:38.455515 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main]
2026-02-08 06:25:38.458003 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main]
2026-02-08 06:25:39.200854 |
2026-02-08 06:25:39.201025 | PLAY [Base post]
2026-02-08 06:25:39.216082 |
2026-02-08 06:25:39.216233 | TASK [remove-build-sshkey : Remove the build SSH key from all nodes]
2026-02-08 06:25:40.244811 | orchestrator | changed
2026-02-08 06:25:40.253444 |
2026-02-08 06:25:40.253558 | PLAY RECAP
2026-02-08 06:25:40.253648 | orchestrator | ok: 1 changed: 1 unreachable: 0 failed: 0 skipped: 0 rescued: 0 ignored: 0
2026-02-08 06:25:40.253719 |
2026-02-08 06:25:40.367705 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main]
2026-02-08 06:25:40.368730 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-logs.yaml@main]
2026-02-08 06:25:41.149555 |
2026-02-08 06:25:41.149739 | PLAY [Base post-logs]
2026-02-08 06:25:41.160750 |
2026-02-08 06:25:41.160888 | TASK [generate-zuul-manifest : Generate Zuul manifest]
2026-02-08 06:25:41.616574 | localhost | changed
2026-02-08 06:25:41.634036 |
2026-02-08 06:25:41.634211 | TASK [generate-zuul-manifest : Return Zuul manifest URL to Zuul]
2026-02-08 06:25:41.672768 | localhost | ok
2026-02-08 06:25:41.679434 |
2026-02-08 06:25:41.679659 | TASK [Set zuul-log-path fact]
2026-02-08 06:25:41.696683 | localhost | ok
2026-02-08 06:25:41.708106 |
2026-02-08 06:25:41.708226 | TASK [set-zuul-log-path-fact : Set log path for a build]
2026-02-08 06:25:41.744355 | localhost | ok
2026-02-08 06:25:41.749503 |
2026-02-08 06:25:41.749677 | TASK [upload-logs : Create log directories]
2026-02-08 06:25:42.270108 | localhost | changed
2026-02-08 06:25:42.276580 |
2026-02-08 06:25:42.276796 | TASK [upload-logs : Ensure logs are readable before uploading]
2026-02-08 06:25:42.793658 | localhost -> localhost | ok: Runtime: 0:00:00.007037
2026-02-08 06:25:42.798341 |
2026-02-08 06:25:42.798452 | TASK [upload-logs : Upload logs to log server]
2026-02-08 06:25:43.368476 | localhost | Output suppressed because no_log was given
2026-02-08 06:25:43.372235 |
2026-02-08 06:25:43.372406 | LOOP [upload-logs : Compress console log and json output]
2026-02-08 06:25:43.432068 | localhost | skipping: Conditional result was False
2026-02-08 06:25:43.437285 | localhost | skipping: Conditional result was False
2026-02-08 06:25:43.444269 |
2026-02-08 06:25:43.444453 | LOOP [upload-logs : Upload compressed console log and json output]
2026-02-08 06:25:43.505392 | localhost | skipping: Conditional result was False
2026-02-08 06:25:43.505995 |
2026-02-08 06:25:43.510091 | localhost | skipping: Conditional result was False
2026-02-08 06:25:43.517737 |
2026-02-08 06:25:43.517946 | LOOP [upload-logs : Upload console log and json output]
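
Editor's note on the failure: the run died because the script invoked `osism migrate rabbitmq3to4 list-exchanges`, but the CLI only accepts the subcommands `list`, `delete`, `prepare`, and `check` (rc=2, the conventional argparse usage-error exit code). The sketch below is a hypothetical reconstruction of that argument grammar from the usage text printed in the log, not the actual osism source; it reproduces both the accepted invocation and the rejection observed above.

```python
import argparse


def build_parser() -> argparse.ArgumentParser:
    """Hypothetical model of 'osism migrate rabbitmq3to4', built only
    from the usage text in the log above."""
    parser = argparse.ArgumentParser(prog="osism migrate rabbitmq3to4")
    parser.add_argument("--server")
    parser.add_argument("--dry-run", action="store_true")
    parser.add_argument("--no-close-connections", action="store_true")
    parser.add_argument("--quorum", action="store_true")
    parser.add_argument("--vhost")
    # Only these four commands are accepted; 'list-exchanges' is not one of them.
    parser.add_argument("command", nargs="?",
                        choices=["list", "delete", "prepare", "check"])
    parser.add_argument("project", nargs="?", choices=[
        "aodh", "barbican", "ceilometer", "cinder", "designate",
        "notifications", "manager", "magnum", "manila", "neutron",
        "nova", "octavia"])
    return parser


if __name__ == "__main__":
    parser = build_parser()
    ns = parser.parse_args(["list", "neutron"])  # a valid invocation
    print(ns.command, ns.project)
    try:
        parser.parse_args(["list-exchanges"])    # what the job actually ran
    except SystemExit as exc:
        print("rejected with exit code", exc.code)
```

Under this model the fix on the testbed side would be to call the `list` subcommand (optionally scoped to a project) instead of the nonexistent `list-exchanges`.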
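
The two "Clean the cloud environment" tasks above walk a fixed dependency order: servers (and keypairs) first, then ports and volumes once the servers are gone, router interfaces before subnets and networks, and routers last; a second pass on an already-empty project runs the same phases as no-ops. A minimal sketch of that phase ordering (a hypothetical driver, not the testbed's actual cleanup script; the `delete` callable stands in for real OpenStack API calls):

```python
from typing import Callable, Dict, Iterable, List

# Teardown phases in the exact order they appear in the log above.
CLEANUP_PHASES = [
    "clean up servers",
    "clean up keypairs",
    "wait for servers to be gone",
    "clean up ports",
    "clean up volumes",
    "disconnect routers",
    "clean up subnets",
    "clean up networks",
    "clean up security groups",
    "clean up floating ips",
    "clean up routers",
]


def clean_environment(resources: Dict[str, Iterable[str]],
                      delete: Callable[[str, str], None]) -> List[str]:
    """Run every phase in order; phases with nothing to delete are no-ops,
    which is why the second cleanup pass in the log finishes in ~1 second."""
    log: List[str] = []
    for phase in CLEANUP_PHASES:
        log.append(phase)
        for name in resources.get(phase, []):
            delete(phase, name)
            log.append(name)
    return log
```

For example, `clean_environment({"clean up servers": ["testbed-manager"]}, delete)` deletes the server during the first phase and then emits the remaining phase headers with no work to do.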